
Microsoft Presentation Translator: Real-Time Translated Subtitles for Your Presentations!

Microsoft Translator has released Presentation Translator, a Microsoft Garage project. This PowerPoint add-in displays translated subtitles live, directly on your presentation, in more than 60 supported languages. Beyond the subtitles themselves, audience members who join the conversation can pick a language of their own and follow the presentation on their phones, tablets, and computers.

Key features:

  • Live subtitles: speech recognition supports 10 languages (Arabic, Chinese (Mandarin), English, French, German, Italian, Japanese, Portuguese, Russian, and Spanish), and the output can be translated into more than 60 languages.
  • Customized speech recognition: presenters can customize the speech recognition engine with the vocabulary in their slides, to handle jargon, technical terms, and product or place names.
  • Slide translation: translate the text of PowerPoint slides while preserving the original formatting, including translation between left-to-right and right-to-left languages.
  • Audience participation: share a QR code or a five-letter conversation code so your audience can follow the presentation in their chosen language on their own devices.
  • Multilingual Q&A: audience members can ask questions by voice in 10 languages, or in writing in more than 60.
  • Inclusion through accessibility: helps audience members who are deaf or hard of hearing follow the presentation and join the discussion.

 

Watch the video below to learn how to use Presentation Translator.

 

Presentation Translator is powered by the real-time capabilities of Microsoft Translator, built on the Microsoft Translator APIs. As part of Microsoft Cognitive Services, Microsoft Translator lets businesses add end-to-end, real-time speech translation to their applications and services.
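The add-in itself uses the speech translation side of those APIs, which works over a different, socket-based protocol. As a small taste of what the Translator APIs look like to a developer, here is a minimal sketch calling the Text Translation REST API (v3.0) from PowerShell. The subscription key is a placeholder, and regional Cognitive Services keys may additionally require an Ocp-Apim-Subscription-Region header:

# Minimal sketch (not the add-in's own code): translate text with the
# Microsoft Translator Text API v3.0. The key below is a placeholder.
$subscriptionKey = "<your-translator-key>"
$headers = @{ "Ocp-Apim-Subscription-Key" = $subscriptionKey }

# The API expects a JSON array of text items; translate into two languages at once.
$body = ConvertTo-Json @(@{ Text = "Welcome to my presentation." })
$uri  = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=zh-Hant&to=ja"

$result = Invoke-RestMethod -Uri $uri -Method Post -Headers $headers `
    -ContentType "application/json" -Body $body
$result.translations | ForEach-Object { "$($_.to): $($_.text)" }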

To download Presentation Translator for PowerPoint (Windows only), visit aka.ms/presentationtranslator

To learn more about Presentation Translator, visit aka.ms/PresentationTranslatorLearnMore


Frequently Asked Questions Received by Support about Microsoft Forms

Hello, this is the Office Support team.

 

Microsoft Forms was previously available in preview only to Office 365 Education tenants (Education licenses), but starting July 25, 2017, it began rolling out to the other Office 365 tenants as well, such as Office 365 Enterprise and Office 365 Business.

 

With this rollout, we have been receiving many inquiries about Forms.
This article summarizes an overview of Forms and the questions we are asked most often.

 

Overview of Microsoft Forms

Forms is an application for creating surveys and quizzes and easily tallying the responses.
The answers returned by users can be exported to Excel with ease, so results are simple to review.
Also, because Forms is provided as a cloud-based web service, you can use it anytime, anywhere, over the internet.

 

The following pages provide an overview of Forms:

Title : What is Microsoft Forms?

URL   : https://support.office.com/ja-jp/forms

Title : Create a form with Microsoft Forms

URL   : https://support.office.com/ja-jp/article/4ffb64cc-7d5d-402f-b82e-b1d49418fd9d

 

 

Licenses that can use Microsoft Forms

At present, Forms is being rolled out gradually to tenants with the following licenses.

Note that, as mentioned above, Forms is still in the middle of a staged rollout, so even on a tenant with one of the licenses below, the [Forms] tile may not yet appear in the app launcher if the rollout has not reached that tenant.
In that case, you can still use Forms by going directly to https://forms.office.com/, so please give that a try.

 

<Licenses that can currently use Forms>

- Office 365 ProPlus

- Office 365 Enterprise E1

- Office 365 Enterprise E3

- Office 365 Enterprise E5

- Office 365 Enterprise K1

- Office 365 Business

- Office 365 Business Premium

- Office 365 Business Essentials

 

The following public documentation (an English page) also lists the main licenses that include Microsoft Forms.

Title : Frequently asked questions about Microsoft Forms (English page)

URL   : https://support.office.com/en-us/article/495c4242-6102-40a0-add8-df05ed6af61c

Relevant heading : Who can use Microsoft Forms?

 

 

Limits when using Microsoft Forms

At present, Forms is managed and operated in its own dedicated storage area, so Forms content does not consume the storage capacity of other services (such as SharePoint).
There is also currently no limit on the retention period.

However, the following limits currently apply:

 

1) Limit on the number of respondents per piece of content (survey or quiz)

2) A 4 KB limit on the number of questions and answer options per piece of content

3) Maximum number of pieces of content a single user can create

 

1) Limit on the number of respondents per piece of content (survey or quiz)

--------------

A form can receive up to 5,000 responses.
Details are documented on our official page below.
There is no way to raise this limit.

 

Title : Frequently asked questions about Microsoft Forms (Japanese page)

URL   : https://support.office.com/ja-jp/article/-495c4242-6102-40a0-add8-df05ed6af61c?ui=ja-JP&rs=ja-JP&ad=JP

Relevant heading : Is there a limit to the number of responses a form can receive?

 

2) A 4 KB limit on the number of questions and answer options per piece of content

--------------

There is an upper limit on the number of questions in a piece of Forms content and on the number of options that can be set within a choice question.
This limit is not affected by the size of images inserted into the content; it applies strictly to the questions and options.

 

An internal system constraint of 4 KB applies: as you add many questions and options, once the cumulative size of the various internal bookkeeping exceeds 4 KB, nothing further can be added.

 

Because this limit depends on internal processing behavior, we cannot state the maximum as a specific number of questions or options added.

 

As a rough guide, for example, either of the following in a single piece of content will reach this limit:

- Adding 66 or more answer options

- Adding 27 or more branching rules

 

Our development team plans to ship a fix that removes this 4 KB constraint, so the limit is expected to disappear in the future.

 

3) Maximum number of pieces of content a single user can create

--------------

A single user can have at most 200 pieces of Forms content at the same time.
To create more than 200, you must delete some existing content first.

 

 

Restricting the response scope (and the admin menu)

The set of users who can respond to a survey or quiz created in Forms is configured per piece of content by the user who created it.

Specifically, there are the following two settings:

 

<Response scope settings>

[Share] - [Send and collect responses]

  1. Anyone with the link can respond
  2. Only people in my organization can respond

 

These are the only two response-scope settings, so you cannot, for example, grant response permissions to individual users.
There is also no setting in the Office 365 admin menu to control this.

 

Using Forms from OneDrive for Business, Excel Online, and OneNote Online

With the release of Microsoft Forms, OneDrive for Business, Excel Online, and OneNote Online also integrate with Forms, and a Forms menu is added to each.

 

The following page describes this feature in detail, so please take a look.

Title : Create a form with Microsoft Forms

URL   : https://support.office.com/ja-jp/article/4ffb64cc-7d5d-402f-b82e-b1d49418fd9d

 

In environments where the OneDrive-Forms integration is available, the OneDrive [Excel survey] feature is replaced by the [Excel Forms] feature.

At the same time, users whose Forms license has been disabled are taken to a screen saying they do not have access when they select Excel Forms in OneDrive for Business, and they cannot use Excel Forms.

Because of this behavior, for users who were using the OneDrive [Excel survey] feature before the Forms release, you may need to plan accordingly, for example by enabling their Forms license.

 

Disabling Microsoft Forms

The Microsoft Forms product license is set to on (enabled) by default, and there is no way to change the default value to off (disabled).
For this reason, if you want to restrict the use of Forms, you need to disable each user's Forms license.

 

The specific steps for disabling the Forms license for the users in a tenant are described below.
If you need to target many users, see Method 2 or Method 3.

 

Even after disabling the target users with one of the methods below, we encourage administrators to try Forms for themselves and consider adopting it.

 

Method 1 : Configure users one at a time from the UI

Method 2 : Use PowerShell to disable Forms for all users in the tenant

Method 3 : Use the UI to disable Forms for all users in the tenant

 

 

Method 1) Configure users one at a time from the UI

--------------

You can disable Forms by setting the product license to off in the following Office 365 menu:

[Admin] - [Users] - [Active users] - <select a user> - Product licenses [Edit]

 

 

Method 2) Use PowerShell to disable Forms for all users in the tenant

--------------

If the tenant has many users, you can also use PowerShell to disable the Forms license for all users in one batch.

 

The following blog post walks through a PowerShell script that disables the Forms license for all users in a tenant.
See section "3. How to disable in bulk using PowerShell" on that page.

 

Title : How to enable or disable (suspend) Microsoft Forms in Office 365 in bulk using PowerShell

URL   : https://blogs.technet.microsoft.com/officesupportjp/2016/06/29/office-365-forms-enable-disable/

 

After the bulk disable completes, confirm in the admin center under [Product licenses] that [Microsoft Forms (Plan *)] has switched to off.
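As a rough illustration of what the bulk approach in the linked post looks like, here is a sketch using the MSOnline module. The SKU and service plan names are assumptions for an E3 tenant; check Get-MsolAccountSku for the actual names in your own tenant, and note that New-MsolLicenseOptions replaces the entire list of disabled plans for that SKU:

# Sketch: disable the Forms service plan for all users holding a given SKU.
# "contoso:ENTERPRISEPACK" (E3) and "FORMS_PLAN_E3" are examples only;
# verify both with Get-MsolAccountSku and its ServiceStatus property.
Connect-MsolService

$skuId   = "contoso:ENTERPRISEPACK"
$options = New-MsolLicenseOptions -AccountSkuId $skuId -DisabledPlans "FORMS_PLAN_E3"

Get-MsolUser -All |
    Where-Object { $_.Licenses.AccountSkuId -contains $skuId } |
    ForEach-Object {
        # Note: -LicenseOptions overwrites any previously disabled plans for this SKU,
        # so include those in -DisabledPlans as well if you have any.
        Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -LicenseOptions $options
    }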

 

Method 3) Use the UI to disable Forms for all users in the tenant

--------------

You can also disable the Forms license for all users in the tenant from the UI.
The detailed steps are described on the following page:

 

Title : Turn Microsoft Forms on or off

URL   : https://support.office.com/ja-jp/article/-8dcbf3ab-f2d6-459a-b8be-8d9892132a43

 

 

* Additional notes

--------------

Method 2, the PowerShell approach, disables only the Forms license for all users. In contrast, Method 3, the all-users approach from the UI, applies every product license configured on the [Replace existing products] screen.

For this reason, when using Method 3 from the UI, the other product licenses must already be set to the enabled/disabled states you intend.
If product licenses differ from user to user, consider changing licenses with PowerShell as in Method 2.

 

 

Common questions about Microsoft Forms

The following pages also cover common questions about Microsoft Forms:

 

Title : Frequently asked questions about Microsoft Forms (Japanese page)

URL   : https://support.office.com/ja-jp/article/-495c4242-6102-40a0-add8-df05ed6af61c?ui=ja-JP&rs=ja-JP&ad=JP

Title : Frequently asked questions about Microsoft Forms (English page)

URL   : https://support.office.com/en-us/article/495c4242-6102-40a0-add8-df05ed6af61c

 

– Notes

The content of this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

Office 2007 Retires! How Can You Keep Getting Updates and Sending and Receiving Email?

To make sure your Office software runs smoothly and works with the computer system you are using today, now is the time to switch or upgrade your software.

 

 

 

Upgrade to Office 365 now for a one-stop refresh of your email and Office applications.

 

 

From now until 23 June 2017, customers who order a specified amount of Office 365 products can receive the following offers:

 

Terms and conditions:

  • Offer runs from today until 26 September 2017
  • Offer is limited to new or add-on small and medium-sized business subscription customers
  • Offer applies only to companies with 250 people or fewer
  • Customers must purchase at least 10 user licenses and commit to 1 year of service
  • To receive the offer, customers must order Office 365 Business Premium or any Enterprise product through a Microsoft reseller, distributor, or retailer within the offer period

 

 

Terms and conditions:

  • Offer runs from today until 26 September 2017
  • Offer is limited to new or add-on small and medium-sized business subscription customers
  • Offer applies only to companies with 250 people or fewer
  • To receive the offer, customers must order 50 or more seats of Office 365 Business Essentials, Business, Business Premium, or any Enterprise product through a Microsoft reseller, distributor, or retailer within the offer period
  • Each customer may receive at most one 2-hour on-site training session

 

The offers above are provided by participating Microsoft resellers. For details, please call 2804 4469.

These 10 Startups Are Taking Their Business to the Next Level with the Microsoft Accelerator Berlin

An incredible idea: that is the starting point of most startups. But no matter how revolutionary that idea is, on its own it cannot guarantee success. Startups must do everything they can to turn the initial inspiration into a viable concept. Contacts, resources, expertise, training, leadership skills, and so on: all of this is necessary for an innovation to bear fruit. The Microsoft Accelerator program, which runs in seven countries, is designed to support startups with the know-how, network, tools, and resources they need to take their outstanding ideas to the next level.

Over the years, 647 startups have gone through the Accelerator program, collectively raising more than 3 billion US dollars. At their core, these are early-stage companies that want to scale every aspect of their business beyond just developing their product.

During the program and beyond, the Microsoft Accelerator acts as a catalyst: the participating startups are integrated long-term into Microsoft's worldwide network of customers, partners, mentors, and tech experts, ideal conditions for ambitious startups to expand and succeed.

Microsoft Accelerator Berlin
The Microsoft Accelerator in Berlin has evolved. This year the focus is on selected startups that have already closed a Series A funding round and are developing solutions in artificial intelligence (AI), machine learning, the Internet of Things (IoT), mixed reality, or blockchain.

It is important to emphasize that Microsoft's investment in these startups goes far beyond pure financing and technological support: Microsoft enables them to develop and scale their business, and supports them continuously as they mature and grow.

As one of the world's leading technology companies, Microsoft offers the startups a broad range of resources, available to the founders exactly when they need them in their development.

An intensive four-month program awaits the ten startups selected for the Berlin Accelerator, supporting them as they mature and scale. It includes:

  • Access to Microsoft's sales and marketing engine
  • Technical support: consulting and training
  • Leadership coaching
  • Credits for the use of Microsoft Azure

 

The new class
These ten startups are taking part in the Microsoft Accelerator Berlin this year:

BigchainDB
Imagine knowing the nature of everything that passes through your hands: the materials, the manufacturing and delivery process, and the way it has been used. Blockchain technology makes this level of authentication possible. BigchainDB builds on this technology and enables users to deploy data-driven blockchain applications. It extends the scalability and queryability advantages of modern databases with the blockchain properties of decentralization, tamper resistance, and native assets. BigchainDB targets companies that want to implement data-driven use cases without compromising on scalability, security, or performance.

Building Radar
Building Radar is the only real-time search engine for construction projects worldwide. Users save up to 80% of their search time and find construction projects several months earlier than their competitors. The service tracks progress from planning to execution, so the corresponding properties can be sold at the optimal time. Building Radar's customers come from many different sectors of the construction industry and range from DAX corporations to mid-sized companies, including world-renowned brands such as Viessmann and Vitra.

Crate.io
CrateDB is an open-source SQL database for companies that process machine data. Crate offers a unique combination of massive horizontal scaling, real-time processing, document-database features, and a standard SQL interface. With Crate, huge volumes of machine or IoT data can be processed in the cloud in real time. More than 800 clusters already run on Crate. The system enables use cases such as real-time backup for IoT (industrial and consumer), security, geodata, and log data. Customers in Silicon Valley (Skyhigh, Qualtrics, Stackrox, Fullpower, NBC, and others) as well as many European customers run their applications on CrateDB's distributed SQL system.

Giant Swarm
Giant Swarm is dedicated to spreading the kind of modern software infrastructure used by the likes of Google and Facebook to run agile, resilient, distributed systems at the appropriate scale. Giant Swarm is the best way to run microservices in Docker containers on Kubernetes clusters, on-premises and in the cloud. Giant Swarm makes sure the infrastructure is always operational and up to date, so customers can concentrate on their applications. Its customers include globally leading companies such as insurers, retailers, software vendors, and Fortune 500 telecommunications firms.

LeanIX
LeanIX helps companies take digital leadership within their industries. With its software-as-a-service application, companies make better and faster decisions about how to continuously optimize their IT architecture. LeanIX provides the 360-degree transparency needed to reduce the complexity of the IT landscape, while ensuring compliance and enabling growth through the adoption of state-of-the-art technologies. More than 80 leading brands, including adidas, DHL, Merck, and Zalando, trust LeanIX and the startup's innovative solution.

nyris
nyris develops an advanced image recognition application using the latest artificial intelligence solutions and deep learning frameworks. The company makes its technology available to retail and industrial customers as SaaS. Its unique approach delivers the fastest and most accurate results in 1D (codes), 2D (print2web), and 3D (real products) through a single API. It can search 500, 500,000, or 500 million images of products or objects without losing matching performance or speed (under one second).

So1
So1 offers a completely new approach to price promotions in the consumer goods industry: manufacturers and retailers can reach individual customers with individualized promotions and achieve an ROI of up to ten times the level usually possible. The technology behind the So1 system is based on state-of-the-art econometric methods and calculates the exact discount required to make potentially valuable customers switch brands or retailers. The price promotions So1 calculates in this way can then be delivered through high-reach mobile applications and digital print.

Styla
Styla is a next-generation CMS offered as SaaS. It rests on two key pillars of e-commerce: inspiring users and converting them into buyers. Styla offers revolutionary automation of content design and combines it seamlessly with shopping opportunities. Companies can thus focus their resources on telling compelling stories and winning their customers' hearts instead of dealing with programming and design.

Tellmeplus
Tellmeplus draws on years of research into artificial intelligence for predictive analytics and is a software vendor specializing in applying artificial intelligence to big data. Its Predictive Objects platform uses the latest advances in big data and machine learning to deliver Automated Embedded Artificial Intelligence. Tellmeplus technology applies artificial intelligence where decisions have to be made: in the objects themselves.

Trufa
Trufa enables business performance improvements worth billions: in profits, prices, and working capital. The Trufa Performance Management Machine detects correlations between business activities and financial results, identifies business drivers, and recommends actions. Trufa uses existing transaction data from enterprise resource planning (ERP) systems to automatically build leading indicators for any business situation, without hand-built models or data scientists. From the user's perspective, all of this happens on the side.


A post by Iskender Dirik
CEO in Residence, Microsoft Accelerator Berlin

Hiding Pages in the Settings App

With the arrival of Windows 10 1703, it is now possible to hide pages in the Settings app, just as you could previously with Control Panel.

This is configured through a new GPO available under:
Computer Configuration\Administrative Templates\Control Panel\Settings Page Visibility

The GPO offers two usage scenarios:

Hide: to specify the pages to hide
Showonly: to specify the pages to keep

Example 1: keep only the About page
Showonly:about
Result after gpupdate /force

Example 2: keep the About and VPN pages
showonly:about;network-vpn

Example 3: hide the "For developers" page
Hide:developers
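Outside of a domain GPO, you can apply the same policy locally. As a sketch, assuming this GPO writes the SettingsPageVisibility value under the Explorer policies key (worth verifying on your own build):

# Sketch: set the same policy locally in the registry (assumed GPO backing value).
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Same syntax as the GPO field: keep only the About and VPN pages.
Set-ItemProperty -Path $key -Name "SettingsPageVisibility" -Value "showonly:about;network-vpn"

# To test a page name before adding it to the policy, launch it directly:
Start-Process "ms-settings:about"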

To check the name of a page to configure, you can use the Run dialog or Microsoft Edge.

List of page names:
ms-settings:about
ms-settings:activation
ms-settings:appsfeatures
ms-settings:appsforwebsites
ms-settings:backup
ms-settings:batterysaver
ms-settings:bluetooth
ms-settings:colors
ms-settings:cortana
ms-settings:datausage
ms-settings:dateandtime
ms-settings:defaultapps
ms-settings:developers
ms-settings:deviceencryption
ms-settings:display
ms-settings:emailandaccounts
ms-settings:extras
ms-settings:findmydevice
ms-settings:lockscreen
ms-settings:maps
ms-settings:network-ethernet
ms-settings:network-mobilehotspot
ms-settings:network-proxy
ms-settings:network-vpn
ms-settings:network-directaccess
ms-settings:network-wifi
ms-settings:notifications
ms-settings:optionalfeatures
ms-settings:powersleep
ms-settings:printers
ms-settings:privacy
ms-settings:personalization
ms-settings:recovery
ms-settings:regionlanguage
ms-settings:storagesense
ms-settings:tabletmode
ms-settings:taskbar
ms-settings:themes
ms-settings:troubleshoot
ms-settings:typing
ms-settings:usb
ms-settings:windowsdefender
ms-settings:windowsinsider
ms-settings:windowsupdate
ms-settings:yourinfo
ms-settings:gaming-gamebar
ms-settings:gaming-gamedvr
ms-settings:gaming-broadcasting
ms-settings:gaming-gamemode

The complete list is available in the following article: https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app

IFA 2017 Talks: Thomas Kowollik Shows the Must-Haves for the Holiday Season

IFA 2017 is not only the world's leading trade show for consumer electronics and home appliances, but also the place to present new tech trends. In our Microsoft IFA Studio, we talked with various guests about exactly these trends. The result: four hours of live talk, in-depth information on the latest products, and countless impressions from Berlin's most important consumer trade show.

As Segment Lead for the Consumer and Devices business, Thomas Kowollik is responsible both for working with our OEM partners and for engaging with consumers. In this interview, he walks through Microsoft's various product highlights at IFA 2017.

Surface Pro, Laptop, and Studio, the Xbox One X, and the first Windows Mixed Reality headsets: the range of topics at this year's IFA is vast. In the interview at our Microsoft IFA Studio, however, Thomas Kowollik first picks out two personal highlights from the product lineup on show at our IFA booth this year: the Surface Studio and the Windows Mixed Reality headsets.

The former caused a stir even before IFA officially opened: Surface Studio was awarded the 'Goldener Computer' in the Design category by Computer BILD. Thomas Kowollik accepted the award on Microsoft's behalf and highlighted the many possibilities for creative work with the device.

The latter, Windows Mixed Reality, is not only one of the trend topics of IFA 2017 but also particularly significant for the collaboration with our OEM partners: Acer, ASUS, Lenovo, and Dell unveiled their first Windows Mixed Reality headsets at the show, which can be tried out both at their booths and at the Microsoft stand. Beyond that, for Thomas Kowollik the headsets are this year's must-have under the Christmas tree.

He also brought the Surface Laptop, one of the devices in the Surface family, to the interview. In the livestream he shows that it impresses with more than just its form factor: the Alcantara-covered keyboard makes for a pleasant typing feel, and the range of available colors allows for plenty of individuality.

Also in focus during the talk is the Windows 10 Fall Creators Update, announced in our keynote for October 17. Thomas Kowollik emphasizes above all the eye-tracking feature it brings, which allows people with the nerve disease ALS to control the mouse on screen and make inputs using only their eyes. This feature is another step toward making Windows accessible to everyone.


A post by Sydney Loerch
PR/Communications Intern

Office 365 Updates for August: Enriching Teamwork

(This article is a translation of New to Office 365 in August—enriching teamwork, posted to Office Blogs on August 31, 2017. For the latest information, please see the original article.)

This post is by Kirk Koenigsbauer, corporate vice president for the Office team.

Effective teamwork matters more than ever to success. This month's release is packed with features that help strengthen teamwork, including co-authoring in Excel, improvements to Microsoft Teams, and updates to the Yammer app. Here are the details.

Making teamwork in Excel more comfortable

As a major step toward smoother collaboration, co-authoring in Excel is now generally available. With this feature, anyone can edit spreadsheets stored in SharePoint Online, OneDrive, or OneDrive for Business at the same time, seeing who else is working in the workbook and where their changes are as you go.

We are also making AutoSave generally available to Office 365 users for Word, Excel, and PowerPoint files stored in OneDrive and SharePoint. Whether you are working alone or with others, your latest changes are saved to the cloud automatically, so you no longer need to press the [Save] button every time.

An animated image showing Excel co-authoring and AutoSave in action.

See where others are editing in your workbook

Microsoft Teams: strong support for students and teachers

In June we announced new Teams features that help drive classroom engagement, strengthen professional learning communities, and make communication across schools more effective. This month's updates make Teams even easier for students and teachers to use and help them accomplish more together.

OneNote Class Notebooks in Teams: You can now view OneNote Class Notebooks in Teams, making it easy for students and teachers to get to the same page. Other staff can also access OneNote Class Notebook and OneNote Staff Notebook settings from within the app to manage class content with ease.

More powerful assignments: Teachers can now add web links as reference material to the assignments (English) they distribute in Teams, making sure students can reach the external information they need for their work. Teachers can also import assignment grades into the school's LMS (learning management system) to manage grades in one place.

Stronger IT management: IT administrators can now specify access rights for third-party apps within Teams and enable or disable private chat, assignments, and more. This lets each school build a safe and secure learning environment tailored to its needs.

An animated image showing the use of OneNote class notebooks in teams. The user can be seen browsing the different sections and pages.

Supporting student-teacher collaboration with OneNote in Teams

Rich profiles in Outlook for Windows

In today's complex and fast-changing workplace, an organization's most important asset is its people. In March, we announced our vision (English) for intelligent profiles across Office 365 that strengthen relationships with colleagues, external contacts, and groups. Starting today, we are rolling out the Outlook for Windows experience redesigned around that vision. Powered by the Microsoft Graph, the new profiles give you easy access to important information such as files, conversations, and group memberships.

Image of the new people card displayed in Outlook for Windows. Shows contact information, organization and memberships for the selected person.

Energizing company-wide communication with Yammer

The Yammer app for iPad has been redesigned, improving the user experience with features such as universal search and automatic sign-in. Stay on top of company announcements and internal communities, and get to the information you need easily, even on the go.

In addition, a Yammer activity report has been added to the Office 365 usage reports, letting administrators see how the organization is connecting on Yammer. You can easily see how many communities exist and how active they are, which is useful for identifying best practices and spreading them across the organization.

An animated image showing the new Yammer app for iPad and Yammer groups activity report.

Driving teamwork across the company with the new Yammer app for iPad and activity reports

For more details about the August updates for Office 365, see the pages for Office on Windows desktop, Office for Mac, Office for Windows Mobile, Office for iPhone and iPad, and Office for Android. If you use Office 365 Home or Office 365 Personal, sign up for Office Insider to be among the first to try the latest and greatest Office productivity features. Commercial customers on the Current Channel and Deferred Channel can also try fully supported builds through First Release. This website explains in detail how to get the features introduced here.

Kirk Koenigsbauer

Availability of this month's updates is as follows:

  • Excel co-authoring: available to all Office 365 users on Windows.
  • AutoSave for Word, Excel, and PowerPoint: available to all Office 365 users on Windows.
  • OneNote Class Notebooks in Microsoft Teams: available to all Office 365 Education users on the Windows, Mac, and web clients.
  • Improved assignments and management features: available to all Office 365 Education users.
  • Rich profiles in Outlook: available to all users of commercial Office 365 plans on Windows.
  • The new Yammer app: available in the Apple App Store.
  • Yammer activity reports: available in the Office 365 admin center.

* The content of this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

Azure Automation: Retrying Runbook Operations with try / catch

Hello, this is Yamaguchi from the Azure support team.
In this article I would like to show how to retry an operation in an Azure Automation PowerShell Runbook using the try / catch construct.

Introduction


"We scheduled a Runbook that starts several virtual machines at once, and when we checked at the scheduled time, a single virtual machine had failed to start..." We have received many inquiries like this recently. Depending on the timing of access to Azure, errors originating on the Azure side can occasionally occur even though the cmdlets and parameters are specified correctly. Such unlucky errors have various causes, such as contention, but from the Azure Automation perspective, retry logic inside the Runbook is a very effective way to improve availability. With the retry logic introduced in this article, the operation is repeated until it succeeds, so errors like the one above can be avoided.

The retry logic uses the try / catch construct, which has been supported since PowerShell v2.0. For details on try / catch (/ finally), see resources such as the official Microsoft blog post An Introduction to Error Handling in PowerShell (English).

A retry template


Let's get straight to a PowerShell script template that implements the retry logic.
This script is generic: it can be used not only in Azure Automation but also in a normal PowerShell environment. The next section shows a sample PowerShell Runbook that incorporates this template, so if you want to use it as a Runbook, have a look there as well.

PowerShell script

# Retry interval in seconds
$RetryIntervalInSeconds = 10
# Number of retry attempts
$NumberOfRetryAttempts = 2
# Flag indicating whether the operation has completed
$CmdOk = $False

do {
    try {

        # Some code here.
        # ---------------

        # Operation completed
        $CmdOk = $True
    }
    catch {
        # Display the details of the error (exception)
        Write-Output "Exception Caught..."

        $ErrorMessage = $_.Exception.Message
        $StackTrace = $_.Exception.StackTrace
        Write-Output "Error Occurred: Message: $ErrorMessage, stack: $StackTrace."
        Write-Output "Retry attempts left: $NumberOfRetryAttempts"
        Write-Output "---------------------------------------------------------"

        # Decrement the retry counter and wait before the next attempt
        $NumberOfRetryAttempts--
        Start-Sleep -Seconds $RetryIntervalInSeconds
    }
} while (-not $CmdOk -and $NumberOfRetryAttempts -ge 0)

Overview

Three variables control the try / catch retry logic:

  • $RetryIntervalInSeconds specifies how many seconds to wait before the next attempt. This value is used by the Start-Sleep cmdlet.
  • $NumberOfRetryAttempts is the number of retry attempts to make.
  • $CmdOk is a Boolean used to decide when to stop; it becomes $True only when the operation you want to run has completed.

Then place the script code you actually want to run in the following spot:

# Some code here.
# ---------------
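For the VM-start scenario from the introduction, the slot could be filled roughly like this. This is a sketch assuming the AzureRM module and that the Runbook has already signed in to Azure, for example with a Run As connection; the resource group and VM names are placeholders:

# Inside the try block: attempt to start the VM (names are placeholders).
# -ErrorAction Stop turns a non-terminating cmdlet error into a terminating one,
# so that catch {} actually sees failures and the retry loop kicks in.
Start-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyVM" -ErrorAction Stop
Write-Output "VM started successfully."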

An example Runbook with the retry logic


I created a sample that incorporates the template from the section above into an actual Runbook. I hope you find it helpful.

RetrySample

How to import it

  1. Sign in to the Azure portal and open the [Runbooks] blade of your Azure Automation account.
  2. Click [Add a runbook].
  3. Import the file above via [Import an existing runbook].

What if it still fails?


If errors keep occurring even after adding the retry logic above, please feel free to contact Azure support, no matter how trivial the issue may seem.
Providing the following information with your inquiry will help us investigate smoothly. Thank you for your cooperation.

  1. The job ID
  2. The error details
    (You can check both of the above from the [Automation accounts] blade >> select your account >> the [Jobs] blade, with the job selected.)
  3. The Runbook itself
    (the script code for a PowerShell Runbook, a screen capture for a graphical Runbook, and so on)

The content of this article (including attachments and links) is current as of the date of writing and is subject to change without notice.


The HoloLens and the Future of Crime Scene Investigation

By Asavin Wattanajantra

In the 2010 video game Heavy Rain, an FBI agent called Norman Jayden hunts a killer with the help of a tool called an 'Added Reality Interface': a pair of augmented reality glasses and a glove that allow him to reconstruct crime scenes and process evidence.

At the time, it seemed like we were many years away from seeing technology like this, as we were still getting used to smartphone-type technology. But with the Microsoft HoloLens, we might see these types of tools used in law enforcement sooner than we think.

One of Microsoft's partner agencies, Black Marble, has been working on a proof-of-concept for the HoloLens that allows content to be captured at the scene of a crime. Built on its Universal Windows Platform (UWP) application tuServ, the Scene of Crime Application lets users explore a crime scene, placing virtual markers and gathering multimedia evidence without disturbing or tainting it.

It can even work long after a crime scene has been cleared. Through the Scene of Crime Application, investigators can return virtually to the scene of the crime, viewing the virtual markers that were placed as well as the evidence already gathered. Everything sits where it was, allowing an investigator to understand the crime scene as if it had never been cleaned up. Media can even be captured, and police officers can play back footage at a police station.

HoloLens Command and Control

Black Marble has also been working on running tuServ on the HoloLens as a portable command and control unit. The augmented reality glasses can be used to view active officers and incidents in real time on a map interface, giving the wearer an overview of everything going on in an area. Through the Command and Control app, officers can assign each other to incidents, communicate with each other on the ground, and build a live picture of what's happening in real time.

Nick Lyall of Bedfordshire Police, which tested the applications, said, “As a public order and firearms commander I can say that without doubt the use of HoloLens, through tuServ, will revolutionise policing for years to come.

“As a detective I can also say that its ability to scan crime scenes and create a mapped 3D version will allow for a reduction in cross-contamination issues and allow for investigators to visualise in real time the scenes of major crime.”

Black Marble has been working on Windows apps with Microsoft for some time, and the HoloLens apps were possible thanks to the agency's earlier work building the Windows application tuServ. The aim of tuServ was to reduce dependence on paper, as the app can run on a device such as a mobile phone or tablet carried by officers in the field, as well as by staff at a police station.

HoloLens and UWP

Any developer using the UWP can create a HoloLens app. Whether they are looking to create a game, consumer or business app, they can create one universal app which targets a family of Windows devices.

This ability is at the heart of what makes tuServ possible. The Black Marble team develop for one codebase, for any Windows device they choose. For police using tuServ, this means an officer on a smartphone has the same experience as one on a tablet, so they won't run into the problem of out-of-date software. Working with UWP also means that developers can make use of Cortana integration and other useful features, such as translation and facial recognition.

If you think HoloLens development is for you, there are many resources available, including documentation, step-by-step tutorials, and case studies.

---

For more information on the HoloLens, including in-depth development guides and tutorials, visit the developers section of the official HoloLens website.

NativeCountersCollection.ps1 – The story of long running script

Hi SCOM and IT folks,

I want to share something interesting about the Performance Collection done by the Microsoft Exchange Server 2013 Management Pack.

At one of my customers, we had lots of "Operations Manager failed to start a process" alerts coming randomly from different Exchange servers. This is a generic alert indicating that a given script was killed by SCOM itself because it ran over the configured timeout. Even if it is not an issue per se, a high recurrence of this alert or a high Repeat Count value must not be underestimated, since it means that monitoring is not being carried out as it should.

As you can see from the screenshot, the failing script was "NativeCountersCollection.ps1" (used by nearly 46 different rules that collect performance data for reporting purposes), which ran past the 600 seconds.

For those of you who are not familiar with the Exchange Management Pack, and in particular with this script: it retrieves the configured Exchange counters (for all roles and all instances, based on the installed culture) and performs a point-in-time collection using the Get-Counter PowerShell cmdlet. It is normally configured to run every 900 seconds with a timeout value of 300 seconds.

Since my customer's environment is a big one, it seemed reasonable for the script to run longer than 300 seconds. So I started troubleshooting by increasing the timeout value to 400, then 500, and then 600. At that point, even considering the size of the environment, I decided it was not acceptable for a script that collects point-in-time performance counter values to run for more than 10 minutes.

I went ahead with the troubleshooting by running the script manually (and outside of SCOM) on the impacted servers, taking note of the start and finish times. No doubt about it: the script took around 10 minutes, and sometimes even more. Together with the customer, and assuming it was not a capacity or Exchange-related problem since there were neither clear symptoms of CPU, memory, or disk pressure nor evident leaks, we decided to investigate the server configuration. We did some checks and, surprisingly, found out that the power plan was configured to "Balanced (recommended)".
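If you want to take the same kind of measurement yourself, here is a quick sketch. The counters below are generic examples only; the management pack script collects the Exchange counters configured for it:

# Sketch: time a point-in-time Get-Counter collection, as a rough stand-in
# for running NativeCountersCollection.ps1 manually.
$counters = @(
    "\Processor(_Total)\% Processor Time",   # example counters only; the MP
    "\Memory\Available MBytes"               # collects Exchange-specific ones
)
$elapsed = Measure-Command { Get-Counter -Counter $counters }
Write-Output ("Collection took {0:N1} seconds" -f $elapsed.TotalSeconds)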

To prove that this setting could be the root cause, we changed it to "High Performance" on one server and tested the script once again. The script execution time went down from the previous 9-10 minutes to roughly 3-4. Yes, you got it right: 60-70% faster. We repeated the test on 3 other servers with the same result, so the decision was made: change the Power Options setting to use the "High Performance" power plan.

Setting the power plan to High Performance does not negatively impact overall system performance, so you can set it on every server. More information about the performance issue related to the "Balanced (recommended)" power plan can be found in the article Slow Performance on Windows Server when using the "Balanced" Power Plan at https://support.microsoft.com/en-us/help/2207548/slow-performance-on-windows-server-when-using-the-balanced-power-plan

How can you do it at scale? Well, there are different options:

  1. Manually, on selected servers only
  2. Using a PowerShell script on selected servers (find a sample for Exchange servers attached below, and a minimal sketch after this list)
  3. On all servers, using a Group Policy
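As a minimal sketch of option 2 for a single server (8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c is the well-known GUID of the built-in High Performance plan; the attached sample does more, such as targeting Exchange servers):

# Sketch: switch the local server to the built-in "High Performance" power plan.
$highPerfGuid = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

Write-Output "Before: $(powercfg /getactivescheme)"
powercfg /setactive $highPerfGuid
Write-Output "After:  $(powercfg /getactivescheme)"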

I leave the decision on how to change this setting up to you. Nobody knows your infrastructure better than you do.

Hope that helps.

Thanks!

SettingPowerPlanToHighPerf.ps1.zip

Fixes or workarounds for recent issues in Access

I wanted to let everyone know about a new content page that highlights recent issues in Access. Going forward, we will use this page, rather than the support blog, for trending issues.

Please check it out here:

Fixes or workarounds for recent issues in Access

You may also navigate to the page from https://support.office.com by locating the Troubleshooting section and clicking Recent Office fixes or workarounds.

Microsoft Premier Workshop: Exchange Server High Availability

Description
The four-day Exchange Server High Availability workshop gives participants the skills they need to set up and configure Exchange Server 2016 in a highly available infrastructure.
Alongside a theoretical introduction to the technical fundamentals, hands-on exercises let participants apply the acquired knowledge in practice.

Agenda
Module 1: High Availability Overview:
This module provides an overview of high availability concepts and Exchange 2016 high availability features.

Module 2: Understanding the Client Access Server Role in High Availability:
This module covers the architecture components required to achieve high availability for the Client Access role in Exchange 2016, and describes load balancing options and client connectivity.

Module 3: Understanding Transport High Availability:
This module covers the architecture of the transport services and the different high availability features (throttling and resource management, load balancing and fault tolerance options with message routing, options for third-party servers and devices, Shadow Redundancy, Safety Net).

Module 4: Understanding Mailbox High Availability:
This module provides information about the mailbox high availability features, including concepts around continuous mailbox availability and Database Availability Groups.

Module 5: Managed Availability:
This module covers the Managed Availability feature: the integration of built-in monitoring and recovery actions with the Exchange high availability platform.

Module 6: Planning, Deployment and Management of Exchange 2016 High Availability Features:
This module covers the planning, deployment, and management tasks needed to successfully implement a Database Availability Group (DAG) with Exchange 2016, and provides an understanding of various failover and switchover scenarios, Exchange Native Data Protection and the Preferred Architecture, along with corresponding troubleshooting options and recommendations.

Target audience
IT staff responsible for the design, deployment, and operation of an Exchange Server environment.

Level 300
(Level scale: 100 = strategic / 200 = technical overview / 300 = deep subject-matter knowledge / 400 = expert technical knowledge)

Registration
To register, please contact your Microsoft Technical Account Manager directly, or visit us on the web at Microsoft Premier Education, where you will find an overview of all open workshops and can register right away. We use your data solely to register you for the workshop.

5-minute guide to getting qualified registrations for your next event

Events and webinars are great for engaging customers in your industry. But often, prospects who initially show interest become disengaged before the big day. Why? Because they aren't qualified event registrations.

To change this, you need to focus on attracting people who are genuinely interested in your services and point of view. Here are a few tips to get the right people talking about your events.

1. Target the right audience

43% of event attendees don't feel events marketed to them are relevant to their industry - Eventbrite.
It may seem obvious, but you need to make sure you're only targeting prospects who are likely to become future customers. Everyone else is irrelevant. For example, if you're a Systems Integrator (SI) attempting to connect enterprise-level businesses with Microsoft Azure, you want to avoid targeting SMEs in your event promotion.
Proper segmentation allows you to tailor events towards specific buyer personas. Find out what industry challenges your target audience face and run events that will address these issues. At this stage, you're looking to engage prospects with your services, not directly sell to them. The aim is to generate new leads for your marketing team that will eventually turn into sales.
Event-related content
Producing content is a great way to direct qualified prospects to your event. You could:
  • Create an engaging press release that introduces your event guests.
  • Write articles that relate to your keynote topic and link to your event sign up page with CTAs.
  • Use video marketing to produce a highlight reel from previous events.
Whichever method you choose, make sure it is clear, concise and relevant to the audience you're targeting.

2. Promote your event in the right places

Once you've created your persona-specific content, it's a good idea to promote it on the platforms your target audience use regularly. The more relevant exposure you gain for your event, the more likely you are to acquire qualified event registrations. Use the following methods to attract the right people:
Social media. Social media is a ready-made promotional tool. Use it to engage an audience that already has an interest in your brand. Images, videos and personalised messaging will help drive the right people to your sign up page.
Email. Email is another useful way of engaging those already on your radar. These prospects are already more qualified than most because they have willingly offered their contact details. Keep copy light and engaging, focusing on the value your event can offer their business.
Influencers. Is there someone in your niche who already commands the audience you're after? Reach out to these influencers via email or social media and find out whether they'll promote your event.
Pay-per-click advertising. Pay-per-click advertising (PPC) is ideal for quick wins such as event registrations. Platforms such as LinkedIn enable you to target specific demographics, only charging you for the clicks you receive. Read our 5-minute guide to effective PPC on LinkedIn to find out how you can optimise your adverts for improved ROI.

3. Optimise your sign-up page

Congratulations! You've succeeded in driving traffic to your sign-up page. Now you need to ensure your prospects know exactly what they're signing up for. Optimising the content on this page can improve your sign-up rate and ensure you receive as many qualified event registrations as possible.
Here's a quick checklist of things you'll need:
  • Clear and concise text - reiterate the main benefits of your event and what value your attendees can expect to draw from it.
  • Relevant, personalised information - make it clear who this event is for. It's important to remind people who you're targeting and why this event is exclusive to their challenges and objectives.
  • Appealing visuals - show people enjoying previous events. Images help sell the event to interested prospects.
  • Obvious contact details - many people will avoid signing up if they don't feel they can get in touch with you after. Make contact details easily accessible.
Top tip: make sure your sign-up page is mobile-friendly. You will lose a lot of potential prospects if your site is slow to load or difficult to use.

4. Don't forget to confirm and follow up

Your confirmation page/email should include the same elements. This is also a great place to answer FAQs and include details that weren't necessary for guests pre-registration. Again, recap the value of the event and be as descriptive as possible - if attendees feel underprepared, they are less likely to show.
Launch an email campaign in the days leading up to the event that reminds attendees to participate and keeps them engaged. A countdown on social media is also a good way to sustain audience interest (just make sure you don't overdo it).

5. Turn excited attendees into excited customers

Your marketing efforts shouldn't end when the event does. Follow up with your attendees to turn qualified registrations into qualified leads. A personalised email can lead your guests to the next stage of the purchase journey and inspire them to form a lasting relationship with your company.
If you've followed the best practices above and staged a relevant and engaging marketing event, you should soon see interest in your brand soar.
For more information on how to upgrade your marketing strategies, check out our Microsoft Partner Transformation Toolkit.

 

Monitoring temperature sensors in remote locations with Twilio, Azure Automation and Log Analytics – Part 1

This blog series is based on a private project, not customer work. But it may show you how easy it is to solve real-world problems with Azure services.
As a short teaser, this is my current custom OMS solution dashboard that I will explain throughout this blog series:

  • Part 1 describes the initial situation and my decision criteria for the devices, tools, and services I used to build the solution.
  • Part 2 describes the overall technical architecture, some technical details, and some lessons learned.
  • Part 3 describes how I created the dashboard tiles and the alerting based on the data I collected.

If you are interested in neither the background of my solution nor the technical details and just want to see the solution at work, please jump straight to part 3 🙂

My initial situation

I have several remote locations (storage or cellar rooms) that I need to monitor regarding temperature, power loss/outage and humidity (optional).

Objectives for my solution

Specifically, I wanted to achieve these objectives:

  • Get an alert as quickly as possible when the room temperature falls out of a defined range
  • Get an alert as quickly as possible when there is a power outage in one of the locations
  • Get an overview (dashboard) of the current state of all locations from wherever I might be
  • Get an alert whenever one of the monitoring devices in a location dies (malfunction or other issue)
  • [Optional] Track the temperature on a very basic level
  • [Optional] Track and alert on the humidity in the room
  • Spend as little time and as little money as possible 🙂

Sensor devices

Some guys will say: this is a perfect job for a Raspberry Pi, isn't it? Unfortunately, I don't have a clue about the Pi, and I do not have any kind of internet or Wi-Fi connectivity at the remote locations. Just some very basic 2G connectivity with rather poor reception (cellars, you know...).
Because I did not want to re-invent the wheel, I searched for temperature monitoring sensors with GSM and SMS capabilities. Finally I came across a device called the DRH-3015 Temperature Guard. It has an external temperature sensor and can be programmed and queried via SMS text messages:

I bought a couple of these devices. They work fine, sending text messages in case of a power outage (there is a built-in battery backup module) or if the temperature falls out of the defined range. Unfortunately, this device cannot send status messages at a defined, regular interval. You always have to trigger the status message by sending a text message from your mobile phone. So for every measured value you have to pay twice: one trigger message and one message with the value as the answer!

I hoped that more intelligent devices were available, and after a while I found another device called the KKMoon RTU5023 (available from a large US/international online retailer).

It provides some very nice features in addition to those of the DRH-3015:

  • External antenna connector for better 2G quality of reception
  • Can monitor both temperature and humidity
  • Can send status messages (reports) on a scheduled basis
  • You can call the device from an authorized mobile phone number. The device will not answer the call but cancels it immediately, so the call is free of charge. It will, however, trigger a status message sent to the caller. So querying the status of the device costs you just one text message instead of two!

Currently I have three DRH-3015 and three RTU5023 devices in place across four different locations.

Designing the solution

Now that I had my devices, how could I achieve my objectives? I quickly realized that OMS Log Analytics is the perfect place for collecting all the sensor information, creating nice dashboards, and sending alerts based on the collected data. But how do I get the data into OMS?
Yes, I know: had I used a Raspberry Pi, I would have been able to send the sensor information straight up to OMS via our REST API.
But as I just have devices that send text messages, I needed to find a way to:

  • Send text messages to the device automatically to trigger a status message (for the DRH-3015 devices)
  • Receive text messages and transfer them into a custom OMS log record (for out-of-band messages from both device types and periodic status messages from RTU5023 devices)
  • Convert the text message into a standardized custom OMS log record

While playing a bit with the devices, I also quickly noticed that once I somehow manage to receive these text messages automatically, I have to transform and standardize them. And this can be challenging:
Unfortunately, the devices use different formats for regular status messages and out-of-band alert messages. Even devices of the same type can use different formats depending on the firmware:

Receiving, parsing, and extracting text is a perfect job for a PowerShell script! And running PowerShell scripts automatically in the cloud is exactly what Azure Automation runbooks offer! So I will need at least one Azure Automation runbook.
But how do I trigger and receive text messages? Searching for the terms “azure automation text message” quickly led me to a service called Twilio. Twilio offers virtual mobile phone numbers in almost any country and a REST API for sending and receiving text messages via this specific phone number.

Based on these thoughts and findings, this is my plan for the technical solution and the upcoming next steps:

  • Trigger and receive text messages via Twilio
  • Get data from Twilio with an Azure Automation runbook
  • Send modified data to a custom log in OMS Log Analytics via an Azure Automation runbook
  • Build a dashboard based on the custom log records in OMS
  • Create e-mail alerts based on the custom log records in OMS

I will describe the overall architecture and some technical details in part two of this series.

Monitoring temperature sensors in remote locations with Twilio, Azure Automation and Log Analytics – Part 2

This is part two of my little three-part series on how to monitor sensors in remote locations with Twilio, Azure Automation, and OMS. It describes the overall solution architecture, some technical challenges I faced, the solutions I found, and the lessons I learned.
For the background of this blog series, please read part one. If you just want to see the result, please go straight to part three.

Overview of the solution architecture

Based on the background and ideas from part 1, this is an overview of the whole solution architecture that I want to create:

Handling text messages with Twilio

Let's start with the text message handling. You have to subscribe to the Twilio service to use it. Once you have a subscription, you have to buy an SMS-capable mobile phone number. You will also receive an AccountSID and an AuthToken to authenticate against the Twilio service.

Sending text messages

OK, how can we send a text message to an arbitrary recipient? Twilio provides a REST API for this task and documents it very well, which makes translating it into PowerShell quite easy:

function send-twiliosms
{
    param($RecipientNumber,
            $SMSText  )

    $TwilioSID = Get-AutomationVariable -Name "DirkBri-Twilio-AccountSID"
    $TwilioToken = Get-AutomationVariable -Name "DirkBri-Twilio-AuthToken"
    $TwilioNumber = Get-AutomationVariable -Name "DirkBri-Twilio-Number"

    $secureAuthToken = ConvertTo-SecureString $TwilioToken -AsPlainText -force
    $TwilioCredential = New-Object System.Management.Automation.PSCredential($TwilioSID,$secureAuthToken)  

    $TwilioMessageBody = @{
       From = $TwilioNumber
       To = $RecipientNumber
       Body = $SMSText
    }

    $TwilioApiEndpoint= "https://api.twilio.com/2010-04-01/Accounts/$TwilioSID/Messages.json"
    $Result = Invoke-RestMethod -Uri $TwilioApiEndpoint -Body $TwilioMessageBody  -Credential $TwilioCredential -Method "POST" -ContentType "application/x-www-form-urlencoded"  

    return $Result
}

$Result = send-twiliosms -RecipientNumber $Device -SMSText $dicDevicesToTrigger.item($Device)
write-host $Result

As you can see, I have stored my number, the AccountSID, and the AuthToken, along with my OMS workspace information, in (encrypted) Azure Automation variables for better and safer usage.

Receiving text messages

At the beginning of the project I was worried about how I could receive the text messages sent by the devices. Would I have to write some kind of polling mechanism? But then I checked the capabilities of my Twilio service and noticed this:



Twilio can call a webhook once a message arrives! Yessss! And Azure Automation runbooks can be triggered by a webhook 🙂
So receiving text messages is the easiest part. Twilio calls the Azure runbook by invoking the webhook and passes the text message as a JSON structure within the webhook payload. The RequestBody property contains the text message information:
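In the receiving runbook, that looks roughly like this skeleton. The property names inside the request body are my assumptions; inspect a real payload to confirm what Twilio posts to your webhook:

# Sketch: skeleton of the webhook-triggered runbook receiving the Twilio data.
param (
    [object]$WebhookData   # filled in automatically when the webhook fires
)

# In my setup the RequestBody arrives as JSON.
$sms = $WebhookData.RequestBody | ConvertFrom-Json

# Property names below are assumptions; check an actual payload.
Write-Output ("Received SMS {0} from {1}: {2}" -f $sms.SmsSid, $sms.From, $sms.Body)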

 

Azure Runbooks

I decided to create two runbooks:

  • One runbook that actively triggers the DRH-3015 devices to send their status message
  • One runbook to receive all text messages, parse them and create an OMS custom log record out of it

Trigger text message runbook

The trigger runbook consists mainly of the send-twiliosms function explained above. It gets triggered once per day (or as often as you like, and are willing to pay for the text messages) and sends a trigger to all DRH-3015 devices.

Receive and process text message runbook

This runbook is a bit more complex and performs these tasks:

  • Converts the URL-encoded webhook data from JSON into a PowerShell object with multiple properties such as sender, message body, SMS ID, etc.
  • Checks whether the sender is valid and approved. The runbook will not accept data from unknown numbers (devices), to prevent spam or malicious data.
  • Parses the different text message formats and transforms them into a single, standardized hash table.
    If for any reason a device sends a message that cannot be transformed properly (e.g. due to an unknown format), this is treated as an alert and the properties of the record are set accordingly.
    This is an important fallback in case my script has a flaw and cannot handle some specific kind of text message correctly.
  • Converts the hash table into a JSON object and sends it to my OMS workspace using the REST API (see the sketch after this list).
    I use a custom OMS log called DirkBriClimateSensorDataV2_CL.
    You can find detailed information on how to send data to OMS via PowerShell and the REST API here and here.
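For that last step, the documented Log Analytics HTTP Data Collector API pattern looks roughly like this condensed sketch. The Automation variable names are mine, and $record stands for the standardized hash table built in the previous step:

# Sketch: post one JSON record to a Log Analytics custom log (Data Collector API).
$workspaceId  = Get-AutomationVariable -Name "DirkBri-OMS-WorkspaceId"
$workspaceKey = Get-AutomationVariable -Name "DirkBri-OMS-WorkspaceKey"
$logType      = "DirkBriClimateSensorDataV2"          # Log Analytics appends _CL
$body         = $record | ConvertTo-Json              # $record: standardized hash table
$bodyBytes    = [Text.Encoding]::UTF8.GetBytes($body)

# Build the SharedKey signature (HMAC-SHA256 over the canonical string).
$date         = [DateTime]::UtcNow.ToString("r")
$stringToSign = "POST`n$($bodyBytes.Length)`napplication/json`nx-ms-date:$date`n/api/logs"
$hmac         = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key     = [Convert]::FromBase64String($workspaceKey)
$signature    = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{
    "Authorization" = "SharedKey ${workspaceId}:${signature}"
    "Log-Type"      = $logType
    "x-ms-date"     = $date
}
Invoke-RestMethod -Method Post -ContentType "application/json" -Headers $headers -Body $bodyBytes `
    -Uri "https://$workspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"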

All these steps are relatively simple to realize with the incredible power of PowerShell. The most complicated part was designing the layout of the hash table, that is, deciding what kind of properties I should send to OMS!

Properties for the OMS custom log

Lesson learned
One of the most important lessons I learned during this project was:
You NEED to know what you want to achieve in OMS BEFORE you create the custom log record! That might sound simple, but I guess I created more than 5 different custom logs before one finally contained all the data I needed to build my OMS dashboards and create my alerts.

I ended up with this JSON data structure created by the runbook:

A brief explanation of the properties:

  • HealthState
    Lets me show the overall health state of each device by displaying the current (latest) HealthState value per device.
  • HumidityValue, TemperatureValue, PowerValue
    Numeric values that let me draw charts in OMS.
  • HumidityState, TemperatureState, PowerState
    State values that let me show the individual component state of each device.
  • IsAlertMessage
    A very important property, needed to create out-of-band alerts in OMS.
  • DeviceType, PhoneNumber, Location
    Content fields to identify the different devices.
  • TwilioSMSMessageID, RunbookInvokeTime, TwilioSMSBody, RunbookScriptVersion
    Additional information that helps with troubleshooting in case of any issue.
    Lesson learned
    This is a lesson I learned when I started creating SCOM management packs: you can never have too much data for debugging and troubleshooting purposes :). By including this data in the OMS log record, I can easily track a text message throughout the whole process, determine the time difference between sending the data from Azure Automation to OMS and its availability for searching in Log Analytics, and so on.

This is how the same JSON data looks in Azure Log Analytics once transformed into a custom log record. Note that Log Analytics appends a type suffix to every custom field (_s for strings, _d for numbers, _b for booleans), which is why the queries below reference fields like TemperatureValue_d and IsAlertMessage_b:

 

Based on these properties I will hopefully be able to create the necessary dashboards and alerts in OMS. Let's see this in part three...


Monitoring temperature sensors in remote locations with Twilio, Azure Automation and Log Analytics – Part 3


This is part three of my little three-part series on how to monitor temperature sensors in remote locations with Twilio, Azure Automation and OMS. It describes my current OMS solution and the costs associated with it, in case you want to build something similar.
For the background of this post, please read part one. Part two describes some of the technical aspects and lessons learned.

Let me show you once again the complete custom OMS solution dashboard:

 

OMS solutions and alarming overview

For those of you who are not yet aware of the general data flow in OMS regarding solutions and alarming:

Everything is based upon the Log Analytics records stored in the OMS workspace. You can define queries by leveraging the powerful Azure Log Analytics query language. Based on these queries you can either create alerts or specific visualization items called tiles. A tile could be a chart or a list, for example. Multiple tiles can form a custom OMS solution.

All of the dashboard tiles in my custom solution are based on specific queries against my custom log type DirkBriClimateSensorDataV2_CL:

Showing the health state of my devices is just a query that retrieves the HealthState property of the last record written per device:

DirkBriClimateSensorDataV2_CL 
| where DeviceLocation_s matches regex "Lager*" 
| summarize arg_max(TimeGenerated, *) by DeviceLocation_s 
| project Location=DeviceLocation_s,HealthState=HealthState_s 
| order by Location asc

And the last access time tile query is based on the TimeGenerated property of the latest record written by each device.
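
Following the pattern above, that query could look something like this:

DirkBriClimateSensorDataV2_CL
| where DeviceLocation_s matches regex "Lager*"
| summarize LastAccessTime=max(TimeGenerated) by DeviceLocation_s
| order by DeviceLocation_s asc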

The line charts for temperature and humidity are based on a query like this:

DirkBriClimateSensorDataV2_CL 
| where DeviceLocation_s matches regex "Lager*" and IsAlertMessage_b==false 
| summarize avg(TemperatureValue_d) by DeviceLocation_s,bin(TimeGenerated, 1h) 
| project Location=DeviceLocation_s,Temperature=avg_TemperatureValue_d,TimeGenerated 

 

Alarming

One of my main objectives was to get an alert when a threshold is exceeded, the device loses power or anything else unexpected happens. The logic behind this is covered within my "text message receive and process" runbook in Azure Automation. The runbook decides if something is good (HealthState = "OK"), unknown (someState = "Unknown") or an out-of-band alert (IsAlertMessage = TRUE).
In OMS I simply need to write the right query and configure an alert based on this query:

To get an alert when a specific device in a location is unhealthy (HealthState <> "OK"), I query for HealthState <> "OK" within the last 24h. OMS executes this query every 5 minutes and raises an alert if the query returns 1 or more results:

DirkBriClimateSensorDataV2_CL
| where DeviceLocation_s matches regex "Lager*"
| summarize arg_max(TimeGenerated, *) by DeviceLocation_s
| where HealthState_s <> "OK"

The e-mail created by this alert looks like this:

The mail displays the query and the values of the matching result.

And of course: The dashboard will also display the current health state for easy detection:

Costs of the solution

OK, now it's time to talk about costs. How much does this solution cost?
Everything depends on the number of devices and how frequently you wish to collect data. The more frequently you collect, the more text messages you need to send and pay for.

  • One-time costs
    One-time costs occur only for purchasing the devices. The DRH-3015 costs about 70 EUR, the RTU-5023 with antenna and power supply around 90 EUR.
  • Recurring (monthly) costs
    • Twilio
      You need a mobile phone number from Twilio. That costs 5 USD per month. Every text message received is charged at 0.0075 USD/message; every text message sent is charged at 0.085 USD/message.
    • Azure Automation
      You can use the free version for this solution. 500 min per month should be enough -> 0 EUR
    • OMS
      You can use the free tier for this solution. 500 MB of data and 7 days of storage should be enough -> 0 EUR
    • Text messages from devices
      These costs are hard to calculate, as they depend on how frequently you want to get status messages and/or trigger them, and on your provider. I am using a provider that charges me 4 EUR/month per device (SIM card) for 300 text messages. That allows me to send 8 status messages every day (every 3h), with some buffer left for out-of-band messages.

I calculate with roughly 6 EUR per month per device for inbound and outbound text messages, plus 5 USD for the Twilio number.
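
As a back-of-the-envelope calculation (the message counts and exchange rate below are illustrative; adjust them to your own setup):

$simCardEur       = 4.0     # provider flat rate per device SIM card
$statusPerDay     = 1       # status messages received per device and day
$triggersPerDay   = 1       # trigger messages sent via Twilio per device and day
$twilioReceiveUsd = 0.0075  # cost per inbound message
$twilioSendUsd    = 0.085   # cost per outbound message
$usdToEur         = 0.85    # illustrative exchange rate

$messagesEur = ($statusPerDay * $twilioReceiveUsd + $triggersPerDay * $twilioSendUsd) * 30 * $usdToEur
$monthlyEur  = $simCardEur + $messagesEur
'{0:N2} EUR per device per month, plus 5 USD for the Twilio number' -f $monthlyEur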

 

Results realized?

So it is time to have a look at my objectives and what I achieved with this solution:

  • Get an alert as quickly as possible when the room temperature falls out of a defined range
    The device will send an out-of-band alert message to Twilio that gets translated into a record with HealthState <> "OK" and IsAlertMessage = TRUE. That will trigger an OMS alert.
    Additionally, I could have created a query that checks if the TemperatureValue_d property is above/below a certain value.
    Succeeded!!!
  • Get an alert as quickly as possible when there is a power outage in one of the locations
    The device will send an out-of-band alert message to Twilio that gets translated into a record with HealthState <> "OK" and IsAlertMessage = TRUE. I also created a query that checks for devices not sending any data within a certain time frame. So either way I will get an alert if a device dies.
    Succeeded!!!
  • Get an overview (dashboard) of the current state of all locations from wherever I might be
    See the dashboard above.
    Succeeded!!!
  • Get an alert whenever one of the monitoring devices in a location dies (malfunction or other issue)
    I also created a query that checks for devices not sending any data within a certain time frame. So either way I will get an alert if a device dies.
    Succeeded!!!
  • [Optional] Track the temperature on a very basic level
    See the dashboard above.
    Succeeded!!!
  • [Optional] Track and alert the humidity in the room
    See the dashboard above.
    Succeeded!!!
  • Spend as little time and as little money as possible
    Well, that is an interesting question. I spent around 3 days total on testing and developing. From a cost perspective, it is within my personal limits. But to be honest: this solution does not scale well regarding costs. The more frequently I want to collect data from the devices, the more I have to pay...

Summary

I really had some fun creating this solution. It is not perfect and I still have ideas for improvement. But hey, we are living in times of agile development, DevOps and SCRUM, so this is my first working release after Sprint 1 🙂

Cloud-based services like Azure Automation, OMS and Twilio enable powerful solutions even for guys like me who are IT pros and not devs. I wanted to show you a private use case for these technologies and I hope you liked it.

Feedback is always welcome, and I might continue this series once I implement more features. Power BI integration is already on the horizon 🙂

 


Updated Microsoft Cloud Storage for Enterprise Architects poster


The Microsoft Cloud Storage for Enterprise Architects poster has been updated with the latest information and links to resources.

It can help you understand how Microsoft’s cloud offerings support storage using a building construction analogy:

  • Move-in ready cloud storage options are bundled with existing cloud-based services, which you can use immediately and with minimal configuration.
  • Some assembly required storage options can be used as a starting point for the storage of your solution with additional configuration or coding for a custom fit.
  • Build it from the ground up storage options can be used to create your own storage solution or storage for your apps from scratch.

You can download this multi-page poster in PDF or Visio format and get this poster in eleven languages here.

There is also a new article version of this poster’s content here.

The Microsoft Cloud Architecture Series for Enterprise Architects

The Microsoft Cloud Storage for Enterprise Architects poster is just one in a series that provides detailed architectural advice from choosing the right Microsoft cloud offerings to designing IT elements such as security, networking, identity, mobility, and storage.

This poster set shows the breadth and depth of the Microsoft cloud, the industry’s most complete cloud solution, and how it can be used to solve your IT and business problems.

Browse through the whole set of posters at http://aka.ms/cloudarchseries. For these posters in eleven languages, see this blog post.

 

To join the CAAB, become a member of the CAAB space in the Microsoft Tech Community and send a quick email to CAAB@microsoft.com to introduce yourself. Please feel free to include any information about your experience in creating cloud-based solutions with Microsoft products or areas of interest. Join now and add your voice to the cloud adoption discussion that is happening across Microsoft and the industry.

 

Announcing Three New EMS Security Consultations for Presales & Deployment Assistance


Overcome security challenges during any stage of your EMS Security projects through a suite of new technical consultations. Whether you are just getting started or need presales or deployment guidance for security scenarios, start leveraging the new technical consultations available as part of your MPN benefit.

 
EMS Security Starter Kit Consultation 
(L100-200 – Cost 5 Partner Advisory Hours)
Kick start your Enterprise Mobility Suite (EMS) security practice by learning the fundamentals, configuration and set-up of EMS Security. In this one-on-one consultation, you’ll receive personalized guidance and introductory training, helping you better sell and deploy EMS Security solutions. You’ll receive a detailed overview of EMS Security, and walk away understanding how to create an EMS security assessment.

 

EMS Security Presales Consultation (L100-200 – No Cost)
Receive technical guidance as you design the security details for your customers' solutions during this one-on-one consultation. Dive deeper into the functionality of EMS and learn how to prepare a security assessment as well as set up and configure EMS Security. You'll walk away with a better understanding of how to propose and plan the right customer solutions and mitigate potential issues along the way.

 

EMS Security Deployment Consultation (L200 – Cost 5 Partner Advisory Hours)
Ensure a smooth cloud deployment with personalized EMS Security configuration, design and deployment guidance from a Microsoft expert. During this one-on-one consultation, we’ll help you successfully implement the proposed solution by reviewing technical blockers and teaching you best practices to follow. By preventing potential deployment issues and avoiding common implementation pitfalls, you’ll be able to reduce deployment timelines and keep your projects on track.


Jump start your mobility and security practice through remote, interactive trainings. Explore the full suite of MPN Technical Presales and Deployment services within the mobility and security tab at aka.ms/MobilitySecurity.

Customizing, Exporting and Importing Azure VM Diagnostic Settings


Azure VM Diagnostics on Windows VMs can track more than just the basic performance counters. Through the Azure portal UI, as shown in the image below, you can add any performance counters that are available to the operating system. Inside Windows, run "Typeperf -q > counters.txt" at a command prompt to get a text file of all the available counter names, then add them as shown here:
Performance Counters

Once you have added all of the custom performance counters you want, you can use the PowerShell below to export the configuration to a JSON file, then apply it to existing or new VMs. The PowerShell below assumes an existing virtual machine with no Azure diagnostics extension installed. It would be fairly simple to add a check to see if the extension is already installed and remove/re-add it with the imported config, as well as use PowerShell to update the JSON file with new diagnostic storage account locations and resource IDs (a sketch of that last step follows the script).


Login-AzureRmAccount
# source resource group and virtual machine to take the diagnostic settings from
$rg = 'resourcegroup1'
$vm = 'vm1'
# read the public settings of the diagnostics extension on the source VM
$diag = (Get-AzureRmVMDiagnosticsExtension -ResourceGroupName $rg -VMName $vm).PublicSettings
$path = 'c:\users\joknight\diag.json'
$diag | Out-File $path
$targetVM = 'targetvm'
$targetRG = 'targetrg'
# you may want to edit the diagnostics storage account in the diag.json file
# be sure to edit the resourceID in the diag.json file to match the target VM
Set-AzureRmVMDiagnosticsExtension -ResourceGroupName $targetRG -VMName $targetVM -DiagnosticsConfigurationPath $path
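
As mentioned above, updating the exported JSON for the target VM can also be scripted. A minimal sketch (the WadCfg property path is an assumption based on the diagnostics JSON schema, and casing may vary between extension versions):

# load the exported settings, point the metrics resourceId at the target VM, save again
$settings = Get-Content $path -Raw | ConvertFrom-Json
$settings.WadCfg.DiagnosticMonitorConfiguration.Metrics.resourceId = (Get-AzureRmVM -ResourceGroupName $targetRG -Name $targetVM).Id
$settings | ConvertTo-Json -Depth 20 | Out-File $path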
