Channel: TechNet Blogs

Support Tip: Change in iOS passcode compliance may affect email access for some end users


You may have noticed that some of your users cannot access email after an iOS compliance policy is assigned to them. One possible reason is that they deferred setting a PIN or passcode after the iOS passcode policy was applied.

When users on an iOS device are targeted with a passcode compliance policy, they are considered ‘not compliant’ until they set a PIN. Any company resources protected by Conditional Access policies that require a compliant device remain blocked until the user brings the device into compliance with the assigned policies. If users choose to defer setting a PIN, they are prompted every 15 minutes until a PIN is set.

Note that devices that are locked after being marked not compliant will lose email access until they are unlocked and a PIN is entered. End users on these devices may experience a delay of a few minutes until their email is updated again.
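The gating behavior described above can be modeled as a simple rule. This is only a sketch; real Intune and Conditional Access evaluation considers many more signals, and all names below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Device:
    passcode_set: bool

def is_compliant(device: Device) -> bool:
    """A passcode compliance policy: compliant only once a PIN is set."""
    return device.passcode_set

def can_access_email(device: Device) -> bool:
    """Conditional Access requiring a compliant device blocks email otherwise."""
    return is_compliant(device)

d = Device(passcode_set=False)
print(can_access_email(d))  # False: blocked until the user sets a PIN
d.passcode_set = True
print(can_access_email(d))  # True
```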

We hope this helps! Let us know if you have any questions or feedback.


Announcing SAP on Azure Cloud Workshops for Partners [Updated 3/23]


To further advance our joint business, we are pleased to announce cloud workshops for engineers at SAP on Azure partner companies, to be held on the dates below.

This one-day course, delivered by members of the Global Black Belt (GBB) Asia time zone SAP technology team, covers Azure infrastructure fundamentals, the knowledge required to implement SAP in the cloud, the constraints to consider and how to address them, and concrete SAP on Azure architectures, giving you the skills needed to drive SAP cloud migration projects.

Attendees are expected to have SAP architecture skills and to attend all sessions on the day.
* All sessions are delivered in Japanese.

Workshops are also planned in locations across Japan, so partners outside Tokyo are encouraged to attend as well.

 

Dates: 2018/02/26 - 2018/06/21 (see the list below for venues and registration links)

Standard agenda

 

To register, use the registration link for your preferred venue below. Registration closes once a venue reaches capacity, so please sign up early!

Date / City / Venue (registration link)
4/11 (Wed)  Tokyo  Microsoft Shinagawa headquarters
4/16 (Mon)  Fukuoka  TKP Hakata Ekimae City Center
5/10 (Thu)  Tokyo  Microsoft Shinagawa headquarters
5/25 (Fri)  Nagoya  TKP Garden City PREMIUM Nagoya Shinkansen-guchi
6/6 (Wed)  Tokyo  Microsoft Shinagawa headquarters
6/21 (Thu)  Osaka  TKP Garden City PREMIUM Osaka Ekimae

 

 

Easy Configuration of the Azure Information Protection Scanner


The Scenario:

The EU General Data Protection Regulation (GDPR) is taking effect on May 25, 2018 and marks a significant change to the regulatory landscape of data privacy.  The aim of the GDPR is to protect all EU citizens from privacy and data breaches in an increasingly data-driven world.  Organizations in breach of GDPR can be fined up to 4% of annual global turnover or €20 million, whichever is greater.  Needless to say, this has motivated organizations worldwide to better classify and protect sensitive personal data.  One way to accomplish this is to protect everything sensitive using Azure Information Protection.
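As a quick illustration of the fine ceiling described above (a sketch only; the actual legal assessment is far more nuanced):

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: 4% of annual global turnover
    or EUR 20 million, whichever is greater."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# EUR 1 billion turnover: the 4% figure (EUR 40 million) applies.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# EUR 100 million turnover: the EUR 20 million floor applies.
print(max_gdpr_fine(100_000_000))    # 20000000
```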

Azure Information Protection allows data workers to classify and optionally protect documents as they are created.  There are also options for automatically classifying/protecting emails as they are sent through your Exchange server or Exchange Online, and SharePoint Online can be protected using Microsoft Cloud App Security AIP integration.  These options go a long way to protect newly created data and data migrated to the cloud, but what about the terabytes of data sitting on File Shares and On-Premises SharePoint 2013/2016 servers? That is where the AIP Scanner comes in.

The Solution:

The Azure Information Protection Scanner is the solution for classifying and protecting documents stored on File Shares and On-Premises SharePoint servers. The overview below is from the official documentation at https://docs.microsoft.com/en-us/information-protection/deploy-use/deploy-aip-scanner.  This blog post is meant to assist customers with deploying the AIP Scanner, but if there is ever a conflict, the official documentation is authoritative.

Azure Information Protection scanner overview

The AIP Scanner runs as a service on Windows Server and lets you discover, classify, and protect files on the following data stores:

  • Local folders on the Windows Server computer that runs the scanner.
  • UNC paths for network shares that use the Common Internet File System (CIFS) protocol.
  • Sites and libraries for SharePoint Server 2016 and SharePoint Server 2013.

The scanner can inspect any files that Windows can index, by using iFilters that are installed on the computer. Then, to determine if the files need labeling, the scanner uses the Office 365 built-in data loss prevention (DLP) sensitivity information types and pattern detection, or Office 365 regex patterns. Because the scanner uses the Azure Information Protection client, it can classify and protect the same file types.
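To illustrate the idea of pattern-based classification (not the scanner's actual implementation; the real service uses the Office 365 sensitive information type definitions), a minimal regex sketch might look like this. The pattern names and expressions are simplified stand-ins:

```python
import re

# Hypothetical patterns standing in for Office 365 sensitive information types.
PATTERNS = {
    "Credit Card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "US SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of pattern names that match the document text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

doc = "Contact jane@contoso.com, SSN 123-45-6789."
print(classify(doc))  # {'Email', 'US SSN'} (set order may vary)
```

A matching document would then be a candidate for labeling (or, in enforce mode, protection).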

You can run the scanner in discovery mode only, where you use the reports to check what would happen if the files were labeled. Or, you can run the scanner to automatically apply the labels.

Note that the scanner does not discover and label in real time. It systematically crawls through files on data stores that you specify, and you can configure this cycle to run once, or repeatedly.

Prerequisites:

To install the AIP Scanner in a production environment, the following items are needed:

  • A Windows Server 2012 R2 or 2016 Server to run the service
    • Minimum 4 CPU and 4GB RAM physical or virtual
    • Internet connectivity necessary for Azure Information Protection
  • A SQL Server 2012+ local or remote instance (Any version from Express or better is supported)
    • Sysadmin role needed to install scanner service
  • Service account created in on-premises AD and synchronized with Azure AD (I will call this account AIPScanner in this document)
    • Service requires Log on locally right and Log on as a service right (the second will be given during scanner service install)
    • Service account requires Read permissions to each repository for discovery and Read/Write permissions for classification/protection
  • AzInfoProtectionScanner.exe available on the Microsoft Download Center (future versions will be included in the AIP client)
  • Labels configured for Automatic Classification/Protection
    • NOTE: This is an AIP Premium P2/EMS E5 feature 
    • https://docs.microsoft.com/en-us/information-protection/deploy-use/configure-policy-classification

Installation:

Here is where the easy part from the title comes in.  Installation of the AIP Scanner service is incredibly simple and straightforward.

  1. Log onto the server where you will install the AIP Scanner service using an account that is a local administrator of the server and has permission to write to the SQL Server master database.
  2. Right-click the Windows button in the lower left-hand corner and click on Command Prompt (Admin)
  3. Type PowerShell and hit Enter
  4. At the PowerShell prompt, type the following command and press Enter:
    Install-AIPScanner
  5. When prompted, provide the credentials for the scanner service account (YourDomain\AIPScanner) and password
  6. When prompted for SqlServerInstance, enter the name of your SQL Server and press Enter
    You should see a success message like the one below
  7. Right-click the Windows button in the lower left-hand corner and click on Run
  8. In the Run dialog, type services.msc and click OK
  9. In the Services console, double-click on the Azure Information Protection Scanner service
  10. On the Log On tab of the Azure Information Protection Scanner Service Properties, verify that Log on as: is set to the YourDomain\AIPScanner service account

See, told you it was easy to install.  Luckily, configuring the service is only slightly more challenging. 🙂

Scanner Configuration:

OK, this next part is not super simple but it isn't terrible either as long as you don't miss anything.  Luckily, you can follow my steps to make it as easy as possible.

Authentication Token:

  1. On the server where you installed the scanner, create a new text document on the desktop and name it Set-AIPAuthentication.txt
    • In this document, paste the line of PowerShell code below and save:
      Set-AIPAuthentication -webAppId <ID of the "Web app / API" application> -webAppKey <key value generated in the "Web app / API" application> -nativeAppId <ID of the "Native" application>
  2. Open Internet Explorer and browse to https://portal.azure.com
  3. At the Sign in to Microsoft Azure page, enter your tenant admin credentials
  4. In the Microsoft Azure portal, click on Azure Active Directory in the left-hand pane
  5. Under Manage, click on App registrations

  6. In the App registrations blade, click the + New application registration button
  7. In the Create blade, use the values in the table below to create the registration
    Name: AIPOnBehalfOf
    Application type: Web app / API
    Sign-on URL: http://localhost

  8. Click the Create button to complete the app registration
  9. Select the AIPOnBehalfOf application
  10. In the AIPOnBehalfOf blade, hover the mouse over the Application ID and click on the Click to copy icon when it appears
  11. Minimize (DO NOT CLOSE) Internet Explorer and other windows to show the desktop
  12. On the desktop, return to Set-AIPAuthentication.txt and replace <ID of the "Web app / API" application> with the copied Application ID value.
    WARNING: Ensure there is only a single space after the Application ID before -webAppKey
  13. Return to the browser and click on the Settings button
  14. In the Settings blade, under API ACCESS, click on Keys

  15. In the Keys blade, add a new key by typing AIPClient in the Key description field and your choice of duration (1 year, 2 years, or never expires)
  16. Select Save and copy the Value that is displayed.
    WARNING: Do not dismiss this screen until you have saved the value, as you cannot retrieve it later
  17. Go back to the txt document and replace <key value generated in the "Web app / API" application> with the copied key value.
    WARNING: Ensure there is only a single space after the Application Key before -nativeAppId
  18. Repeat steps 6-10 to create a Native Application using the values in the table below
    Name: AIPClient
    Application type: Native
    Sign-on URL: http://localhost

  19. Replace <ID of the "Native" application > in the txt document with the copied Application ID value

  20. Return to the browser and in the AIPClient blade, click on Settings
  21. In the Settings blade, under API ACCESS, select Required permissions

  22. On the Required permissions blade, click Add, and then click Select an API

  23. In the search box, type AIPO and click on AIPOnBehalfOf, and then click the Select button
  24. On the Enable Access blade, check the box next to AIPOnBehalfOf, click the Select button
  25. Click Done

  26. Return to the PowerShell window and paste the completed command from Set-AIPAuthentication.txt and press Enter
  27. When prompted, enter the user AIPScanner@yourdomain.onmicrosoft.com and the password.
    NOTE: Replace yourdomain with your tenant name

  28. You should see a prompt like the one below. Click Accept

  29. You will see the message below in the PowerShell window once complete

Configuring Repositories:

Now that the scanner is happy and fully authenticated, it is time to put it to work scanning repositories.  These can be on-premises SharePoint 2013 or 2016 document libraries or lists, or any accessible CIFS-based share.  Keep in mind that in order to do discovery, classification, and protection, the scanner service pulls the documents to the server, so locating the scanner server on the same LAN as your repositories is recommended. You can deploy as many servers as you like in your domain, so putting one at each major site is probably a good idea.

  1. To add a file share repository, open a PowerShell window and run the command below
    Add-AIPScannerRepository -Path \\fileserver\documents
  2. To add a SharePoint 2013/2016 document library run the command below
    Add-AIPScannerRepository -Path http://sharepoint/documents
  3. To verify which repositories are configured, run the command below
    Get-AIPScannerRepository
  4. Run the command below to run an initial discovery cycle
    Set-AIPScannerConfiguration -Schedule OneTime 
    NOTE: Although the scanner will discover documents to protect, it will not protect them as the default configuration for the scanner is Discover only mode
  5. Start the AIP Scanner service using the command below
    Start-Service AIPScanner
  6. Right-click the Windows button in the lower left-hand corner and click on Event Viewer

  7. Expand Application and Services Logs and click on Azure Information Protection

  8. You will see an event like the one below when the scanner completes the cycle

    NOTE: You may also browse to %localappdata%\Microsoft\MSIP\Scanner\Reports and review the summary txt and detailed csv files available there
  9. At the PowerShell prompt type the command below to enforce protection and have the scanner run once
    Set-AIPScannerConfiguration -ScanMode Enforce -Schedule OneTime -Type Full
    NOTE: After testing, you would use the same command with -Schedule Continuous to have the AIP Scanner run continuously
    NOTE: The -Type Full switch forces the scanner to review every document. 
  10. Start the AIP Scanner service using the PowerShell command below
    Start-Service AIPScanner
  11. In the Event Log, you will now see an event that looks like the one below

And that's all there is to setting up the AIP Scanner! There are many more options to consider about how to classify files and what repositories you want to configure, but I would say that it is fairly simple to set up a basic scanner server that can be used to protect a large amount of data easily.  I highly recommend reading the official documentation on deploying the scanner as there are some less common caveats that I have left out and they cover performance tips and other nice additional information.

I hope this was helpful. Please let me know if I missed anything or if anything is not clear in the comments below.

Kevin

Microsoft Office 365 in Education: Organizing Learning with Microsoft Sway. Examples


Article author: Vitaly Vedenev.

I continue to look at how to create an e-learning course in Sway [1] and how to organize learning with Sway [2], using concrete examples.

What will you know and be able to do after reading this article?

  • How to export an e-textbook created in Sway to PDF and Word files
  • How to create an e-textbook in Sway from PDF and Word files
  • How to create an e-textbook for universal use, including mobile devices

Active use of cloud services lets you draw on a variety of sources when delivering your materials.

Sway is a modern service that uses Microsoft's block-based design, is optimized for touch input, and supports drag-and-drop.

Sway automatically adapts learning materials to mobile devices and is a universal tool for building an educational environment.

Scenario 1. Saving a Sway to PDF and Word formats

Quite often you need a hard copy of a Sway textbook, based on PDF or Word files for example, or you need to save the textbook's content as a file in OneDrive or SharePoint libraries or on local devices for offline learning [3].

To do this, use the option to export the textbook's content to Word or PDF format. The export procedure and how the content looks in each format are shown in the video "Sway e-textbooks: exporting to PDF and Word" (https://youtu.be/Jq51uL2FDrQ).

The diagram shows the sequence for exporting the content of a specific Sway textbook to Word and PDF formats. Video in the Word format is converted, as in Sway, into an embedded object. Creating a Sway from PDF is covered in more detail in Scenario 2.

Creating a Sway from saved PDF and Word files is covered in the video "Creating an e-textbook in Sway from PDF or Word" (https://youtu.be/qtMpzWSGKwA).

Scenario 2. Creating an e-textbook from PDF

Let's look at how to create an e-textbook from a methodology guide in PDF format, adding multimedia: video, surveys, and Microsoft Forms quizzes.

On the Sway home page, use the "Start from a document" button at the top of the window and select the PDF file to publish in Sway from your local device (in this example, a methodology guide for instructors).

The guide is converted to Sway automatically (for details, see the video "Creating an e-textbook from a PDF file", https://youtu.be/tQmw8vWIFjg). Then a few changes need to be made:

  • Edit the textbook's title.
  • Check the structure and placement of the images extracted from the PDF document.
  • Add media at the appropriate points in the text to present the material more vividly:
    • To do this, select "Video" in the "Media" panel (the panel appears after clicking the "+" sign in edit mode).
    • In this example, the video is taken from the public "Microsoft Office 365 in Education" channel via "Search sources".
    • Select the videos you need, write the captions, and set medium emphasis on the card.
    • Click "Play" and, if necessary, re-edit all the elements after previewing.
  • At the end of each section of the textbook, add a Microsoft Forms survey (or quiz). At the end of the textbook, add a final assessment quiz.
    • To do this, click "Embedded object" in the "Media" panel and
    • add the Microsoft Forms embed code to the card.
    • Copy the embed code directly from the Forms survey (quiz) page via "Share" > "Embed" [4].
  • You can add links to modules with teaching materials stored in OneDrive for Business [1], which keeps the text in the Sway textbook more compact and leaves room for more multimedia components.
  • Sway can also use continuously extended and updated Office 365 Stream video channels, both for the institution as a whole and for individual groups of learners. This keeps changes to the Sway textbook minimal when a training video is updated, and lets you keep adding material to a channel without changing the textbook [1].

Scenario 3. Viewing a Sway on a mobile device and mobile learning

For mobile work with textbooks, use "Share" to create a link to a specific textbook. When creating the link you can require a password for viewing or editing the Sway, so following the link may require your institution's Office 365 password. Survey and quiz responses are recorded in Forms under your name.

While studying, you can watch the videos and use the other learning materials included in the e-textbook.

Sources:

  1. Microsoft Office 365 in education: Educational program content and Microsoft Sway https://vedenev.livejournal.com/19936.html
  2. Microsoft Office 365 in education: Organizing learning with Microsoft Sway https://blogs.technet.microsoft.com/tasush/2017/02/28/organizuem-obuchenie-s-pomoshhju-microsoft-sway/
  3. Microsoft Office 365 in education: Offline learning in Office 365 http://blogs.technet.com/b/tasush/archive/2016/02/12/avtonomnoe-obuchenie-v-office-365.aspx
  4. Microsoft Office 365 in education: Assessing knowledge in Office 365 with Microsoft Forms https://blogs.technet.microsoft.com/tasush/2016/06/10/organizacija-kontrolja-znanij-v-office-365-s-pomoshhju-microsoft-forms/

Project Management Olympiad for Students


The ANO "Center for Assessment and Development of Project Management" invites third- and fourth-year students in project management programs at universities in Russia and the CIS to take part in the Student Project Management Olympiad.

The Olympiad is an opportunity for students to test their knowledge and skills in project management. It takes place in two stages:

  1. A qualifying stage held remotely online: participants take a computer-based test of their basic knowledge of project management.
  2. The second, in-person stage will be held as a project management business game in Moscow on April 24, 2018.

To participate, form a team of three people. Applications are accepted until March 25, 2018. For details, call 8-929-005-44-48 or email cert@isopm.ru.

Top Contributors on the Russian TechNet Forums in February


The engineering team supporting the Russian-language TechNet forums has finally revived its statistics-gathering tool. We now hope to publish reports on the top contributors every month.

For February 2018, the ranking of top contributors looks like this:

1    Vector BCO 
2    Dmitriy Razbornov 
3    Ilya Tumanov 
4    M.V.V. _ 
5    Антонов Антон 
6    Mikhail Efimov 
7    Ivan.Basov 
8    Sergey Ya 
9    MSBuy.ru 
10    Kaplin Vladimir 
11    Alexander Surbashev 
12    Artem S. Smirnov 
13    Alexey Klimenko 
14    Denis Dyagilev 
15    Svolotch

 

Launch of the Speculative Execution Bounty Program


This article is a translation of the Microsoft Security Response Center blog post "Speculative Execution Bounty Launch" (published March 14, 2018, US time).


Today, Microsoft is announcing the launch of a limited-time bounty program for speculative execution side channel vulnerabilities. This new class of vulnerabilities was disclosed in January 2018 and represented a major advancement in research in this field. In response to this changed threat environment, we are launching this bounty program to encourage research into the new vulnerability class and into the mitigations Microsoft has released for this class of issues.

Summary:

Tier 1 (new categories of speculative execution attacks): up to $250,000
Tier 2 (Azure speculative execution mitigation bypass): up to $200,000
Tier 3 (Windows speculative execution mitigation bypass): up to $200,000
Tier 4 (an instance of a known speculative execution vulnerability, such as CVE-2017-5753, in Windows 10 or Microsoft Edge; the vulnerability must enable the disclosure of sensitive information across a trust boundary): up to $25,000

 

Speculative execution is truly a new class of vulnerability, and research into novel attack methods is already underway. This bounty program is intended as one way to foster that research and the coordinated disclosure of vulnerabilities related to these issues. Tier 1 focuses on new categories of attack involving speculative execution side channels. The Security Research & Defense team has published additional information (in English) on what is currently known across the industry. Tiers 2 and 3 cover bypasses of the mitigations added to Windows and Azure to defend against the attacks that have already been identified. Tier 4 covers instances that may exist which can exploit CVE-2017-5753 or CVE-2017-5715.

Speculative execution side channel vulnerabilities require an industry-wide response. To enable affected parties to collaborate on solutions to these vulnerabilities, Microsoft will share research disclosed under this program in accordance with the principles of coordinated vulnerability disclosure. Together with security researchers, we will continue to make our customers' environments even more secure.

 

Phillip Misner, Principal Security Group Manager, Microsoft Security Response Center

 

■ Notes on submitting reports

To participate in Microsoft's bounty programs, all vulnerability reports must be submitted directly to secure@microsoft.com in the US, in accordance with the bounty program guidelines. If reporting in English is difficult, reports written in Japanese (or in both languages) are acceptable. This is important for fairness in selecting bounty recipients. We look forward to your participation!

Notes on Using Windows Defender Offline


Hello, this is Wakasa from the Microsoft Japan security products support team.

Today I'd like to explain some caveats when using Windows Defender Offline booted from a USB drive.

In addition to ordinary malware detection, Windows Defender Offline can detect and remediate threats such as rootkits that cannot be handled by running Windows Defender normally.

On Windows 10, this feature is available from the Windows Defender user interface; on other operating systems, you can write the image to optical media or a USB drive and boot from it.

However, there is a known issue, so we apologize for the inconvenience and ask that you review the following notes before use.

 

==========================
- Notes
==========================

There is a known issue where a scan may fail to start correctly when Windows Defender Offline is booted from USB.

When this occurs, Windows Defender Offline cannot read the malware definition files on the USB drive it was booted from, and a screen like the one below is displayed.

To work around this, copy the latest definition file from the USB drive to the C drive of the operating system to be scanned beforehand.

The definition file is stored in the root folder of the USB drive containing Windows Defender Offline, under a file name of the form "mpam-*.exe".

Copying this executable to the root directory of the target machine's local disk allows Windows Defender Offline to find the definition file correctly.
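The workaround above can also be scripted. A hedged sketch (the drive letters in the usage comment are examples only; adjust them to your USB drive and target system drive):

```python
import glob
import os
import shutil

def copy_definitions(usb_root: str, target_root: str) -> list[str]:
    """Copy the mpam-*.exe definition packages from the USB root
    to the target system drive's root, replacing any older copies."""
    copied = []
    for src in glob.glob(os.path.join(usb_root, "mpam-*.exe")):
        dst = os.path.join(target_root, os.path.basename(src))
        shutil.copy2(src, dst)  # copy2 preserves timestamps
        copied.append(dst)
    return copied

# Example (illustrative paths):
# copy_definitions("E:\\", "C:\\")
```

Re-running the script after each definition update keeps the local copy current, matching the replacement step described below.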

 

If you use this workaround, be sure to replace the executable copied to the local disk with the new one every time you update the definition file on the USB drive.

If you run a Windows Defender Offline scan without replacing it, only the old definition file previously copied to the machine is used, and the latest malware may not be handled.

A fix for this behavior is planned, but the release date of the fixed module is currently undetermined.

We apologize for the inconvenience; please use the workaround above for the time being.

 

* For details on using Windows Defender Offline, see the following reference:

<Help protect my PC with Windows Defender Offline>
https://support.microsoft.com/ja-jp/help/17466/windows-defender-offline-help-protect-my-pc

 

Note that the content of this article (including attachments and links) is current as of the date of writing and may change without notice.

Thank you for your understanding.


Upcoming Plans for Flash in Internet Explorer and Microsoft Edge


Hello.
Today I'd like to cover a question we occasionally receive: the future plans for Flash in Internet Explorer and Microsoft Edge.

The roadmap for the end of Adobe Flash support is as laid out in the document below.
This article highlights the key points from that document.

The End of an Era – Next Steps for Adobe Flash
https://blogs.windows.com/msedgedev/2017/07/25/flash-on-windows-timeline/

 

■ Late 2017 through 2018
In Microsoft Edge on Windows 10 Creators Update (v1703) and later, you are asked to allow Flash content the first time you visit a website; you are not asked again on return visits to a site you have allowed.
In Internet Explorer, no special controls are applied to running Flash.

 

■ Through the latter half of 2018
Microsoft Edge will ask for permission to run Flash each time you visit a website containing Flash.
Internet Explorer will continue to allow Flash to run, with no special controls.

 

■ Through the latter half of 2019
Flash will be disabled by default in both Microsoft Edge and Internet Explorer.
However, the configuration can be changed to allow Flash to run.
If you configure Flash to run, Microsoft Edge behaves as in the latter half of 2018: you are asked for permission to run Flash each time you visit a website containing Flash.

 

■ End of 2020
Flash will no longer run in Microsoft Edge or Internet Explorer on any supported version of Windows.
It will no longer be possible to re-enable Flash through configuration.

 

That's all for today's article.
These plans are current as of this writing and may well change due to future developments.
If you are planning corresponding changes on your website, we recommend allowing plenty of schedule margin.

 

Quick Tip – Download .NET Framework 4.5 Offline Installer


This post will attempt to resolve some download frustration if you are looking for an older version of the .NET Framework.  This is an issue with Exchange 2010, as the support position for that platform has not been updated, whereas .NET Framework support in Exchange 2013 and 2016 has been updated.  This is because Exchange 2010 is in extended support and is almost at the end of its support lifecycle.

As always check the supported version information in the Exchange Support Matrix Article before updating .NET on an Exchange server.

A separate instance where an older .NET Framework may be needed is for the Azure AD module.  Azure AD version 2 is currently being worked on, though many customers still leverage the 1.* version of the module.  Depending on the environment, you may run into the issue described here: Azure AD Module – This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Below are the recent locations used to download the various versions of .NET.  Note that if a download is retired from the Download Centre, please do not ask me for a copy.  I am blocked from distributing software, and you will need to create a support case to explore the options available at that time.

.NET Framework Versions and Dependencies lists the details for .NET versions.

Personally, I never browse the Internet or download anything on servers.  This is why my preference is to use the offline/standalone installer when possible.

.NET Framework 4.0 Standalone Installer

Download .NET Framework 4.0 Standalone Installer

.NET Framework 4.5 Standalone Installer

Download .NET Framework 4.5 Standalone Installer

.NET Framework 4.5.1 Standalone Installer

Download .NET Framework 4.5.1 Standalone Installer

.NET Framework 4.5.2 Standalone Installer

Download .NET Framework 4.5.2 Standalone Installer

Cheers,

Rhoderick

Microsoft Premier Workshop: System Center Operations Manager: Configuration and Administration


Description
This three-day workshop teaches new administrators the core concepts of Microsoft System Center 2016 Operations Manager, through a combination of lectures and hands-on exercises.

Agenda
Module 1: Architecture Overview.
This module provides an overall introduction to Operations Manager architecture and its general features.

Module 2: Basic Concepts.
This module covers the basic concepts and terminology used within Operations Manager.

Module 3: Navigating the Console.
This module covers navigating the Operations Manager console to enable full access to console functionality.

Module 4: Management Pack Tuning.
This module will guide you through all the steps needed to tune management packs for a better monitoring experience.

Module 5: Maintenance Mode and Schedules.
This module explains maintenance mode and schedules, one of the new features of Operations Manager 2016, and how to integrate them for better operations in your hybrid cloud.

Module 6: Notifications.
This module will explain how you can configure and use notifications within Operations Manager.

Module 7: RBAC.
This module illustrates how to implement and maintain role-based access control in an Operations Manager environment.

Module 8: Maintaining Operations Manager.
This module explains the steps needed to maintain a healthy monitoring environment.

Module 9: Authoring.
This module guides you through the powerful authoring features that enable the endless monitoring capabilities of Operations Manager.

Module 10: Visualization.
This module guides you through all the ways you can visualize the data that Operations Manager collects.

Module 11: Linux Monitoring.
Linux and UNIX are an integral part of the monitoring capabilities of Operations Manager. This module shows you how to integrate these platforms seamlessly into your monitoring infrastructure.

Audience
Participants should have the following skills:
• Experience with standard computing systems such as file storage, networking, and Internet technologies
• General knowledge of core Microsoft technologies

Level 300
(Level scale: 100 = strategic / 200 = technical overview / 300 = in-depth expertise / 400 = expert-level technical knowledge)

Registration
To register, please contact your Microsoft Technical Account Manager directly, or visit us on the web at Microsoft Premier Education, where you will find an overview of all open workshops and can register right away.

SharePoint Conference North America has it all, and MORE!



Get more by registering NOW! http://tiny.cc/SPCNA_REG

 

There are 4 main reasons why people attend technical conferences, and the SharePoint Conference North America (SPCNA) has all of them, and MORE!

  1. With the constantly changing world of technology, people need to know what's new before the competition does. SPCNA has the sessions and workshops to keep you ahead of the curve.
  2. Learning with the best of the best from Microsoft and top industry thought leaders in engineering and marketing. Attendees want to hear practical solutions from the people who actually designed, built, and integrated today's technologies. SPCNA has the best speakers.
  3. Networking and connecting with peers and business technology gurus, with opportunities to share, collaborate, and understand how real-world solutions are created.
  4. Location! The host hotel is the world-renowned MGM Grand. When you aren't engaged with sessions, receptions, and parties, there is an endless line-up of shows, restaurants, and activities for every taste.

BONUS: When you register for one of our workshop packages, take home an Xbox One X, an Xbox One S, or an Invoke by Harman Kardon, FREE.

It's pretty simple: SPCNA has it all. WE want you to BE THERE!

KVA Shadow: Mitigating Meltdown on Windows


On January 3rd, 2018, Microsoft released an advisory and security updates that relate to a new class of discovered hardware vulnerabilities, termed speculative execution side channels, that affect the design methodology and implementation decisions behind many modern microprocessors. This post dives into the technical details of Kernel Virtual Address (KVA) Shadow which is the Windows kernel mitigation for one specific speculative execution side channel: the rogue data cache load vulnerability (CVE-2017-5754, also known as “Meltdown” or “Variant 3”). KVA Shadow is one of the mitigations that is in scope for Microsoft's recently announced Speculative Execution Side Channel bounty program.

It’s important to note that there are several different types of issues that fall under the category of speculative execution side channels, and that different mitigations are required for each type of issue. Additional information about the mitigations that Microsoft has developed for other speculative execution side channel vulnerabilities (“Spectre”), as well as additional background information on this class of issue, can be found here.

Please note that the information in this post is current as of the date of this post.

Vulnerability description & background

The rogue data cache load hardware vulnerability relates to how certain processors handle permission checks for virtual memory. Processors commonly implement a mechanism to mark virtual memory pages as owned by the kernel (sometimes termed supervisor), or as owned by user mode. While executing in user mode, the processor prevents accesses to privileged kernel data structures by way of raising a fault (or exception) when an attempt is made to access a privileged, kernel-owned page. This protection of kernel-owned pages from direct user mode access is a key component of privilege separation between kernel and user mode code.

Certain processors capable of speculative out-of-order execution, including many currently in-market processors from Intel, and some ARM-based processors, are susceptible to a speculative side channel that is exposed when an access to a page incurs a permission fault. On these processors, an instruction that performs an access to memory that incurs a permission fault will not update the architectural state of the machine. However, these processors may, under certain circumstances, still permit a faulting internal memory load µop (micro-operation) to forward the result of the load to subsequent, dependent µops. These processors can be said to defer handling of permission faults to instruction retirement time.

Out-of-order processors are obligated to “roll back” the architecturally-visible effects of speculative execution down paths that are proven to have never been reachable during in-program-order execution, and as such, any µops that consume the result of a faulting load are ultimately cancelled and rolled back by the processor once the faulting load instruction retires. However, these dependent µops may still have issued subsequent cache loads based on the (faulting) privileged memory load, or otherwise may have left additional traces of their execution in the processor’s caches. This creates a speculative side channel: the remnants of cancelled, speculative µops that operated on the data returned by a load incurring a permission fault may be detectable through disturbances to the processor cache, and this may enable an attacker to infer the contents of privileged kernel memory that they would not otherwise have access to. In effect, this enables an unprivileged user mode process to disclose the contents of privileged kernel mode memory.
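As an illustrative sketch (not an exploit), the mechanism described above can be modeled in plain C: the “cache” is a simple array of flags, the faulting load and its dependent µop are ordinary function calls, and all timing, fault suppression, and speculation machinery is abstracted away. Every name here is invented for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model: 256 possible byte values, one "cache line" per value. */
static int cache_hot[256];

/* Model of the faulting sequence: the load of *secret_ptr would fault
   architecturally, but a dependent probe load still leaves a trace in
   the cache before the fault is handled at retirement. */
static void speculative_window(const uint8_t *secret_ptr) {
    uint8_t v = *secret_ptr;  /* faulting privileged load */
    cache_hot[v] = 1;         /* dependent uop disturbs the cache */
}

/* The "attacker" never observes v directly; it only infers which cache
   line became hot (in reality, by timing loads). */
static int recover_secret(void) {
    for (int i = 0; i < 256; i++)
        if (cache_hot[i])
            return i;
    return -1;
}
```

In the real attack the architectural result of the load is discarded, yet the cache disturbance survives, which is exactly what this toy model's `cache_hot` array stands in for.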

Operating system implications

Most operating systems, including Windows, rely on per-page user/kernel ownership permissions as a cornerstone of enforcing privilege separation between kernel mode and user mode. A speculative side channel that enables unprivileged user mode code to infer the contents of privileged kernel memory is problematic given that sensitive information may exist in the kernel’s address space. Mitigating this vulnerability on affected, in-market hardware is especially challenging, as user/kernel ownership page permissions must be assumed to no longer prevent the disclosure (i.e., reading) of kernel memory contents from user mode. Thus, on vulnerable processors, the rogue data cache load vulnerability impacts the primary tool that modern operating system kernels use to protect themselves from privileged kernel memory disclosure by untrusted user mode applications.

In order to protect kernel memory contents from disclosure on affected processors, it is thus necessary to go back to the drawing board with how the kernel isolates its memory contents from user mode. With the user/kernel ownership permission no longer effectively safeguarding against memory reads, the only other broadly-available mechanism to prevent disclosure of privileged kernel memory contents is to entirely remove all privileged kernel memory from the processor’s virtual address space while executing user mode code.

This, however, is problematic, in that applications frequently make system service calls to request that the kernel perform operations on their behalf (such as opening or reading a file on disk). These system service calls, as well as other critical kernel functions such as interrupt processing, can only be performed if their requisite, privileged code and data are mapped in to the processor’s address space. This presents a conundrum: in order to meet the security requirements of kernel privilege separation from user mode, no privileged kernel memory may be mapped into the processor’s address space, and yet in order to reasonably handle any system service call requests from user mode applications to the kernel, this same privileged kernel memory must be quickly accessible for the kernel itself to function.

The solution to this quandary is, on transitions between kernel mode and user mode, to also switch the processor’s address space between a kernel address space (which maps the entire user and kernel address space) and a shadow user address space (which maps the entire user memory contents of a process, but only a minimal subset of kernel mode transition code and data pages needed to switch into and out of the kernel address space). The select set of privileged kernel code and data transition pages that handle the details of these address space switches, and which are “shadowed” into the user address space, are “safe” in that they do not contain any privileged data that would be harmful to the system if disclosed to an untrusted user mode application. In the Windows kernel, the usage of this disjoint set of shadow address spaces for user and kernel modes is called “kernel virtual address shadowing”, or KVA shadow, for short.

In order to support this concept, each process may now have up to two address spaces: the kernel address space and the user address space. As there is no virtual memory mapping for other, potentially sensitive privileged kernel data when untrusted user mode code executes, the rogue data cache load speculative side channel is completely mitigated. This approach is not, however, without substantial complexity and performance implications, as will later be discussed.

On a historical note, some operating systems have previously implemented similar mechanisms for a variety of different and unrelated reasons: For example, in 2003 (prior to the common introduction of 64-bit processors in most broadly-available consumer hardware), with the intention of addressing larger amounts of virtual memory on 32-bit systems, optional support was added to the 32-bit x86 Linux kernel in order to provide a 4GB virtual address space to user mode, and a separate 4GB address space to the kernel, requiring address space switches on each user/kernel transition. More recently, a similar approach, termed KAISER, has been advocated to mitigate information leakage about the kernel virtual address space layout due to processor side channels. This is distinct from the rogue data cache load issue: prior to the discovery of speculative side channels, only address space layout information, not kernel memory contents, was considered to be at risk.

KVA shadow implementation in the Windows kernel

While the design requirements of KVA shadow may seem relatively innocuous (privileged kernel-mode memory must not be mapped into the address space when untrusted user mode code runs), the implications of these requirements are far-reaching throughout Windows kernel architecture. This touches a substantial number of core facilities for the kernel, such as memory management, trap and exception dispatching, and more. The situation is further complicated by a requirement that the same kernel code and binaries must be able to run with and without KVA shadow enabled. Performance of the system in both configurations must be maximized, while simultaneously attempting to keep the scope of the changes required for KVA shadow as contained as possible. This maximizes maintainability of code in both KVA shadow and non-KVA-shadow configurations.

This section focuses primarily on the implications of KVA shadow for the 64-bit x86 (x64) Windows kernel. Most considerations for KVA shadow on x64 also apply to 32-bit x86 kernels, though there are some divergences between the two architectures. This is due to ISA differences between 64-bit and 32-bit modes, particularly with trap and exception handling.

Please note that the implementation details described in this section are subject to change without notice in the future. Drivers and applications must not take dependencies on any of the internal behaviors described below without first checking for updated documentation.

The best way to understand the complexities involved with KVA shadow is to start with the underlying low-level interface in the kernel that handles the transitions between user mode and kernel mode. This interface, called the trap handling code, is responsible for fielding traps (or exceptions) that may occur from either kernel mode or user mode. It is also responsible for dispatching system service calls and hardware interrupts. There are several events that the trap handling code must handle, but the most relevant for KVA shadow are those called “kernel entry” and “kernel exit” events. These events, respectively, involve transitions from user mode into kernel mode, and from kernel mode into user mode.

Trap handling and system service call dispatching overview and retrospective

As a quick recap of how the Windows kernel dispatches traps and exceptions on x64 processors, traditionally, the kernel programs the current thread’s kernel stack pointer into the current processor’s TSS (task state segment), specifically into the KTSS64.Rsp0 field, which informs the processor which stack pointer (RSP) value to load up on a ring transition to ring 0 (kernel mode) code. This field is traditionally updated by the kernel on context switch, and several other related internal events; when a switch to a different thread occurs, the processor KTSS64.Rsp0 field is updated to point to the base of the new thread’s kernel stack, such that any kernel entry event that occurs while that thread is running enters the kernel already on that thread’s stack. The exception to this rule is that of system service calls, which typically enter the kernel with a “syscall” instruction; this instruction does not switch the stack pointer and it is the responsibility of the operating system trap handling code to manually load up an appropriate kernel stack pointer.

On typical kernel entry, the hardware has already pushed what is termed a “machine frame” (internally, MACHINE_FRAME) on the kernel stack; this is the processor-defined data structure that the IRETQ instruction consumes and removes from the stack to effect an interrupt-return, and includes details such as the return address, code segment, stack pointer, stack segment, and processor flags on the calling application. The trap handling code in the Windows kernel builds a structure called a trap frame (internally, KTRAP_FRAME) that begins with the hardware-pushed MACHINE_FRAME, and then contains a variety of software-pushed fields that describe the volatile register state of the context that was interrupted. System calls, as noted above, are an exception to this rule, and must manually build the entire KTRAP_FRAME, including the MACHINE_FRAME, after effecting a stack switch to an appropriate kernel stack for the current thread.
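To make the frame layout concrete, here is a hedged C sketch of the hardware-pushed machine frame and its embedding in a trap frame. The five machine-frame fields (RIP, CS, RFLAGS, RSP, SS) are architecturally defined for x64 ring transitions that switch stacks; the structure names and the particular set of software-saved registers shown are illustrative, not the kernel's actual definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the processor-defined frame consumed by IRETQ.  The field
   order mirrors the architectural push order (SS pushed first, RIP
   pushed last, so RIP sits at the lowest address). */
typedef struct _MACHINE_FRAME_SKETCH {
    uint64_t Rip;
    uint64_t SegCs;
    uint64_t EFlags;
    uint64_t Rsp;
    uint64_t SegSs;
} MACHINE_FRAME_SKETCH;

/* Illustrative trap frame: software-saved volatile register state plus
   the embedded machine frame.  The real KTRAP_FRAME holds considerably
   more state; this only models the embedding. */
typedef struct _TRAP_FRAME_SKETCH {
    uint64_t Rax, Rcx, Rdx, R8, R9, R10, R11;  /* volatile registers */
    MACHINE_FRAME_SKETCH MachineFrame;
} TRAP_FRAME_SKETCH;
```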

KVA shadow trap and system service call dispatching design considerations

With a basic understanding of how traps are handled without KVA shadow, let’s dive into the details of the KVA shadow-specific considerations of trap handling in the kernel.

When designing KVA shadow, several design considerations applied for trap handling when KVA shadow was active, namely, that the security requirements were met, that performance impact on the system was minimized, and that changes to the trap handling code were kept as compartmentalized as possible in order to simplify code and improve maintainability. For example, it is desirable to share as much trap handling code between the KVA shadow and non-KVA shadow configurations as practical, so that it is easier to make changes to the kernel’s trap handling facilities in the future.

When KVA shadowing is active, user mode code typically runs with the user mode address space selected. It is the responsibility of the trap handling code to switch to the kernel address space on kernel entry, and to switch back to the user address space on kernel exit. However, additional details apply: it is not sufficient to simply switch address spaces, because the only transition kernel pages that can be permitted to exist (or be “shadowed into”) in the user address space are only those that hold contents that are “safe” to disclose to user mode. The first complication that KVA shadow encounters is that it would be inappropriate to shadow the kernel stack pages for each thread into the user mode address space, as this would allow potentially sensitive, privileged kernel memory contents on kernel thread stacks to be leaked via the rogue data cache load speculative side channel.

It is also desirable to keep the set of code and data structures that are shadowed into the user mode address space to a minimum, and if possible, to only shadow permanent fixtures in the address space (such as portions of the kernel image itself, and critical per-processor data structures such as the GDT (Global Descriptor Table), IDT (Interrupt Descriptor Table), and TSS). This simplifies memory management, as handling setup and teardown of new mappings that are shadowed into user mode address spaces has associated complexities, as would enabling any shadowed mappings to become pageable. For these reasons, it was clear that it would not be acceptable for the kernel’s trap handling code to continue to use the per-kernel-thread stack for kernel entry and kernel exit events. Instead, a new approach would be required.

The solution that was implemented for KVA shadow was to switch to a mode of operation wherein a small set of per-processor stacks (internally called KTRANSITION_STACKs) are the only stacks that are shadowed into the user mode address space. Eight of these stacks exist for each processor, the first of which represents the stack used for “normal” kernel entry events, such as exceptions, page faults, and most hardware interrupts, and the remaining seven transition stacks represent the stacks used for traps that are dispatched using the x64-defined IST (Interrupt Stack Table) mechanism (note that Windows does not use all 7 possible IST stacks presently).

When KVA shadow is active, then, the KTSS64.Rsp0 field of each processor points to the first transition stack of each processor, and each of the KTSS64.Ist[n] fields point to the n-th KTRANSITION_STACK for that processor. For convenience, the transition stacks are located in a contiguous region of memory, internally termed the KPROCESSOR_DESCRIPTOR_AREA, that also contains the per-processor GDT, IDT, and TSS, all of which are required to be shadowed into the user mode address space for the processor itself to be able to handle ring transitions properly. This contiguous memory block is, itself, shadowed in its entirety.

This configuration ensures that when a kernel entry event is fielded while KVA shadow is active, that the current stack is both shadowed into the user mode address space, and does not contain sensitive memory contents that would be risky to disclose to user mode. However, in order to maintain these properties, the trap dispatch code must be careful to push no sensitive information onto any transition stack at any time. This necessitates the first several rules for KVA shadow in order to avoid any other memory contents from being stored onto the transition stacks: when executing on a transition stack, the kernel must be fielding a kernel entry or kernel exit event, interrupts must be disabled and must remain disabled throughout, and the code executing on a transition stack must be careful to never incur any other type of kernel trap. This also implies that the KVA shadow trap dispatch code can assume that traps arising in kernel mode already are executing with the correct CR3, and on the correct kernel stack (except for some special considerations for IST-delivered traps, as discussed below).

Fielding a trap with KVA shadow active

Based on the above design decisions, there is an additional set of tasks specific to KVA shadowing that must occur prior to the normal trap handling code in the kernel being invoked for kernel entry trap events. In addition, there is a similar set of tasks related to KVA shadow that must occur at the end of trap processing, if a kernel exit is occurring.

On normal kernel entry, the following sequence of events must occur:

  1. The kernel GS base value must be loaded. This enables the remaining trap code to access per-processor data structures, such as those that hold the kernel CR3 value for the current processor.
  2. The processor’s address space must be switched to the kernel address space, so that all kernel code and data are accessible (i.e., the kernel CR3 value must be loaded). This necessitates that the kernel CR3 value be stored in a location that is, itself, shadowed. For the purposes of KVA shadow, a single per-processor KPRCB page that contains only “safe” contents maintains a copy of the current processor’s kernel CR3 value for easy access by the KVA shadow trap dispatch code. Context switches between address spaces, and process attach/detach, update the corresponding KPRCB fields with the new CR3 value on process address space changes.
  3. The machine frame previously pushed by hardware as a part of the ring transition from user mode to kernel mode must be copied from the current (transition) stack to the per-kernel-thread stack for the current thread.
  4. The current stack must be switched to the per-kernel-thread stack. At this point, the “normal” trap handling code can largely proceed as usual, without invasive modifications (save that the kernel GS base has already been loaded).

Roughly speaking, the inverse sequence of events must occur on normal kernel exit; the machine frame at the top of the current kernel thread stack must be copied to the transition stack for the processor, the stacks must be switched, CR3 must be reloaded with the corresponding value for the user mode address space of the current process, the user mode GS base must be reloaded, and then control may be returned to user mode.
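The entry steps can be sketched as a toy user-mode model, with the GS base load elided, and CR3 and the stack pointer modeled as plain fields. All type and function names here are invented for illustration; the real dispatch code is hand-written assembly operating on hardware state.

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t Rip, SegCs, EFlags, Rsp, SegSs; } FRAME;

typedef struct {
    uint64_t kernel_cr3;        /* stashed in a shadowed per-processor page */
    FRAME    transition_stack;  /* shadowed per-processor transition stack  */
} PROCESSOR;

typedef struct {
    FRAME    kernel_stack;      /* per-thread kernel stack (not shadowed)   */
} THREAD;

typedef struct {
    uint64_t cr3;               /* currently loaded address space           */
    FRAME   *rsp;               /* current stack                            */
} CPU_STATE;

static void kva_shadow_kernel_entry(CPU_STATE *cpu, PROCESSOR *prcb, THREAD *t) {
    /* 1. (kernel GS base load elided in this model)                    */
    /* 2. switch to the kernel address space                            */
    cpu->cr3 = prcb->kernel_cr3;
    /* 3. copy the hardware-pushed machine frame to the thread stack    */
    t->kernel_stack = prcb->transition_stack;
    /* 4. switch the current stack to the per-thread kernel stack       */
    cpu->rsp = &t->kernel_stack;
}
```

Kernel exit runs the inverse: copy the machine frame back to the transition stack, switch stacks, reload the user CR3, and restore the user GS base.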

System service call entry and exit through the SYSCALL/SYSRETQ instruction pair is handled slightly differently, in that the processor does not push a machine frame, because the kernel logically does not have a current stack pointer until it explicitly loads one. In this case, no machine frame needs to be copied on kernel entry and kernel exit, but the other basic steps must still be performed.

Special care needs to be taken by the KVA shadow trap dispatch code for NMI, machine check, and double fault type trap events, because these events may interrupt even normally uninterruptable code. This means that they could even interrupt the normally uninterruptable KVA shadow trap dispatch code itself, during a kernel entry or kernel exit event. These types of traps are delivered using the IST mechanism onto their own distinct transition stacks, and the trap handling code must carefully handle the case of the GS base or CR3 value being in any state due to the indeterminate state of the machine at the time in which these events may occur, and must preserve the pre-existing GS base or CR3 values.

At this point, the basics for how to enter and exit the kernel with KVA shadow are in place. However, it would be undesirable to inline the KVA shadow trap dispatch code into the standard trap entry and trap exit code paths, as the standard trap entry and trap exit code paths could be located anywhere in the kernel’s .text code section, and it is desirable to minimize the amount of code that needs be shadowed into the user address space. For this reason, the KVA shadow trap dispatch code is collected into a series of parallel entry points packed within their own code section within the kernel image, and either the standard set of trap entry points, or the KVA shadow trap entry points are installed into the IDT at system boot time, based on whether KVA shadow is in use at system boot. Similarly, the system service call entry points are also located in this special code section in the kernel image.

Note that one implication of this design choice is that KVA shadow does not protect against attacks against kernel ASLR using speculative side channels. This is a deliberate decision given the design complexity of KVA shadow, timelines involved, and the realities of other side channel issues affecting the same processor designs. Notably, processors susceptible to rogue data cache load are also typically susceptible to other attacks on their BTBs (branch target buffers), and other microarchitectural resources that may allow kernel address space layout disclosure to a local attacker that is executing arbitrary native code.

Memory management considerations for KVA shadow

Now that KVA shadow is able to handle trap entry and trap exit, it’s necessary to understand the implications of KVA shadowing on memory management. As with the trap handling design considerations for KVA shadow, ensuring the correct security properties, providing good performance characteristics, and maximizing the maintainability of code changes were all important design goals. Where possible, rules were established to simplify the memory management design implementation. For example, all kernel allocations that are shadowed into the user mode address space are shadowed system-wide and not per-process or per-processor. As another example, all such shadowed allocations exist at the same kernel virtual address in both the user mode and kernel mode address spaces and share the same underlying physical pages in both address spaces, and all such allocations are considered nonpageable and are treated as though they have been locked into memory.

The most apparent memory management consequence of KVA shadowing is that each process typically now needs a separate address space (i.e., page table hierarchy, or top level page directory page) allocated to describe the shadow user address space, and that the top level page directory entries corresponding to user mode VAs must be replicated from the process’s kernel address space top level page directory page to the process’s user address space top level page directory page.

The top level page directory page entries for the kernel half of the VA space are not replicated, however, and instead only correspond to a minimal set of page table pages needed to map the small subset of pages that have been explicitly shadowed into the user mode address space. As noted above, pages that are shadowed into the user mode address space are left nonpageable for simplicity. In practice, this is not a substantial hardship for KVA shadow, as only a very small number of fixed allocations are ever shadowed system-wide. (Remember that only the per-processor transition stacks are shadowed, not any per-thread data structures, such as per-thread kernel stacks.)
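A minimal sketch of the user-half replication, assuming the x64 convention that the low 256 of the 512 top-level entries map the user half of the address space. The flat-array representation and function name are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PML4_ENTRIES    512
#define USER_PML4_LIMIT 256  /* entries 0..255 cover the user half on x64 */

/* Replicate the user-half top-level entries from the process's kernel
   address space top-level page into its shadow user address space page.
   The kernel-half entries of the shadow page are NOT copied; in the real
   implementation they reference only the small set of page table pages
   for the shadowed transition mappings. */
static void replicate_user_top_level(const uint64_t *kernel_pml4,
                                     uint64_t *user_pml4) {
    memcpy(user_pml4, kernel_pml4, USER_PML4_LIMIT * sizeof(uint64_t));
}
```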

Memory management must then replicate any updates to top level user mode page directory page entries between the two process address spaces, as any updates occur, and access bit handling for working set aging and other purposes must logically OR the access bits from both user and kernel address spaces together if a top level page directory page entry is being considered (and, similarly, working set aging must clear access bits in both top level page directory pages if a top level entry is being considered). Similarly, memory management must be aware of both address spaces that may exist for processes in various other edge-cases where top-level page directory pages are manipulated.
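The access-bit merging rule can be sketched directly, using the architectural x64 accessed bit (bit 5 of a paging entry); the function names are invented:

```c
#include <assert.h>
#include <stdint.h>

#define PTE_ACCESSED 0x20u  /* architectural accessed bit (bit 5) */

/* A replicated top-level entry counts as accessed if either address
   space's copy has the accessed bit set. */
static int top_level_was_accessed(uint64_t kernel_e, uint64_t user_e) {
    return ((kernel_e | user_e) & PTE_ACCESSED) != 0;
}

/* Aging must clear the accessed bit in BOTH copies, or the next scan
   would see a stale bit from the un-cleared address space. */
static void top_level_clear_accessed(uint64_t *kernel_e, uint64_t *user_e) {
    *kernel_e &= ~(uint64_t)PTE_ACCESSED;
    *user_e   &= ~(uint64_t)PTE_ACCESSED;
}
```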

Finally, the kernel cannot mark any general purpose kernel allocations as “global” in their corresponding leaf page table entries: for KVA shadow protections to be effective, processors susceptible to rogue data cache load must not be able to observe, while in user mode, any cached virtual address translations for privileged kernel pages that could contain sensitive memory contents, and such global entries would still be cached in the processor translation buffer (TB) across an address space switch.

Booting is just the beginning of a journey

At this point, we have covered some of the major areas involved in the kernel with respect to KVA shadow. However, there’s much more that’s involved beyond just trap handling and memory management: For example, changes to how Windows handles multiprocessor initialization, hibernate and resume, processor shutdown and reboot, and many other areas were all required in order to make KVA shadow into a fully featured solution that works correctly in all supported software configurations.

Furthermore, preventing the rogue data cache load issue from exposing privileged kernel mode memory contents is just the beginning of turning KVA shadow into a feature that could be shipped to a diverse customer base. So far, we have only touched on the highlights of an unoptimized implementation of KVA shadow on x64 Windows. We’re far from done examining KVA shadowing, however; a substantial amount of additional work was still required in order to reduce the performance overhead of KVA shadow to the absolute minimum possible. As we’ll see, there are a number of options that have been considered and employed to that end with KVA shadow. The below optimizations are already included with the January 3rd, 2018 security updates to address rogue data cache load.

Performance optimizations

One of the primary challenges faced by the implementation of KVA shadow was maximizing system performance. The model of a unified, flat address space shared between user and kernel mode, with page permission bits to protect kernel-owned pages from access by unprivileged user mode code, is both convenient for an operating system kernel to implement, and easily amenable to high performance user/kernel transitions.

The reason why the traditional, unified address space model allows for fast user/kernel transitions relates to how processors handle virtual memory. Processors typically cache previously fetched virtual address translations in a small internal cache that is termed a translation buffer (or TB for short); some literature also refers to these types of address translation caches as translation lookaside buffers (TLBs). The processor TB operates on the principle of locality: if an application (or the kernel) has referenced a particular virtual address translation recently, it is likely to do so again, and the processor can save the costly process of re-walking the operating system’s page table hierarchy if the requisite translation is already cached in the processor TB.

Traditionally, a TB contains information that is primarily local to a particular address space (or page table hierarchy), and when a switch to a different page table hierarchy occurs, such as with a context switch between threads in different processes, the processor TB must be flushed so that translations from one process are not improperly used in the context of a different process. This is critical, as two processes can, and frequently do, map the same user mode virtual address to completely different physical pages.

KVA shadowing, however, requires switching address spaces much more frequently than operating systems traditionally have; on processors susceptible to the rogue data cache load issue, it is now necessary to switch the address space on every user/kernel transition, and such transitions are vastly more frequent events than cross-process context switches. In the absence of any further optimizations, flushing and invalidating the processor TB on each user/kernel transition would substantially reduce the benefit of the processor TB, and would represent a significant performance cost on the system.

Fortunately, the Windows KVA shadow implementation employs several techniques that substantially mitigate the performance costs of KVA shadowing on processor hardware that is susceptible to rogue data cache load. Optimizing KVA shadow for maximum performance presented a challenging exercise in finding creative ways to make use of existing, in-the-field hardware capabilities, sometimes outside the scope of their original intended use, while still maintaining system security and correct system operation.

PCID acceleration

The first optimization, the usage of PCID (process-context identifier) acceleration, is relevant to Intel Core-family processors of Haswell and newer microarchitectures. While the TB on many processors traditionally maintained information local to an address space, which had to be flushed on any address space switch, the PCID hardware capability allows address translations to be tagged with a logical PCID that informs the processor which address space they are relevant to. An address space (or page table hierarchy) can be tagged with a distinguished PCID value, and this tag is maintained with any non-global translations cached in the processor’s TB; then, on an address space switch to an address space with a different associated PCID, the processor can be instructed to preserve the previous TB contents. Because the processor requires the current address space’s PCID to match that of any cached translation in the TB when matching translation lookups, address translations from multiple address spaces can now be safely represented concurrently in the processor TB.

On hardware that is PCID-capable and which requires KVA shadowing, the Windows kernel employs two distinguished PCID values, which are internally termed PCID_KERNEL and PCID_USER. The kernel address space is tagged with PCID_KERNEL, and the user address space is tagged with PCID_USER, and on each user/kernel transition, the kernel will typically instruct the processor to preserve the TB contents when switching address spaces. This enables the preservation of the entire TB contents on system service calls and other high frequency user/kernel transitions, and in many workloads, substantially mitigates almost all of the cost of KVA shadowing. Some duplication of TB entries between user and kernel mode is possible if the same user mode VA is referenced by user and kernel code, and additional processing is also required on some types of TB flushes, as certain types of TB flushes (such as those that invalidate user mode VAs) must be replicated to both user and kernel PCIDs. However, this overhead is typically relatively minor compared to the loss of all TB entries if the entire TB were not preserved on each user/kernel transition.
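The CR3 encoding this relies on can be sketched as bit manipulation: with CR4.PCIDE set, bits 11:0 of CR3 carry the PCID, and setting bit 63 on a CR3 write instructs the processor to preserve TB entries tagged with the new PCID rather than flushing them. The specific PCID values below are illustrative, not the kernel's actual assignments:

```c
#include <assert.h>
#include <stdint.h>

#define PCID_KERNEL      1ull  /* illustrative values; the real kernel's */
#define PCID_USER        2ull  /* choices are an implementation detail   */
#define CR3_NOFLUSH_BIT  (1ull << 63)
#define PCID_MASK        0xFFFull

/* Build a CR3 value from the top-level page table's page frame number,
   a PCID, and the preserve-TB request bit. */
static uint64_t make_cr3(uint64_t top_level_pfn, uint64_t pcid, int preserve_tb) {
    uint64_t cr3 = (top_level_pfn << 12) | (pcid & PCID_MASK);
    if (preserve_tb)
        cr3 |= CR3_NOFLUSH_BIT;
    return cr3;
}
```

On a user/kernel transition, the dispatch code would load `make_cr3(pfn, PCID_KERNEL, 1)` or `make_cr3(pfn, PCID_USER, 1)`; on a cross-process switch it would omit the preserve bit so the whole TB is invalidated, matching the system-global PCID scheme described above.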

On address space switches between processes, such as context switches between two different processes, the entire TB is invalidated. This must be performed because the PCID values assigned by the kernel are not process-specific, but are global to the entire system. Assigning different PCID values to each process (which would be a more “traditional” usage of PCID) would preclude the need to flush the entire TB on context switches between processes, but would also require TB flush IPIs (interprocessor-interrupts) to be sent to a potentially much larger set of processors, specifically being all of those that had previously loaded a given PCID, which in and of itself is a performance trade-off due to the cost involved in TB flush IPIs.

It’s important to note that PCID acceleration also requires the hypervisor to expose CR4.PCID and the INVPCID instruction to the Windows kernel. The Hyper-V hypervisor was updated to expose these capabilities with the January 3rd, 2018 security updates. Additionally, the underlying PCID hardware capability is only defined for the native 64-bit paging mode, and thus a 64-bit kernel is required to take advantage of PCID acceleration (32-bit applications running under a 64-bit kernel can still benefit from the optimization).

User/global acceleration

Although many modern processors can take advantage of PCID acceleration, older Intel Core family processors, and current Intel Atom family processors do not provide hardware support for PCID and thus cannot take advantage of that PCID support to accelerate KVA shadowing. These processors do allow a more limited form of TB preservation across address space switches, however, in the form of the “global” page table entry bit. The global bit allows the operating system kernel to communicate to the processor that a given leaf translation is “global” to the entire system, and need not be invalidated on address space switches. (A special facility to invalidate all translations including global translations is provided by the processor, for cases when the operating system changes global memory translations. On x64 and x86 processors, this is accomplished by toggling the CR4.PGE control register bit.)

Traditionally, the kernel would mark most kernel mode page translations as global, in order to indicate that these address translations can be preserved in the TB during cross-process address space switches while all non-global address translations are flushed from the TB. The kernel is then obligated to ensure that both incoming and outgoing address spaces provide consistent translations for any global translations in both address spaces, across a global-preserving address space switch, for correct system operation. This is a simple matter for the traditional use of kernel virtual address management, as most of the kernel address space is identical across all processes. The global bit, thus, elegantly allows most of the effective TB contents for kernel VAs to be preserved across context switches with minimal hardware and software complexity.

In the context of KVA shadow, however, the global bit can be used for a completely different purpose than its original intention, for an optimization termed “user/global acceleration”. Instead of marking kernel pages as global, KVA shadow marks user pages as global, indicating to the processor that all pages in the user mode half of the address space are safe to preserve across address space switches. While an address space switch must still occur on each user/kernel transition, global translations are preserved in the TB, which preserves the user TB entries. As most applications primarily spend their time executing in user mode, this mode of operation preserves the portion of the TB that is most relevant to most applications. The TB contents for kernel virtual addresses are unavoidably lost on each address space switch when user/global acceleration is in use, and as with PCID acceleration, some TB flushes must be handled differently (and cross-process context switches require an entire TB flush), but preserving the user TB contents substantially cuts the cost of KVA shadowing over the more naïve approach of marking no translations as global.

Privileged process acceleration

The purpose of KVA shadowing is to protect sensitive kernel mode memory contents from disclosure to untrusted user mode applications. This is required for security purposes in order to maintain privilege separation between kernel mode and user mode. However, highly-privileged applications that have complete control over the system are typically trusted by the operating system for a variety of tasks, up to and including loading drivers, creating kernel memory dumps, and so on. These applications effectively already have the privileges required in order to access kernel memory, and so KVA shadowing is of minimal benefit for these applications.

KVA shadow thus optimizes highly privileged applications (specifically, those that have a primary token which is a member of the BUILTIN\Administrators group, which includes LocalSystem, and processes that execute as a fully-elevated administrator account) by running these applications only with the KVA shadow “kernel” address space, which is very similar to how applications execute on processors that are not susceptible to rogue data cache load. These applications avoid most of the overhead of KVA shadowing, as no address space switch occurs on user/kernel transitions. Because these applications are fully trusted by the operating system, and already have (or could obtain) the capability to load drivers that could naturally access kernel memory, KVA shadowing is not required for fully-privileged applications.

Optimizations are ongoing

The introduction of KVA shadowing radically alters how the Windows kernel fields traps and exceptions from a processor, and significantly changes several key aspects of memory management. While several high-value optimizations have already been deployed with the initial release of operating system updates to integrate KVA shadow support, research into additional avenues of improvement and opportunities for performance tuning continues. KVA shadow represents a substantial departure from some existing operating system design paradigms, and with any such substantial shift in software design, exploring all possible optimizations and performance tuning opportunities is an ongoing effort.

Driver and application compatibility

A key consideration of KVA shadow was that existing applications and drivers must continue to work. Specifically, it would not have been acceptable to change the Windows ABI, or to invalidate how drivers work with user mode memory, in order to integrate KVA shadow support into the operating system. Applications and drivers that use supported and documented interfaces are highly compatible with KVA shadow, and no changes to how drivers access user mode memory through supported and documented means are necessary. For example, under a try/except block, it is still possible for a driver to use ProbeForRead to probe a user mode address for validity, and then to copy memory from that user mode virtual address (under try/except protection). Similarly, MDL mappings to/from user mode memory still function as before.

A small number of drivers and applications did, however, encounter compatibility issues with KVA shadow. By and large, the majority of incompatible drivers and applications used substantially unsupported and undocumented means to interface with the operating system. For example, Microsoft encountered several software applications from multiple software vendors that assumed that the raw machine instructions in certain, non-exported Windows kernel functions would remain static or unchanged with software updates. Such approaches are highly fragile and are subject to breaking at even slight perturbations of the operating system kernel code.

Operating system changes like KVA shadow, that necessitated a security update which changed how the operating system manages memory and trap and exception dispatching, underscore the fragility of depending on highly unsupported and undocumented mechanisms in drivers and applications. Microsoft strongly encourages developers to use supported and documented facilities in drivers and applications. Keeping customers secure and up to date is a shared commitment, and avoiding dependencies on unsupported and undocumented facilities and behaviors is critical to meeting the expectations that customers have with respect to keeping their systems secure.

Conclusion

Mitigating hardware vulnerabilities in software is an extremely challenging proposition, whether you are an operating system vendor, driver writer, or an application vendor. In the case of rogue data cache load and KVA shadow, the Windows kernel is able to provide a transparent and strong mitigation for drivers and applications, albeit at the cost of additional operating system complexity, and especially on older hardware, at some potential performance cost depending on the characteristics of a given workload. The breadth of changes required to implement KVA shadowing was substantial, and KVA shadow support easily represents one of the most intricate, complex, and wide-ranging security updates that Microsoft has ever shipped. Microsoft is committed to protecting our customers, and we will continue to work with our industry partners in order to address speculative execution side channel vulnerabilities.

Ken Johnson, Microsoft Security Response Center (MSRC)

Update Management – Part 5 – Distributing and Making Updates Available


Continuing the update process: now that our WSUS and SUP are installed and working, we need to have the clients use the SUP to identify required updates, and then make those updates available.

Let's start by configuring a policy that enables Software Updates on SCCM clients so that they use the SUP.

In this example we are using Default Settings, the default SCCM policy that every client receives, but you can configure a custom policy targeted at a single collection.

Enable software updates on clients - Controls whether the SCCM client uses the SUP for updates. This policy adds the WSUS address to the client's local GPO. For this to work, the client must not have any domain policy pointing to a WSUS server, even if that WSUS is the SCCM one; let the SCCM client take care of adding this information.

Software update scan schedule - Schedules the client scan process. The default of 7 days is recommended; Microsoft does not release so many updates per week as to justify changing this value. SCCM can run the scan at the configured time, or up to 2 hours later, preventing all machines from hitting the WSUS server at the same time.

Schedule deployment re-evaluation - Schedules a check on the client that all required updates are installed. If someone has removed a required update for any reason, this process installs it again, ensuring update compliance.

When any software update deployment deadline is reached, install all other software update deployments with deadline coming within a specified period of time - Uses the deadline of one update to also install other updates scheduled for later, according to the time configured in the option Period of time for which all pending deployments with deadline in this time will also be installed.

 

Creating a Software Update Group (SUG)

SCCM can deploy a single update or a group of updates; we recommend using groups, for better organization. To create a group, go to Software Library -> Software Updates -> All Software Updates.

A list of updates will be shown. Select the updates you want to deploy, right-click, and choose the “Create Software Update Group” option.

Enter the name of the group you want to create. For better organization, follow a naming standard for your groups.

Go to Software Library -> Software Updates -> Software Update Group and note that the new group is listed.

Downloading updates

Right-click the group you created (you can also click individual updates if needed). Click Download.

You can create an update package by providing a UNC path, or use an existing package. Note that, unlike an application package, the client will not download the entire package, only the updates flagged as required by the client's scan process.

Under Distribution Points, select the DP or DP group that will receive the package.

Under Distribution Settings, choose the distribution priority; unless the distribution is extremely urgent, keep the default.

Choose where the updates will be downloaded from. Prefer downloading via the Internet.

Choose the update language. Next.

Confirm the summary information and click Next.

Under Deployment Packages you will see the newly created package; verify that distribution to the DP completed successfully.

Deploying updates

Back in the Software Update Group, right-click and select Deploy.

Choose the collection that will receive the updates. Optionally, you can change the deployment name to something friendlier.

Select the deployment type. Required makes installation mandatory, with no choice for the user; Available makes the update available for the user to start the installation.

Choose the time at which the update becomes available to the user, and the deadline, which is the cut-off date for the deployment to start; when the deadline is reached, the update is forced onto the machine.

Under User Experience there are two points of attention. The first is User Notifications: whether or not the user is informed about the installation. Every update is applied in the background from the user's perspective, but there are notifications about available updates, restarts, and so on.

The other is Device Restart Behavior: check these options if you do not want the machines to restart.

If your goal is for the user to receive the restart message, remember to set the notification to display the restart screen and uncheck the 'Workstations' option.

On the alert screen, keep the defaults. Next.

On the Download Settings screen, you specify whether the client may download from a neighboring boundary group or from the Default Boundary Group when the update package is not available on the DP the client uses. I recommend that the client use only its own boundary, and therefore its own DP for downloads; this avoids network traffic between sites.

Confirm the deployment.

Deployment done, it's time to monitor. Go to Monitoring -> Deployments and select the desired deployment.

With this we have covered the basics of making updates available in your environment. Next week we will explore the Software Update flow; that step will be fundamental when it comes to troubleshooting the update process.

 


Content edited and published by:
Richard Juhasz
Microsoft PFE
Configuration Manager

Support-Info: (CONNECTORS): Supported Active Directory (AD) Version for Active Directory Management Agent (AD MA)


All,

I recently fielded a question concerning the Active Directory Management Agent and the lowest supported Active Directory Version.  I felt that this would be some good information to share here as well.

NOTE: As of March 2018, Windows Server 2008 R2 SP1 is out of mainstream support. Yes, it is in extended support, so connecting to a Windows Server 2008 R2 SP1 Active Directory will still work, but given the support lifecycle it is not recommended for long-term identity management strategies.
https://support.microsoft.com/en-us/lifecycle/search?alpha=Windows%20Server%202008%20R2%20Service%20Pack%201

Forefront Identity Manager 2010 R2

Refer to the Supported Platforms for FIM 2010 R2 SP1 document: https://blogs.technet.microsoft.com/iamsupport/forefront-identity-manager-2010-r2-sp1-supported-platforms/

Active Directory for user provisioning, PCNS and GAL Sync (optional)
  • Windows Server 2008 R2 SP1
  • Windows Server 2012

Microsoft Identity Manager 2016:

The Supported Platforms for MIM 2016 document: https://docs.microsoft.com/en-us/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms

Active Directory for User Provisioning, PCNS and GalSync (Optional)
  • Windows Server 2008 R2 SP1 (Please refer to the note above)
  • Windows Server 2012
  • Windows Server 2012 R2, R2 SP1
  • Windows Server 2016 * (You must be at MIM 2016 SP1)

The Connect to your directories document: https://docs.microsoft.com/en-us/microsoft-identity-manager/supported-management-agents

Active Directory Domain Services: Active Directory 2012, 2016

 


New Microsoft 365 Training Video


Nick Johnson, PTS

Greetings Partners!

Be sure to check out the new Microsoft 365 Security Training video featuring Brad Anderson, our CVP of Enterprise Mobility.

In this comprehensive overview of the Microsoft 365 Security offering, Brad shares how he talks to customers about the unique and powerful M365 Security story. He offers an in-depth look at identity-driven security, information protection, threat protection, and security management.

Brad also speaks at length about how he describes M365, use cases, and he shows over two dozen demos in great detail. These demos include scenarios for Azure AD Identity Protection, Azure Active Directory MFA, Windows Hello, Intune enrollment, accessing/labeling/classifying/tracking sensitive content, Conditional Access, Cloud App Security, Azure ATP, threat remediation/mitigation with Office 365, and Windows Defender – just to name a few.

If you only watch one video on M365 Security, this is the one to watch.

Exploring the Identity & Access dashboard in Azure Security Center


In Azure Security Center you can use the Identity & Access dashboard to explore more details about your identity posture. In this dashboard you have a snapshot of your identity related activities as shown in the example below:

Just by looking at this dashboard you can draw some conclusions; for example, all failed logons were due to an invalid username or password. However, by looking at the accounts under the Failed logons section, I can see that none of these accounts exist in my environment (of course, you need knowledge of the environment to conclude that). This can be an indication of an attempt to brute force the authentication by trying different usernames and passwords. But what if this were a large organization, and you just didn't know all the accounts? The follow-up question might be: is it possible to know whether it was just the username that was wrong? Yes, it is! Follow the steps below to find out:

1. In the Identity & Access dashboard, click the Failed Logon Reasons chart.

2. Log Analytics search will open with the results of the following query:

SecurityEvent | where AccountType == 'User' and EventID == 4625 and (FailureReason has '2313')

3. Below you have an example of the query result:

4. Click show more in one of the records.

For this example, the FailureReason field is %%2313, which means Unknown user name or bad password. The %% numbers you see in the FailureReason field are replacement string placeholders used to put localized values in when generating the display text for messages. Microsoft-Windows-Security-Auditing uses the msobjs.dll for storing those localized values. To find out what this code means you can use the open source tool msgdump and dump the message table from msobjs.dll to a txt file (big thanks to Ted Hardy for showing me how to make this conversion):

msgdump %windir%\system32\msobjs.dll > msobjsdll.txt

After that open the file and search for the code, in my case 2313:

This is cool, but still not showing if it was only the user name that was wrong! True, and for that you should look at the SubStatus field:

 

The status for this one is 0xc0000064, which means: User logon with misspelled or bad user account. How do I know? Well, this is the easy part: it is documented here, and if you don’t find it there, you can find it here. With that we can conclude that it really was a bad username, which increases the likelihood that this was a dictionary attack (which is basically a brute force attack). Keep in mind that this is just one step in the investigation; in the same record, make sure to review the source of this authentication (computer, IP, process). Also take advantage of other Security Center capabilities, such as Security Alerts and Security Incidents, and use the Investigation feature to find all the entities involved in the attack.
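For quick triage, the handful of SubStatus values you meet most often can be kept in a small lookup table. Below is an illustrative Python sketch; the code-to-meaning pairs follow the documented NTSTATUS values for event 4625, but the table and function name are my own, not an official API.

```python
# Common event 4625 SubStatus values and their documented meanings
# (illustrative subset; see the Windows security auditing documentation
# for the full list).
SUBSTATUS = {
    0xC0000064: "User logon with misspelled or bad user account",
    0xC000006A: "User logon with misspelled or bad password",
    0xC0000071: "User logon with expired password",
    0xC0000072: "User logon to account disabled by administrator",
    0xC0000234: "User logon with account locked",
}

def explain_substatus(code):
    """Translate a SubStatus value (int or '0xc0000064' string) to text."""
    if isinstance(code, str):
        code = int(code, 16)
    return SUBSTATUS.get(code, "Unknown SubStatus 0x%08X" % code)

print(explain_substatus("0xc0000064"))
# prints: User logon with misspelled or bad user account
```

A table like this makes it easy to distinguish a bad-username pattern (dictionary attack) from repeated bad passwords against accounts that do exist (password spray against known users).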

Giving back is better with great partners [Updated 3/24]


(This article is a translation of Giving back is better with great partners, published on the Microsoft Partner Network blog on December 18, 2017. For the latest information, please see the original page.)

 

 

A growing sense of corporate social responsibility is a wonderful thing, but sustaining that awareness all year round, rather than momentarily, makes for an ideal model of running a business.

Many consumers and employees agree with this approach. A survey of millennials found that:

  • 91% prefer brands with a sense of purpose
  • 62% would accept lower pay to work for a responsible company

 

Microsoft partners with and supports companies committed to giving back. Microsoft partners Synergy Technical and Pickit have made social responsibility part of their brand's mission.

 

Working together for a better world

Founded in 2011, Synergy Technical is a "born in the cloud" company whose motto is "solve problems, don't sell products." CEO Rohana Meade says employees are eager to take on projects that change how the world thinks.

The company's first customer was a small nonprofit supporting people with autism. Synergy implemented Office 365 and helped solve problems such as:

  • Reducing spending
  • Disaster recovery
  • Smoothing communication among employees working in different locations

A few years later, at a networking event, Meade met a woman who had adopted a child with autism through that organization. She says she felt proud that Synergy had been able to contribute to that work, and realized what a powerful force a shared mission of giving back can be for a company. Knowing that an organization you supported has grown and gone on to make its own contribution is motivating for employees as well.

"The best way to build team cohesion is to share a mission of contributing to society."

– Rohana Meade, CEO, Synergy

 

 

Empowering artists, teachers, and students

Pickit, a Swedish company co-founded by Mathias Björkholm and Henrik Bergqvist, shares Synergy's strong sense of social responsibility, solving market problems with the Microsoft Office suite and developing image-based products.

Many people get images from search engines such as Google and Bing, but few realize those images are protected by copyright. Pickit works with a global network of photographers and small image banks to offer Microsoft Office users images they can use without infringing copyright, along with curated collections. Pickit pays 60% of image revenue to the photographers. Based on Björkholm's belief in education, it also offers Pickit Edu free of charge, so that students and teachers can freely use safe, creative images.

How to raise awareness of giving back in your organization

Start by asking employees what kind of contribution they could make. This not only gathers ideas but also builds momentum toward giving back. It can take many forms, such as time, money, discounts, or something more abstract; decide on goals and costs together, based on employees' input. At Synergy, several employees gave up vacation time to build a technology solution for a charity. Pickit takes a more conceptual approach, bringing together teams that want to build new solutions and forward-thinking employees to change how the public thinks about images.

Every company fulfills its social responsibility in its own way, but strengthening your organization's sense of purpose and working as a whole team to make society better is a great benefit for employees, companies, and the world.

Join the Microsoft Partner Community and share your thoughts.

 

 

 

Connect Your IoT Devices to Cloud with Azure IoT Hub


IoT is not exactly new; in fact, companies were doing it long before it was called IoT. Microsoft Azure IoT Hub is part of the Azure IoT suite and offers several services for connecting IoT devices to Azure services, processing incoming messages, and sending messages to devices. From a device perspective, Azure IoT Hub enables simple and secure connection of IoT devices to Azure services by facilitating bidirectional communication between the devices and the hub.

IoT Hub has many features that let you connect IoT devices to the cloud with very little effort:

  • Provides multiple device-to-cloud and cloud-to-device communication options. These options include one-way messaging, file transfer, and request-reply methods.
  • Provides built-in declarative message routing to other Azure services.
  • Provides a queryable store for device metadata and synchronized state information.
  • Enables secure communications.
  • Provides extensive monitoring for device connectivity and device identity management events.
  • Includes device libraries for the most popular languages and platforms.

For IoT services, Azure offers two kinds of hubs: IoT Hub and Event Hubs. They differ in connectivity and in how devices use them. IoT Hub supports two-way communication, whereas Event Hubs is mostly used for device-to-cloud ingestion only. Take the file upload scenario: Event Hubs does not support it. IoT Hub provides device SDKs for a large variety of platforms and languages, in addition to direct MQTT, AMQP, and HTTPS APIs, while Event Hubs is supported on .NET, Java, and C, in addition to AMQP and HTTPS send interfaces.

You may wonder why you should use Azure IoT Hub for IoT device connectivity at all.

In IoT Hub, we register devices with a unique device name (known as the device ID), which lets you send messages from a Raspberry Pi or an Arduino. IoT Hub keeps all the information about each registered device: its state, its configuration, and everything else that is needed. It also provides highly secure connectivity between the device and the cloud: each registered device has its own authentication and its own secured connection. This makes sending messages from a device easy; no dozens of lines of code are needed. Best of all, several languages are supported through SDKs: C#, Java, JavaScript, and Python.

Consider a scenario where we need sensor data from a thousand devices all over the country. Did you know that IoT Hub supports 500,000 simultaneously connected devices? Data can flow in from all of them at once.
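As a small taste of what sending telemetry looks like in code, here is a hedged Python sketch using the azure-iot-device SDK. The connection string, device ID, and telemetry schema are placeholders of mine; the SDK calls (`IoTHubDeviceClient.create_from_connection_string`, `connect`, `send_message`, `disconnect`) are from the actual package.

```python
import json
import time

def build_telemetry(device_id, temperature_c):
    """Build a simple device-to-cloud message body (schema is illustrative)."""
    return json.dumps({
        "deviceId": device_id,
        "temperature": temperature_c,
        "timestamp": int(time.time()),
    })

def send_telemetry(connection_string, payload):
    """Send one message to IoT Hub (requires the azure-iot-device package)."""
    from azure.iot.device import IoTHubDeviceClient, Message
    client = IoTHubDeviceClient.create_from_connection_string(connection_string)
    client.connect()
    client.send_message(Message(payload))
    client.disconnect()

# Example usage (needs a real device connection string from IoT Hub):
#   conn = "HostName=<your-hub>.azure-devices.net;DeviceId=pi-01;SharedAccessKey=<key>"
#   send_telemetry(conn, build_telemetry("pi-01", 21.5))
```

With a real connection string from your hub's device identity, the message then appears in the hub's device-to-cloud event stream.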

Pretty awesome. Stay tuned for the next post, which will show how to send and receive data with Azure IoT Hub using a real example.

References: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-get-started-simulated 

Two months until GDPR (the EU General Data Protection Regulation) takes effect! A roundup of useful resources [Updated 3/25]


The GDPR (EU General Data Protection Regulation) takes effect on May 25, 2018. GDPR aims to protect and establish individuals' right to privacy, and it is a law that sets out various requirements for controlling how personal data is managed and protected. Organizations that provide services or products to EU residents are in scope, so Japanese companies may also be subject to it. GDPR compliance is also an opportunity to support your customers, so we have compiled the resources you need on GDPR. Please make good use of them.


Strike first! Learning GDPR from the basics: the fundamentals and the cases that require action

To help you get a head start on learning GDPR from the basics, this booklet clearly summarizes concrete examples of how the regulation applies, FAQs, and checkpoints. It explains the general fundamentals of GDPR, independent of any Microsoft products.

 

Download here


The impact of GDPR and four steps toward compliance

What must organizations do to comply with GDPR? This eBook presents, in four easy-to-understand steps, the measures you can take with Microsoft cloud services, focusing on the solutions included in Microsoft 365 E5 and the capabilities you can use on the way to GDPR compliance.

Download here


"100 days until enforcement: the shortcut to EU General Data Protection Regulation (GDPR) compliance" seminar report

The time has come to understand the essential points of GDPR readiness and to determine which solutions to use on the way to compliance. This eBook, structured around these two points, is a summary report of the GDPR seminar held on February 14, 2018.

 

Download here


A guide to GDPR compliance for Microsoft Azure

Microsoft cloud services such as Azure (like other cloud services, and the on-premises solutions outside the scope of this guide) can help you identify and catalog the personal data in your systems, build a more secure environment, and simplify the management of GDPR compliance.

Download here (English)


Related information

 
