
Office 365 Advanced Threat Protection for SharePoint, OneDrive and Microsoft Teams now available


When moving your organization to cloud services, security concerns add another layer of consideration: one of trust.

Security and compliance are an ongoing process, not a steady state. They are constantly maintained, enhanced, and verified by highly skilled, experienced, and trained personnel. We strive to keep software and hardware technologies up to date through robust processes. To help keep Office 365 security at the top of the industry, we use processes such as the Security Development Lifecycle; we also employ techniques that throttle traffic and prevent, detect, and mitigate breaches.

At Microsoft, we continue a systematic approach to disrupting attacks: we eliminate weaknesses, and the attack vectors themselves, by implementing architectural changes, some of which leverage virtualization, containers, and other technologies.

In April 2015 we launched Office 365 Advanced Threat Protection to help customers secure their environments from evolving security threats. It provides protection against unknown malware and viruses, real-time, time-of-click protection against malicious URLs, and rich reporting and URL trace capabilities.

In our continued effort to address the modern threat landscape, today we’re announcing General Availability of Office 365 Advanced Threat Protection for SharePoint, OneDrive, and Microsoft Teams.

Office 365 Advanced Threat Protection for SharePoint, OneDrive, and Microsoft Teams uses signals and smart heuristics as quality indicators to identify files within your tenant that may contain malicious content. It does this by correlating file activity signals from SharePoint, OneDrive, and Microsoft Teams within your tenant with threat feeds from the Microsoft Security Intelligence Graph.

Examples of file activity signals include anonymous, company-wide, or explicit sharing, and activity from guest users. The threat feeds that Office 365 Advanced Threat Protection leverages include known malware in email or SharePoint, Windows Defender/Windows Defender ATP detections, suspicious or risky sign-ins, and other indicators of irregular file activity within your tenant.

Getting Started

Office 365 Advanced Threat Protection for SharePoint, OneDrive, and Microsoft Teams can be configured in the Office 365 Security and Compliance Center.

Learn more about configuring Office 365 Advanced Threat Protection for SharePoint, OneDrive, and Microsoft Teams at https://support.office.com/en-us/article/Office-365-ATP-for-SharePoint-OneDrive-and-Microsoft-Teams-26261670-db33-4c53-b125-af0662c34607?ui=en-US&rs=en-US&ad=US.

Resources

Office 365 Advanced Threat Protection overview [https://support.office.com/en-us/article/Office-365-Advanced-Threat-Protection-overview-e100fe7c-f2a1-4b7d-9e08-622330b83653?ui=en-US&rs=en-US&ad=US]

Advanced Threat Protection safe attachments in Office 365 [https://support.office.com/en-us/article/ATP-safe-attachments-in-Office-365-6E13311E-92AE-495E-A619-56D770199170]

FAQ

Can I block download of infected files in Office 365?

There is a tenant-level configuration that allows or blocks the download of an infected file. This configuration is leveraged by the different native user experiences that are triggered within SharePoint Online, OneDrive for Business, and Teams. Tenant admins can update the setting using PowerShell. Refer to https://technet.microsoft.com/en-us/library/fp161390.aspx and the DisallowInfectedFileDownload parameter for additional details.
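As a minimal sketch, assuming the SharePoint Online Management Shell is installed and you have already connected with Connect-SPOService (the admin URL below is a placeholder), the setting can be changed like this:

```powershell
# Assumed sketch: block download of files that ATP has flagged as infected.
# Requires the SharePoint Online Management Shell and a connected session,
# e.g. Connect-SPOService -Url https://contoso-admin.sharepoint.com
Set-SPOTenant -DisallowInfectedFileDownload $true

# Confirm the current value of the setting.
Get-SPOTenant | Select-Object DisallowInfectedFileDownload
```

Setting the parameter back to $false restores the default behavior of allowing downloads.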

Is there a licensing requirement for ATP?

ATP is included in Office 365 Enterprise E5 and Office 365 Education A5. You can add ATP to the following Exchange and Office 365 subscription plans:

  • Exchange Online Plan 1
  • Exchange Online Plan 2
  • Exchange Online Kiosk
  • Exchange Online Protection
  • Office 365 Business Essentials
  • Office 365 Business Premium
  • Office 365 Enterprise E1
  • Office 365 Enterprise E3
  • Office 365 Enterprise F1
  • Office 365 Education A1
  • Office 365 Education A3

To buy Office 365 Advanced Threat Protection, see Office 365 Advanced Threat Protection.

To compare features across plans, see Compare Office 365 for Business plans.


Office 365: Enabling and creating a distribution list for first release users.


I had a customer present an inquiry to us on the management and communication of first release settings and changes.  The customer's goals were to:

 

  • Manage first release settings through PowerShell to enable and disable users in bulk.
  • Create a mail enabled security group for mail distribution and permissions to other applications in the service.
  • Create a method to add and remove users to and from the distribution group based on their first release settings.

 

The challenges here were immediately noted.  At this time, first release settings cannot be managed through Azure AD PowerShell (either version 1 or version 2); our only option is to manage user additions through the portal GUI.  In addition, the ability to create mail-enabled security groups is only available within the Exchange Online portal – you cannot at this time use Exchange Online PowerShell to provision a mail-enabled security group.  Even though there are some challenges present, I believe they are not insurmountable – let us take a look….

 

To begin, we must ensure that the first release settings of the tenant are established correctly.  You can utilize this reference for first release settings in Office 365: https://support.office.com/en-us/article/Set-up-the-Standard-or-First-Release-options-in-Office-365-3B3ADFA4-1777-4FF0-B606-FB8732101F47.  In our instance we are going to enable the first release option only on a chosen subset of users.  This will allow us to control who sees first release options while leaving the rest of the user population on standard release.

 


 

Please note that it may take a while for the portal wizard to complete this transformation.

 

The next step in the process is the creation of the mail-enabled security group.  Logging into the Exchange Online portal, we can select the groups management option.  Here we will find the option to create a mail-enabled security group.  I recommend that this group be a cloud-only group and be assigned a domain.onmicrosoft.com address – although this concept could be modified for a group that is directory synchronized.  A cloud-only group will allow us to modify and manage membership directly through Office 365.  If the group is sourced on premises, you would have to execute the group management commands on premises instead.

 


 

In our example I am creating a mail enabled security group called FirstRelease.

 

PS C:\> Get-DistributionGroup FirstRelease

Name         DisplayName  GroupType                  PrimarySmtpAddress                             
----         -----------  ---------                  ------------------                             
FirstRelease FirstRelease Universal, SecurityEnabled FirstRelease@contoso.onmicrosoft.com

 

With the first release settings adjusted and the mail-enabled security group in place, we can begin populating the first release settings for our first user set.  The first release settings allow for a BULK ADD option utilizing a CSV file.  The CSV file contains a list of the user principal names to which we want to apply the first release settings.  The CSV file has no header row – the first entry is the first user to add.
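A sketch of building that header-less CSV from PowerShell (the user principal names and the file path here are placeholders):

```powershell
# Hypothetical helper: write the chosen pilot users' UPNs to a header-less
# CSV that the BULK ADD wizard can import. Set-Content writes one line per
# value and, unlike Export-Csv, adds no header row.
$pilotUsers = @(
    "bmoran@contoso.org",
    "cjohnson@contoso.org",
    "tmcmichael@contoso.org"
)
$pilotUsers | Set-Content -Path "Z:\FirstReleaseBulkAdd.csv"
```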

 


 

With the CSV file created and populated with the initial user set the portal can be utilized to load the file.  Under our first release settings we can select ACTIONS –> + BULK ADD.

 


 

The BULK ADD wizard will start.

 


 

The browse button can be utilized to select and locate the CSV file to import.

 


 

Once the CSV file has been selected, the verify option can be utilized to identify any potential errors.

 


 

With confirmation that no errors have been found, the next button will complete the changes.

 


 

The wizard in this case has updated three users to have first release applied.

 

It is important to note at this point that there is no BULK REMOVE option.  If you need to remove users, it must be done via the portal by selecting ACTIONS –> MANAGE PEOPLE FOR FIRST RELEASE.  Users can be removed from the first release option by selecting the X next to their name.

 


 


 


 

It may take some time for the first release settings to provision to users.  I recommend allowing an hour to ensure that the changes are appropriately applied and replicated prior to proceeding.

 

With the first release settings established on the users, the initial distribution list population can begin.  The files created in this process will be important to the maintenance process I will outline below.

 

To begin the distribution list population, we will create a file of all users with the first release setting set, along with their objectIDs.

 

PS C:\> $firstReleaseUsers=Get-MsolUser -All | where {$_.releasetrack -eq "StagedRolloutOne"} | Select-Object userPrincipalName,objectID

PS C:\> $firstReleaseUsers

UserPrincipalName          ObjectId                           
-----------------          --------                           
bmoran@contoso.org         2f7416c5-682c-46b4-b8f8-40b8ee03079e
cjohnson@contoso.org       3b5a9963-7fa1-4094-8a0b-f4219d8ecfe0
tmcmichael@contoso.org     61425db0-7812-49dd-b6aa-1a732bdec569

 

The users' proxy addresses are then gathered from their objectIDs.  Using this method we can remove any ambiguity about the recipient's class – for example, a mailbox within the service versus a mailbox that has yet to be migrated (a mail user).

 

PS C:\> $firstReleaseSMTP=$firstReleaseUsers | % { $recipientID=$_.objectID.toString() ; Get-Recipient -Identity $recipientID } | Select-Object primarySMTPAddress

 

PS C:\> $firstReleaseSMTP

PrimarySmtpAddress       
------------------       
bmoran@domain.org   
cjohnson@domain.org 
tmcmichael@domain.org

With the list of proxy addresses we should have the appropriate recipients to add to the distribution list.

 

PS C:\> $firstReleaseSMTP | % { Add-DistributionGroupMember -Identity FirstRelease@domain.onmicrosoft.com -Member $_.primarySMTPAddress -Verbose }

VERBOSE: Adding distribution group member "bmoran@domain.org" on distribution group "FirstRelease@domain.onmicrosoft.com".
VERBOSE: Adding distribution group member "cjohnson@domain.org" on distribution group "FirstRelease@domain.onmicrosoft.com".
VERBOSE: Adding distribution group member "tmcmichael@domain.org" on distribution group "FirstRelease@domain.onmicrosoft.com".

 

The new distribution list has now been populated with our first set of first release users.

 

PS C:\> Get-DistributionGroupMember -Identity FirstRelease@domain.onmicrosoft.com

Name              RecipientType
----              -------------
Timothy McMichael UserMailbox 
Bill Moran        UserMailbox 
Courtney Johnson  UserMailbox
 

 

The final step of this process is to establish the list of proxy addresses that served as the original population of the distribution list.  This CSV file will serve as the basis of comparison for automated management moving forward.

 

$firstReleaseSMTP | Export-Csv -Path z:\FirstReleaseMembers.CSV

The CSV file should be populated with the addresses previously contained in the variable.

 

image

 

==========================================================================================================================

 

I am going to make an assumption that the list of first release users will change over time.  What I wanted to address here was a method to automate the updating of the distribution list associated with first release.  We have already covered that removing users from the first release settings in the portal can only be done manually – we cannot utilize a CSV file to remove a user.  The BULK ADD option could be utilized if multiple additions were required.  The script outlined below will:

 

  • Take a CSV file that represents the users that were previously first release enabled and digest it.
  • Take a CSV file that represents the newly enabled and currently enabled users and digest it.
  • Remove the users from the distribution group that were removed.
  • Add the users to the distribution group that were added.
  • Save the updated user state to the CSV file that will serve as the comparison moving forward.

 

In the example above, bmoran and cjohnson were enabled for first release.  They currently exist in the FirstReleaseMembers.csv file that was populated from the initial load of the distribution group above.  Using the portal, we will remove bmoran and cjohnson and then add Heather and Ray.  This should generate the following actions in the script:

 

  • Remove two users.
  • Add two users.

 

Let’s take a look.

 

Here is the distribution group membership before modifying the list and running the script.

 

PS C:\> Get-DistributionGroupMember -Identity FirstRelease@domain.onmicrosoft.com

Name              RecipientType
----              -------------
Timothy McMichael UserMailbox 
Bill Moran        UserMailbox 
Courtney Johnson  UserMailbox
 

 

Here is the distribution list membership after running the script.

 

PS C:\> Get-DistributionGroupMember -Identity FirstRelease@domain.onmicrosoft.com

Name              RecipientType
----              -------------
Timothy McMichael UserMailbox 
Ray Bleau         UserMailbox 
Heather Egner     UserMailbox 

In this case the distribution list now reflects the updates that were performed via the portal. 

 

==========================================================================================================================

 

The script can be found below for your reference.

 

#===========================================================
#
# Script to automate some management of first release.
#
# Timothy McMichael
# Microsoft
#
# The script assumes that a CSV file of users was initially created to load the first release settings.
# Once the initial first release settings are loaded - the script will dump first release users and compare to the previous set.
# The users are then added to a distribution list for communication purposes and collaboration on first release settings.
#
#===========================================================

#Set the variables to their values.

$firstReleaseOriginalPath="z:\" #Path where the original users CSV file is stored.
$firstReleaseNewPath="z:\" #Path where the new users CSV file is stored.
$firstReleaseOriginalFile="FirstReleaseMembers.csv" #File name for the original users CSV file.
$firstReleaseNewFile="FirstReleaseNewMembers.csv" #File name for the updated users CSV file.
$firstReleaseOriginalCSV=$firstReleaseOriginalPath+$firstReleaseOriginalFile #Full path to the original users CSV file.
$firstReleaseNewCSV=$firstReleaseNewPath+$firstReleaseNewFile #Full path to the updated users CSV file.
$logFilePath="z:\" #Path where the log file for the script should go.
$logFileName="ChangeLog.txt" #Name of the log file for the script.
$logFile=$logFilePath+$logFileName #Full path of the log file.
$firstReleaseDistributionGroupName="FirstRelease@domain.onmicrosoft.com" #Name of the distribution group expressed as the primary SMTP address of the group.
$currentDate=Get-Date #Variable holding the current date.
$currentDate=$currentDate.ToString('MM-dd-yyyy_hh-mm-ss') #Current date converted to a format usable for file names.
$firstReleaseOriginalCSVRename=($currentDate+"_Original.csv") #New file name to rename the original file to, to preserve information.

 

#Begin creation of log file and write out all initial variable states

Add-Content -Path $logFile -Value "======================================================================================="
Add-Content -Path $logFile -Value $currentDate
Add-Content -Path $logFile -Value ("First Release Original Path: "+$firstReleaseOriginalPath)
Add-Content -Path $logFile -Value ("First Release New Path: "+$firstReleaseNewPath)
Add-Content -Path $logFile -Value ("First Release Original File Name: "+$firstReleaseOriginalFile)
Add-Content -Path $logFile -Value ("First Release New File Name: "+$firstReleaseNewFile)
Add-Content -Path $logFile -Value ("First Release Original CSV: "+$firstReleaseOriginalCSV)
Add-Content -Path $logFile -Value ("First Release New CSV: "+$firstReleaseNewCSV)
Add-Content -Path $logFile -Value ("Log File Path: "+$logFilePath)
Add-Content -Path $logfile -Value ("Log File Name: "+$logFileName)
add-content -Path $logFile -Value ("Log File: "+$logFile)

#Begin Processing by capturing all users that are now enabled for first release.
#Users with release track StagedRolloutOne are enabled for first release.

$firstReleaseUsers = Get-msolUser -all | where {$_.releaseTrack -eq "StagedRolloutOne"} | select-object userprincipalName,ObjectID

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "The following users were returned as enabled for first release:"
Add-Content -Path $logFile -Value $firstReleaseUsers
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

#Capture the SMTP addresses of all users that are now enabled for first release.

$firstReleaseSMTP = $firstReleaseUsers | % { $recipientID=$_.objectID.tostring() ; get-recipient -Identity $recipientID } | Select-Object primarySMTPAddress

#Export the SMTP addresses of all users that are now enabled for first release.

$firstReleaseSMTP | Export-Csv -Path $firstReleaseNewCSV

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "The following users were returned as enabled for first release (proxy addresses):"
Add-Content -Path $logFile -Value $firstReleaseSMTP
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

#Import the CSV files generated into working variables.

$firstReleaseOriginalSMTP=Import-Csv -Path $firstReleaseOriginalCSV
$firstReleaseNewSMTP=import-csv -Path $firstReleaseNewCSV

#Perform file compares.
#For each user removed, compare-object will log side indicator <= as it appears in the left file but not the right file.
#For each user added, compare-object will log side indicator => as it appears in the right file but not the left file.

$firstReleaseChanges=Compare-Object $firstReleaseOriginalSMTP $firstReleaseNewSMTP -Property PrimarySMTPAddress

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "The following is the change matrix: "
Add-Content -Path $logFile -Value $firstReleaseChanges
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

$firstReleasedRemoved = $firstreleasechanges | where {$_.sideindicator -eq "<="}

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "The following users are to be removed: "
Add-Content -Path $logFile -Value $firstReleasedRemoved
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

$firstReleasedAdded = $firstreleasechanges | where {$_.sideindicator -eq "=>"}

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "The following users are to be added: "
Add-Content -Path $logFile -Value $firstReleasedAdded
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

#Record distribution group members.

$members=Get-DistributionGroupMember -Identity $firstReleaseDistributionGroupName
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value $members
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

#Begin processing the distribution list removes.

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "BEGIN REMOVING USERS: "
$firstReleasedRemoved | % { Remove-DistributionGroupMember -Identity $firstReleaseDistributionGroupName -Member $_.primarySMTPAddress -Confirm:$FALSE -Verbose ; Add-Content -path $logFile -Value ("Removed User: "+$_.primarySMTPAddress) }
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

#Begin processing the distribution list adds.

Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value "BEGIN ADDING USERS: "
$firstReleasedAdded | % { Add-DistributionGroupMember -Identity $firstReleaseDistributionGroupName -Member $_.primarySMTPAddress -Confirm:$FALSE -Verbose ; Add-Content -path $logFile -Value ("Added User: "+$_.primarySMTPAddress) }
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

#Record distribution group members.

$members=Get-DistributionGroupMember -Identity $firstReleaseDistributionGroupName
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"
Add-Content -Path $logFile -Value $members
Add-Content -Path $logFile -Value "+++++++++++++++++++++++++++++++++++++++++++"

Rename-Item $firstReleaseOriginalCSV -NewName $firstReleaseOriginalCSVRename
Rename-Item $firstReleaseNewCSV -NewName $firstReleaseOriginalFile

#===========================================================

 

 

==========================================================================================================================

Learn How to submit a Technical/Professional Support Incident


This article explains how to submit a Technical/Professional Support Incident.

 

Product Support Incidents
Use your Product Support Incidents to help resolve specific symptoms encountered while using Microsoft software (where there is a reasonable expectation that the problems are caused by Microsoft products). Product support incidents provide reactive support that focuses on a specific problem, error message, or functionality that is not working as intended.

For non-urgent requests, partners can save their product support incidents by using the Partner Support Community, with a guaranteed initial response from Microsoft support professionals (see above).

There are two types of Microsoft Partner Network product support incident benefits:

  1. Product Support Incidents, for hybrid and on-premises competency partners, can be used for all supported products.
  2. Signature Cloud Support Incidents, for cloud or hybrid competency partners, can be used for cloud products only.

 

ACTION: Read the complete article

 

 

Office 365: Organization Queue Quota Exceeded


In Office 365 there are several throttling limits that administrators may encounter that impact mail flow.  One of these is the Organization Queue Quota Exceeded limit.

 

When organizations send a large amount of mail through the service – either by relay or direct submission – and the service is unable to relay it (mail is deferred for some reason: a transient failure, a 4yz SMTP response code, connectivity issues, etc.), the messages are queued for delivery and retried until they expire after 48 hours.  For example, if a large number of emails are addressed to a domain with a valid MX record but no host answering at that name, the messages will remain in the queue.  As queues start to reach certain thresholds across our transport environment, we begin to defer newly submitted mail with a queue quota exceeded response (the Sender Queue quota if the queued mail is from a specific sender, or the Organization Queue quota if there are multiple senders).

 

This throttling will continue to occur until the queues begin to drain.  The queues may drain either through delivery of the messages to their intended targets or as the messages expire in the service.  In some cases it may be necessary to consult with product support services to help identify this condition and provide remediation.

 

The throttling limits are in place to ensure that mail queued for one tenant does not start impacting mail delivery for all other tenants on the shared server infrastructure.

 

The way to resolve such an incident is to identify the bad mail (typically a storm or large amount of auto-generated mail) that is getting stuck in queues in our service.  Then resolve the reason for it being queued – address the delivery issue, or stop the auto-generated mail at the source if it was sent to invalid recipients.  The service will auto-recover and start allowing new mail to be submitted.  In some cases it may be necessary to consult with product support services on the resolution.
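A hedged sketch of hunting for the source: assuming a connected Exchange Online PowerShell session, message trace can surface recently submitted mail that is still pending delivery, grouped by sender to make a mail storm stand out.

```powershell
# Assumed sketch: summarize mail from the last two days that is still in a
# Pending state, grouped by sender, to help spot a storm of stuck messages.
Get-MessageTrace -StartDate (Get-Date).AddDays(-2) -EndDate (Get-Date) -Status Pending |
    Group-Object SenderAddress |
    Sort-Object Count -Descending |
    Select-Object -First 10 Name, Count
```

A single sender dominating the pending counts is a strong hint of auto-generated mail that should be stopped at the source.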


Creating and Managing Security and Compliance Filters in the Real World [Part 2]


Picking up where I left off in part 1 of this post, I wanted to go into what it would take to refine some roles for managing eDiscovery for larger organizations.

In this scenario, we're going to:

  • Remove users from any existing eDiscovery roles or groups
  • Create a security group to hold users that will perform eDiscovery searches
  • Create a custom role group that has the appropriate eDiscovery roles and add the security group as a member
  • Verify

If you didn't read the previous blog post on this topic, I'd encourage you to go back and do so, since I'm going to continue using the same users and compliance filters.

Connecting to Office 365

Since all of the configuration is performed from PowerShell, the first thing we need to do is connect to PowerShell for both the Security & Compliance Center and Exchange Online.  It's important to note that the Security & Compliance Center has some cmdlets that overlap with the Exchange Online cmdlets.  In my normal configuration, I use "-Prefix SCC" to differentiate the commands exported by the Security & Compliance Center so I know which ones I'm executing.  For the purposes of this session, we're going to rely on the -DisableNameChecking and -AllowClobber parameters to allow the Security & Compliance cmdlets to take precedence over the Exchange cmdlets with the same names.

To do this, I'm just using a simple connect script.  You can add this as a function to your PowerShell profile if that makes your life better.  I just saved this code as EXOSCC.PS1, saved a credential object, and then ran it.

param(
    [System.Management.Automation.PSCredential]$UserCredential = (Get-Credential)
)
# Connect to Exchange Online.
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell-liveid -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -DisableNameChecking
# Connect to the Security & Compliance Center; -AllowClobber lets its cmdlets
# take precedence over the Exchange cmdlets with the same names.
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.compliance.protection.outlook.com/powershell-liveid -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -AllowClobber -DisableNameChecking
$Host.UI.RawUI.WindowTitle = $UserCredential.UserName + " (Exchange Online + Compliance Center)"

Once you're connected, verify that you have the cmdlets for both endpoints available to you. You can try something like Get-Mailbox to verify Exchange Online connectivity and Install-UnifiedCompliancePrerequisite to verify connectivity to the Security & Compliance Center:

We're going to be using the Management Roles cmdlets to create and modify the Security & Compliance Center cmdlets.  Because we've overwritten the default Exchange cmdlets, you'll want to make sure you have the Security & Compliance version of Get-ManagementRole and Get-RoleGroup:

If the Exchange version is loaded, this is what you'll see:

It's not what you need, so you'll have to backtrack and figure out how you got the wrong cmdlets loaded.  Do not pass go.  Do not collect $200.  Do not delegate eDiscovery.

However, if you see this, things are looking up and you're ready to go:
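One way to check is generic PowerShell: ask which imported module supplies each cmdlet.  (The temporary module names are generated by implicit remoting, so yours will differ; this is a sketch, not the post's original screenshot.)

```powershell
# Each Import-PSSession creates a temporary module; the cmdlet you actually
# invoke comes from whichever session was imported last with -AllowClobber.
Get-Command Get-ManagementRole, Get-RoleGroup |
    Format-Table Name, CommandType, ModuleName
```

If both cmdlets resolve to the module created by the compliance-endpoint import, you have the right versions loaded.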

Remove Users from Existing Role Groups

If you're starting with the same users/configuration from the previous post, you'll want to clear their role group memberships and remove any existing Security & Compliance Roles.

First, take a look at the role groups.  The default role group that we used (that had the necessary permissions) is the eDiscovery Manager role group:

Get-RoleGroupMember eDiscoveryManager

Remove the test users from the role group with the Remove-RoleGroupMember cmdlet:

Remove-RoleGroupMember -Identity ediscoverymanager -Member searchuserstartswithd
Remove-RoleGroupMember -Identity ediscoverymanager -Member searchusercustserv
Remove-RoleGroupMember -Identity ediscoverymanager -Member searchusermarketing

Security Group

Create a mail-enabled security group – either on-premises and synchronized or cloud-based is fine.  It can't be a dynamic distribution group, since DDGs don't have a security token associated with them.  Boo-fricken'-hoo.

To do this in my environment, I just added our members from the previous post into an array object, and then created a new mail-enabled security group with the array as the members property.

$Members = @('searchusercustserv@ems340903.onmicrosoft.com','searchuserstartswithd@ems340903.onmicrosoft.com','searchusermarketing@ems340903.onmicrosoft.com')
New-DistributionGroup -Name 'eDiscovery Delegated Users' -Type Security -Members $Members -DisplayName 'eDiscovery Delegated users'

Role Group

Once you have the security group created with members, you can create a role group and add the security group as a member.  When you create a role group, you must assign the roles at creation.  If you don't assign roles, you won't get any errors, but you also won't get any role groups.  It's quite the exercise in futility, if you're into that sort of thing.  Note: You can also add your newly-created security group to an existing role group.

You'll want to identify the role permissions to add.  In this case, we're going to add:

  • Case Management
  • Compliance Search
  • Export
  • Hold
  • Preview
  • Review
  • RMS Decrypt

New-RoleGroup -Name "eDiscovery Delegated Users" -Description "eDiscovery Delegated Users" -DisplayName "eDiscovery Delegated Users" -Members eDiscoveryDelegatedUsers@EMS340903.onmicrosoft.com -Roles Export,Hold,'Case Management','Compliance Search','RMS Decrypt',Preview,Review

Verify

To verify, log into the Security & Compliance Center and follow the steps in the previous post to confirm that the searches work as expected.

Then, have a pizza party.

 

Roadmap for Skype for Business capabilities coming to Microsoft Teams now available


(この記事は 2017 10 24 日に Microsoft Teams Blog に投稿された記事 Roadmap for Skype for Business capabilities coming to Microsoft Teams now available の翻訳です。最新情報については、翻訳元の記事をご参照ください。)

9 月に開催された Microsoft Ignite において、マイクロソフトは新しい Intelligent Communications 構想を発表しました。この構想には、Microsoft Teams が Office 365 におけるコミュニケーションおよび共同作業支援機能の中心的なクライアントになるという内容が盛り込まれています。今回は、Teams に Microsoft Skype for Business の機能を追加するための、計画中のロードマップについて詳細をご紹介しますので、移行計画を策定するうえでご活用ください。

メッセージング: Teams では現在、常設チャット、1 対 1 のプライベート チャット、グループ チャットなど、豊富なインスタント メッセージング機能が提供されています。2018 年第 2 四半期末までに、Teams に追加のメッセージング機能を実装する予定です。追加予定の機能には、チャット中の画面共有、企業間のフェデレーションなどが含まれます。

会議: Teams では現在、画面共有、会議後のチャネルへの会議チャットの記録、電話会議のプレビューなど、共同作業を促進する会議機能が提供されています。2018 年第 2 四半期末までに、Microsoft Teams に追加の会議機能を実装する予定です。追加予定の機能には、Skype Room System での会議室のサポート、サードパーティ製の会議室用デバイスから Teams 会議に接続するためのクラウド ビデオとの相互運用性機能などが含まれます。

通話: Teams では現在、多数の通話機能が提供されています。今四半期後半には、Teams のボイスメールをリリースする予定です。また、2018 年第 2 四半期末までには、既存の音声通信回線を使用して Office 365 の通話サービスを有効にできるようになります。

Teams に既存の Skype for Business のコア機能を追加する以外に、Teams に向けて新しい Intelligent Communications 機能を実装する予定です。Ignite の Microsoft Teams および Skype for Business の一般セッション (英語) でもご紹介したように、お客様は会議を録画して Teams に保存したり、議事録を追加したり、会議から重要な用語を検索したりできるようになります。これらの機能は、2018 年第 2 四半期末にロールアウトされます。

As part of the Intelligent Communications vision, we are also taking this opportunity to simplify the names of our paid communications services. Going forward, PSTN Conferencing becomes "Audio Conferencing", Cloud PBX becomes "Phone System", and PSTN Calling becomes "Calling Plans". These names are intended to be intuitive for both IT and end users as we further integrate communications and collaboration.

If you haven't tried it yet, we encourage you to start using Teams today, either standalone or side by side with Skype for Business.
For the full list of capabilities planned, see the public Office 365 roadmap (in English). The roadmap (in English) can also be downloaded from the Intelligent Communications FastTrack portal (in English); use it to help plan your move to Microsoft Teams.
Also, join the Microsoft Teams "Ask Microsoft Anything" (in English) on October 25 at 9:00 AM Pacific Time (October 26, 1:00 AM Japan time), and watch Teams On Air (in English) live on October 27 at 9:00 AM Pacific Time (October 28, 1:00 AM Japan time). In both sessions, the Microsoft Teams team will walk through the roadmap in more detail and answer your questions.

Note: The information in this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

Microsoft Cloud Platform News Roundup, November 2017 [Updated 12/6]


Here is a selection of the latest news from the Microsoft marketing team's official blog for server and cloud product and service announcements, the Cloud and Server Product Japan Blog, and its English-language source, the Cloud Platform News Bytes Blog.

 

Recent updates:

  • MS Cloud News Roundup – General availability of Azure Reserved Virtual Machine Instances, and more (2017/11/15)
    • Public preview of new Azure IoT Edge capabilities
    • General availability of Azure Cloud Shell
    • Azure security and operations management updates: Visual Studio Team Services Release Management integration with Application Insights
    • Preview of the Apache® Cassandra API for Azure Cosmos DB
    • General availability of 99.999% read availability for Azure Cosmos DB
    • Preview of Azure Databricks
    • MongoDB API for Azure Cosmos DB: Aggregation Pipeline preview and Unique Index capability
    • General availability of the Spark connector for Azure Cosmos DB
    • General availability of the Table API for Azure Cosmos DB
    • General availability of Azure Time Series Insights
    • Preview of SQL Operations Studio
    • Cognitive Services updates announced at Connect();
    • General availability of Visual Studio App Center
    • Preview of Azure DevOps Projects in Visual Studio Team Services
    • General availability of new Visual Studio subscription benefits
    • General availability of new Visual Studio Dev Essentials benefits
    • Preview of hosted Mac build pools in Visual Studio Team Services
    • Announcing Visual Studio Live Share
    • Preview of release approvals in Visual Studio Team Services
    • General availability of Team Foundation Server 2018
    • General availability of the Database Import Service for Team Foundation Server (for Visual Studio Team Services)
    • Preview of the Visual Studio Team Services command-line tool
    • Announcing container innovations in Azure Container Service and Azure Container Registry
    • Azure Functions support on IoT Edge
    • Azure Functions support on Linux
    • General availability of Azure Functions Proxies
    • General availability of the new support center experience in Azure App Service
    • Azure Advisor: enhancements to the personalized best practices service
    • Preview of Azure Database Migration Service
    • General availability of Azure Active Directory Conditional Access
    • Retirement of the Event Confirmation API in Multi-Factor Authentication Server
    • General availability of Azure Reserved Virtual Machine Instances
    • Retirement of the Direct SDK in Multi-Factor Authentication Server
  • MS Cloud News Roundup (2017/11/8)
    • Azure Analysis Services | Scale out
    • Power BI Desktop | Esri Plus—GA
    • Power BI Report Server | October 2017 update—GA
    • Azure API Management announces hourly billing and new basic pricing tier
    • App Service on Azure Stack—GA
    • Azure Batch | Low priority virtual machines (VMs)—GA
    • System Center Preview update
    • Azure Log Analytics | Monitor ExpressRoute connections—public preview
    • Azure Cosmos DB | Storage Explorer + Cosmos DB—public preview
    • Azure SQL DB learns and adapts | Query auto tuning—GA
    • Azure SQL Database | Transactional replication—GA

 

Milestone abbreviations:

  • GA (General Availability): generally available, officially released
  • RTM (Release To Manufacturing): generally available, officially released (software)
  • CTP (Community Technology Preview): limited beta, limited preview
  • TP (Technical Preview): limited beta, limited preview
  • EOS (End of Support): end of support, end of service

 

 

For past roundups, see the Cloud and Server Product Japan Blog tag.

For roundups of the latest product updates, see the Latest Updates tag.

 

 


Outlook Instant Search returns unexpected results, or results change when the environment changes


Hello, this is the Microsoft Japan Outlook support team.

In Outlook and Exchange Server environments, we often receive inquiries that searches do not return the expected results, or that the results changed after the environment changed. In this article we explain why this behavior occurs.

This article covers the search feature known as Instant Search.

Note: Searches performed in other windows, such as Advanced Find, behave differently in detail, so be aware that their results may differ.

About word breaking

Unlike English, Japanese text is not divided into words by spaces; apart from punctuation, a sentence is written as one continuous string. For Japanese search, this continuous text must first be broken into search keywords, a process known as word breaking.

For example, take the sentence 「彼は営業畑の人間だ」 ("He is a sales person"). Ideally it would be split as 彼 / は / 営業 / 畑 / の / 人間 / だ, and perhaps 営業畑 ("the sales field") could reasonably be kept as a single token. Performing this segmentation programmatically, however, is a highly sophisticated task, and an unexpected word-break result such as 彼 / は / 営 / 業畑 / の / 人 / 間 / だ is also possible.

Word breaking is not specific to Microsoft products such as Windows, Outlook, or Exchange Server; it is a consideration for any technology that performs Japanese text search.

The reason it matters is performance: running a full-text scan of all target data for every user-entered keyword would make searches unacceptably slow. Word-breaking the target data in advance and keeping a search index yields far better search speed.
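To make this concrete, here is a minimal Python sketch (a toy inverted index, not Microsoft's actual word breaker) showing that a pre-tokenized index only matches when the query is word-broken the same way as the indexed text:

```python
def build_index(doc_id, tokens, index):
    """Record a document's word-broken tokens in an inverted index."""
    for token in tokens:
        index.setdefault(token, set()).add(doc_id)

def search(query_tokens, index):
    """Return ids of documents containing ALL query tokens."""
    results = None
    for token in query_tokens:
        docs = index.get(token, set())
        results = docs if results is None else results & docs
    return results or set()

index = {}
# Suppose the server word-broke the sentence as 営業 / 畑 (one plausible split).
build_index("mail-1", ["彼", "は", "営業", "畑", "の", "人間", "だ"], index)

# Client breaks the query the same way: the mail is found.
print(search(["営業", "畑"], index))  # {'mail-1'}

# Client keeps 営業畑 as one token: same query text, but no match.
print(search(["営業畑"], index))      # set()
```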

When word breaking takes place

So when is word breaking performed?

One case, as described above, is when the search index is built from the target data (in Outlook and Exchange Server environments, the subject, body, attachments, and so on of mail and meeting items).

In Outlook and Exchange Server environments, the behavior differs depending on whether Cached Exchange Mode is enabled:

Cached mode enabled: the search runs against an index held on the client.
Cached mode disabled (online mode): the search runs against an index held on the Exchange server.

To build these search indexes, the client and the server word-break each new piece of data (such as incoming mail) as it arrives, using Windows OS components such as Windows Search, and add the results to the index.

Besides index creation, there is one more point at which word breaking is needed:
when the user types a search string into the Instant Search box. (*)

For example, when "営業畑" is typed into the search box, word breaking decides whether it should be split into "営業" and "畑", treated as the single keyword "営業畑", or split some other way, such as "営" and "業畑".

The string entered into Instant Search is word-broken on the client by Windows OS components such as Windows Search, regardless of whether cached mode is enabled. (*)

(*) With the combination of Outlook 2016 and Exchange Server 2016 or later, search changes to what is called Fast Search, and in cached mode the user-entered string is instead word-broken on the Exchange Server side.

Why the expected results are not returned

To provide better search results, Microsoft continually updates its word-breaking behavior.
As a result, word-break output can differ depending on the exact version of the Windows OS or Windows Search and on the language environment.

When Outlook runs in online mode, if the word-break result for the user-entered search term (produced on the client) does not match the index built by word breaking on the Exchange Server side, the expected search results may not be returned.

Even in cached mode, where the same client-side OS components such as Windows Search perform the word breaking on both sides so that this kind of mismatch does not normally arise, an unexpected word-break result can still produce search results the user did not expect.
For example, if "営業畑" typed into the Instant Search box is broken into "営" and "業畑", items that merely contain the single character "営" in the body or an attachment may also appear in the results.

Why results differ between environments

As noted above, word-break output can vary with the exact version of the Windows OS or Windows Search and with the language environment, and these factors affect search results.
In addition, the exact version of Outlook can change how the word-broken keywords are handled internally in the search query, so results may differ when the user's environment changes.

Getting the expected search results

As described above, word breaking is a major theme for any technology that deals with search.
Microsoft works continually to improve word-breaking accuracy so that users get the results they expect.

To get the expected search results as consistently as possible in your environment, we recommend the following:

- Align the client OS and the Exchange Server OS to releases from roughly the same period wherever possible, and apply updates for the OS, Windows Search, Outlook, and Exchange Server regularly.

See also the following on Outlook Instant Search:

- Reference
Title : Find a message or item with Instant Search
URL : <https://support.office.com/ja-jp/article/Find-a-message-or-item-with-Instant-Search-69748862-5976-47b9-98e8-ed179f1b9e4d>


The information in this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

Launching preview of Azure Migrate


By Shon Shah (Principal Program Manager Lead, Azure Migrate)

This post is a translation of Launching preview of Azure Migrate, published on November 27.

 

At Microsoft Ignite 2017, we announced Azure Migrate, a new service that provides guidance, insights, and mechanisms to assist you in migrating to Azure. The service had been available as a limited preview, letting customers who requested access try it while we collected feedback. We take your input seriously and thank everyone who took the time to share it.

Today the Azure Migrate preview opens up: anyone can now use the service without requesting access.

Azure Migrate can discover Windows and Linux virtual machines (VMs) virtualized on VMware without using any agents. Agent-based discovery is also supported, which lets you visualize the dependencies of a single VM or a group of VMs and easily identify multi-tier applications.

Application-centric discovery is a useful starting point for migration, but it is not enough for informed decision-making. Azure Migrate lets you quickly assess and answer three questions:

   Readiness: determine whether a VM is suitable for running in Azure.

   Right-sizing: determine the appropriate Azure VM size based on historical CPU, memory, disk (throughput and IOPS), and network utilization.

   Cost: calculate the recurring cost of running in Azure, taking discounts such as Azure Hybrid Benefit into account.

Assessment is not all the service offers. It can also point you to workload-specific migration services: Azure Site Recovery (ASR) for servers and Azure Database Migration Service (DMS) for databases. ASR enables application-aware server migration with minimal downtime and without impacting your environment during test migrations. DMS is a guided, easy-to-use solution for migrating on-premises SQL databases to Azure.

After migration is complete, Azure services such as Azure Security Center, Azure Cost Management, and Azure Backup help you keep your VMs protected and well managed.

Azure Migrate is available at no additional charge, is supported for production deployments, and is offered in the West Central US region. You can still use the service to migrate resources in regions where Azure Migrate itself is not offered; for example, you can create a migration project in West Central US and discover and assess VMs in West US 2, UK West, or Japan East.

To get started, create a migration project in the Azure portal:

Azure Migrate (preview)

Related information

•         For the latest details, see the documentation (in English).

•         If you need help, post a question to the forum (in English) or contact Microsoft Support.

•         If you have feedback, share it on UserVoice (in English), and please vote on ideas in the forum as well.

 

We hope Azure Migrate helps with your migration to Azure.

 

[GDPR Demopalooza] Cloud App Security


Based on the GDPR Demopalooza, here is the demo for Cloud App Security (CAS).

As always, split into two parts: the "Why" and the "How".

 

Why

Especially for GDPR, where the point is to know exactly what happened to data (what, where, when, and by or through whom), the topic of "shadow IT" must not be neglected. Precisely in the transition from an on-premises to a mobile way of working, it happens in 99.999% of companies that data is shared via cloud services (consumer OneDrive, Dropbox, etc.) because users found no other way to "quickly" share larger documents with external parties (ideally for business purposes 😉).

In other words, the forced limitations that arose from a lack of adaptation, willingness, or ability (for example, the all-too-common "the cloud is evil" prejudice) practically forced users to find their own way (aka shadow IT). As a result, IT has typically lost (at least in part) both visibility into and control over its data.

This is exactly where Cloud App Security comes in: first, to gain insight into the cloud services not sanctioned by IT, and second, to regain control over them.

@Interested customers: we are happy to help you find partners who can support this topic end to end. Please contact your dedicated Microsoft representative.

@Interested partners: we are happy to help you build the necessary readiness so that you can deliver this topic with and for customers. Please contact your PDM/PTS, or Andreas or me directly.

How

  1. Open an in-private browser.
  2. Go to: Cloud App Security
  3. As a first step, we want to find out which services are actually being used from our network. For this we need log files from our proxy or firewall; CAS supports the most common vendor formats.
    To run the analysis, click "Discover" -> Create Snapshot Report at the top.
  4. In this step, a suitable log file can be uploaded for a "snapshot" analysis. For time reasons we only show what the result could look like; with a suitable log and enough time, this can of course also be done "for real". For the short demo, select "View sample report" in the lower right corner.
    IMPORTANT: especially as long as the CISO *and* the works council have not yet given their consent, be sure to select the "Anonymize private information" option!
  5. Now that we know the services [and, outside this demo, have signed in to the external services and connected our CAS to their management APIs; that is not feasible in a short demo, so I only reference directly connected Microsoft services here, but these could just as well be supported 3rd-party cloud services], we should take action.
    To do so, click "Investigate" in the header and select, for example, "Microsoft OneDrive for Business".
  6. Then select the "Files" tab.
  7. Here you get deeper insight into the data stored in the selected service. Above all, files that violate policy, e.g. "everyone read", should be analyzed first. To do so, under "Access level" select "External" or "Public".
    CAS filter options
  8. Clicking "x Collaborators" reveals the current sharing settings.
    CAS file sharing overview
  9. We now want to change this, because this sharing violates our data policy. Close the sharing dialog and click the three vertical dots at the top right of the file to open its management dialog.
    CAS file action dialog
  10. Finally, click "Remove external users" to cut off the unrestricted, unmanaged access.

 

Note

For shadow IT discovery, numerous Microsoft partners offer a so-called "Shadow IT Assessment"; if you are interested, please contact your Microsoft representative.

 

This demo guide provides an overview of using Cloud App Security in the context of GDPR and does not constitute a legally binding statement!

Context menus added by Outlook add-ins are not displayed in Click-to-Run Outlook 2016 versions 1706 through 1708


Hello, this is the Microsoft Japan Outlook support team.

In some versions of Outlook 2016 from Office 365 ProPlus (Click-to-Run), Outlook add-ins that display a context menu on right-click do not work.
Note: This issue has not been observed in Outlook 2016 from Office 2016 Professional Plus (the MSI-based installation).

For how to tell Click-to-Run and MSI installations apart, see the following blog post:
Title : How to tell a Click-to-Run (C2R) installation from a Windows Installer (MSI) installation
URL : https://blogs.technet.microsoft.com/officesupportjp/2016/09/08/howto_c2r_or_msi/
Note: The screenshots show Excel, but the same method works for Outlook.

 

Details of the issue

- Behavior when the issue does not occur
First, the figure below shows the behavior when the issue does not occur.
Suppose an Outlook add-in is installed that displays a context menu item labeled [My Dynamic Menu] when you right-click the contact portion of an item.
The menu is displayed correctly and can be run by clicking it.
In the example below, running it displays a pop-up.


 

- Behavior when the issue occurs

Next, the figure below shows the behavior when the issue occurs.
Because of a bug in Outlook, the same operation does not display the context menu content, and it cannot be run.

 

Affected versions and how to check your version

- Affected versions
The following versions are confirmed to be affected:
 Versions 1706 / 1707 / 1708
The issue is confirmed not to occur in version 1705 and earlier, or in version 1709 and later.

- How to check your version
You can check the version of Click-to-Run Outlook 2016 from the following screen:
[File]-[Office Account]
The figure below shows this screen.
Here you can see which version, build, and channel you are using.

[Monthly Channel] [Deferred Channel] [Semi-Annual Channel (Targeted)]

See also:
Title : What version of Outlook do I have?
URL : https://support.office.com/ja-jp/article/b3a9568c-edb5-42b9-9825-d48d82b2257c

 

Workaround

The issue cannot be avoided through settings, so you need to switch to a version in which it does not occur.
The steps differ depending on the channel you are currently using.

 

- If you are using [Monthly Channel] (formerly [Current Channel])

Update to the latest version.

Steps
~~~~
1. Click [File]-[Office Account]-[Update Options].
If [Enable Updates] is displayed, click it. If it is not displayed, continue to the next step.
2. Click [Update Now]. The latest version of Office 365 ProPlus for your channel is downloaded from the Internet and installed.

 

- If you are using [Deferred Channel] (renamed [Semi-Annual Channel] as of January 2018)
 or [Semi-Annual Channel (Targeted)] (formerly [First Release for Deferred Channel])

Because no release date has yet been set for version 1709 or later on these channels, change to version 1705, where the issue does not occur.

Steps
~~~~
1. Close all Office applications.
2. Open a Command Prompt with administrator privileges:
a. Click the [Start] button.
b. Type "cmd". It is entered into the search box automatically.
c. Command Prompt appears as the best match; right-click it and click [Run as administrator].
3. At the command prompt, run the following command:
cd "%programfiles%\Common Files\Microsoft Shared\ClickToRun"
4. Run the following command:
For [Deferred Channel]:
officec2rclient.exe /update user updatetoversion=16.0.8201.2209
For [Semi-Annual Channel (Targeted)]:
officec2rclient.exe /update user updatetoversion=16.0.8201.2171

5. When the download and automatic installation of that version of Office completes, start Outlook.
6. Click [File]-[Office Account]-[Update Options], then click [Disable Updates].
Note: This step is extremely important. Be sure to perform it to prevent the latest version of Office from being automatically reinstalled.

 

Once version 1709 or later is released for [Deferred Channel] and [Semi-Annual Channel (Targeted)], perform the following steps:

Steps
~~~~
1. Click [File]-[Office Account]-[Update Options].
Click [Enable Updates].
2. Click [Update Now]. The latest version of Office 365 ProPlus for your channel is downloaded from the Internet and installed.

- References
Title : Overview of update channels for Office 365 ProPlus
URL : https://support.office.com/ja-jp/article/9ccf0f13-28ff-4975-9bd2-7e4ea2fefef4

Title : Overview of the upcoming changes to Office 365 ProPlus update management
URL : https://support.office.com/ja-jp/article/78b33779-9356-4cdf-9d2c-08350ef05cca

Title : Version and build numbers of update channel releases
URL : https://support.office.com/ja-jp/article/ae942449-1fca-4484-898b-a933ea23def7

 

The information in this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

Announcing Hybrid Modern Authentication for Exchange On-Premises


We’re very happy to announce support for Hybrid Modern Authentication (HMA) with the next set of cumulative updates (CU) for Exchange 2013 and Exchange 2016, that’s CU8 for Exchange Server 2016, and CU19 for Exchange Server 2013.

What is HMA?

HMA (not HAM, which Word keeps trying to correct it to for me) provides users the ability to access on-premises application using authorization tokens obtained from the cloud. For Exchange (that’s why you’re here right?), this means on-premises mailbox users get the ability to use these tokens (OAuth tokens specifically) for authentication to on-premises Exchange. Sounds thrilling I know, but what exactly are these tokens? And how do users get hold of them?

Rather than repeat many things here, I’m going to suggest you take a break and read the How Hybrid Authentication Really Works post, and if you really want to help boost my YouTube viewing numbers, watch this Ignite session recording too. They will respectively give you a pretty solid grounding in OAuth concepts and to help you understand what HMA is really all about.

See how much space we saved in this post by sending you somewhere else?

If you ignored my advice, the tl;dr version is this: HMA enables Outlook to obtain Access and Refresh OAuth tokens from Azure AD (either directly for password hash sync or Pass-Through Auth identities, or from their own STS for federated identities) and Exchange on-premises will accept them and provide mailbox access.

How users get those tokens, what they have to provide for credentials, is entirely up to you and the capabilities of the identity provider (iDP) – it could be simple username and password, or certificates, or phone auth, or fingerprints, blood, eyeball scanning, the ability to recite poetry, whatever your iDP can do.

Note that the user’s identity has to be present in AAD for this to work, and there is some configuration required that the Exchange Hybrid Configuration Wizard does for us. That’s why we put the H in HMA, you need to be configured Hybrid with Exchange Online for this feature.

It’s also worth knowing that HMA shares much of the same technology with the upcoming Outlook mobile support for Exchange on-premises with Microsoft Enterprise Mobility + Security feature, which as you’ll see from the blog post also requires Hybrid be in place. Once you have that figured out you’ll be able to benefit from both these features with very little additional work.

How Does HMA Work?

The video linked above goes into detail, but I’ll share some details here for anyone without the time to watch it.

Here’s a diagram that explains HMA when the identity is federated.

hma1

I think that picture is pretty clear, I spent a lot of time making it pretty clear so I don’t think I need to add much to it other than to say, if it’s not clear, you might want to try reading it again.

Why Should I Enable HMA?

Great question. There are a few good reasons, but mainly this is a security thing.

HMA should be considered ‘more secure’ than the authentication methods previously available in Exchange. That’s a nebulous statement if there ever was one (I could have said it’s more ‘Modern’ but I know you weren’t going to fall for that) but there are a few good arguments as to why that’s true.

When you enable HMA you are essentially outsourcing user authentication to your iDP, Exchange becomes the consumer of the resulting authorization tokens. You can enforce whatever authentication the iDP can do, rather than teach Exchange how to handle things like text messaged based MFA, blood analysis or retina scanning. If your iDP can do that, Exchange can consume the result. Exchange doesn’t care how you authenticated, only that you did, and came away with a token it can consume.

So it’s clearly ‘more secure’ if you choose to enforce authentication types or requirements stronger than those that come free with Exchange, but even if you stick to usernames and passwords it’s also more secure as passwords are no longer being sent from client to server once the user is authenticated (though of course that depends on whether you are using Basic, NTLM or Kerberos). It’s all token based, the tokens have specific lifetimes, and are for specific applications and endpoints.

One other interesting and important benefit to all this is that your auth flow is now exactly the same for both your cloud and on-premises users. Any MFA or Conditional Access policies you have configured are applied the same, regardless of the mailbox location. It’s simpler to stay secure.

HMA also results in an improved user experience as there will be fewer authentication prompts. Once the user logs in once to AAD they can access any app that uses AAD tokens – that’s anything in O365 and even Skype for Business on-premises configured for HMA (read more about Skype for Business’s HMA support here).

And don’t forget there’s the fact it’s more ‘Modern’. It’s newer and we put the word Modern on it. So it must be better, or at the very least, newer. Excellent, moving on.

Will It Cost Me?

Not if you just want to use free Azure ID’s or Federated identities and do MFA at your iDP. If you want to take advantage of advanced Azure features, then yes, you’ll have to pay for those. But to set this up the tenant admin needs only an Exchange and an Azure license assigned, to run the tools and enable the config.

What do I need to enable HMA?

There are some pre-requisites.

  • The following identity configurations with AAD are supported:
    1. Federated Identity with AAD with any on-premises STS supported by Office 365
    2. Password Hash Synchronization
    3. Pass-Through Authentication
  • In all cases, the entire on-premises directory must be synchronized to AAD, and all domains used for logon must be included in the sync configuration.
  • Exchange Server
    1. All servers must be Exchange 2013 (CU19+) and/or Exchange 2016 (CU8+)
    2. No Exchange 2010 in the environment
    3. MAPI over HTTP enabled. It is enabled by default (set to True) on new installs of Exchange 2013 Service Pack 1 and above.
    4. OAuth must be enabled on all Virtual Directories used by Outlook (/AutoDiscover, /EWS, /Mapi, /OAB)
  • You must use clients that support ADAL (the client-side library that allows the client to work with OAuth tokens) to use the Modern Auth enabled features. Outlook 2013 requires the EnableADAL registry key be set, Outlook 2016 has this key set by default, Outlook 2016 for Mac works as it is, support for Outlook mobile (iOS and Android) is coming.
  • Ensure AAD Connect between on-premises AD and the O365 tenant has the “Exchange hybrid deployment” setting enabled in the Optional Features settings of Azure AD Connect.
  • Ensure SSL offloading is not being used between the load balancer and Exchange servers.
  • Ensure all user networks can reach AAD efficiently.
Let’s pick a few of those apart.

    No Exchange 2010 in the environment. That’s right, if you have E2010 you can’t enable HMA. Why? Because worst case is everyone with a mailbox on E2010 will be cut off from email. You don’t want that. It’s because OAuth happens anonymously upon initial connection. We send the user to AAD to get authenticated before we know where their mailbox is – and if that mailbox is on E2010, when they return with a token we’ll refuse to proxy from E2013/16 to E2010. Game over. Please insert coins.

    So we have drawn a line here and are stating no support for E2010, and the HCW won’t let you enable OAuth if E2010 exists. Don’t try and make it work, remember that scene from Ghostbusters, the whole crossing the streams thing? It’ll be like that, but worse.

    Next, MAPI/HTTP – you need to be using MAPI/HTTP not RPC/HTTP (Outlook Anywhere). This feature only works with MAPI/HTTP, and anyway, it’s time to get off RPC/HTTP. That’s very old code and as you might know we ended support for its use in O365, so it would be good to switch. It just works.

    Then there’s the ‘everyone should be in AAD’ thing. That’s because when you enable HMA, it’s Org wide. It affects every user connecting to Exchange. So, all users trying to access Exchange from a client that support Modern Auth will be sent to AAD. If you only have some users represented in AAD, only those users will be able to auth. The rest will come find you at lunch and make your life a misery. Unless you like misery, I wouldn’t recommend that route.

    Needing clients that support Modern Auth clearly makes sense. And you need to make sure all the Exchange VDirs have OAuth enabled on them. Sounds obvious, and they are enabled by default, but some admins like to tinker… so it’s worth checking, and I’ll explain how later.

    SSL offloading works by terminating the SSL/TLS encryption on the load balancer and transmitting the request as HTTP. In the context of OAuth, using SSL offloading has implications because if the audience claim value specifies a HTTPS record, then when Exchange receives the decrypted request over HTTP, the request is considered not valid. By removing SSL offloading, Exchange will not fail the OAuth session due to a change in the audience claim value.
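As a rough illustration (deliberately simplified; real OAuth token validation checks far more than this), the audience check that SSL offloading breaks boils down to comparing the token's aud value against the scheme and host the request actually arrived on:

```python
def audience_matches(token_aud, request_scheme, request_host):
    """Simplified sketch: accept the token only if its audience equals
    the URL the resource believes the request was addressed to."""
    return token_aud.rstrip("/") == f"{request_scheme}://{request_host}"

aud = "https://mail.contoso.com/"  # the audience stamped into the token

# TLS terminates on Exchange: the request arrives as HTTPS, audiences agree.
print(audience_matches(aud, "https", "mail.contoso.com"))  # True

# SSL offloading: the load balancer forwards plain HTTP, so the
# reconstructed audience no longer matches and the token is rejected.
print(audience_matches(aud, "http", "mail.contoso.com"))   # False
```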

    Lastly, the ensuring all user networks can reach AAD comment. This change affects all connectivity from supported clients to Exchange, internal and external. When a user tries to connect to Exchange, whether that server is 10 feet away under the new guys desk or in a datacenter on the other side of the planet the HMA flow will kick in. If the user doesn’t have a valid token the traffic will include a trip to AAD. If you are one of those customers with complex networking in place, consider that.

    How do I Enable HMA?

    You’ve checked the pre-reqs, and you think you’re good to go. You can do a lot of this up front without impacting clients, I’ll point out where clients begin to see changes, so you can be prepared.

    We do recommend trying HMA in your test or lab environment if you can before doing it in production. You are changing auth, it’s something you need to be careful doing, as cutting everyone off from email is never a good thing.

    Here’s what to do. First, we have some Azure Active Directory Configuration to do.

    You need to register all the URL’s a client might use to connect to on-premises Exchange in AAD, so that AAD can issue tokens for those endpoints. This includes all internal and external namespaces, as AAD will become the default auth method for all connections, internal and external. Here’s a tip – look at the SSL certificates you have on Exchange and make sure all those names are considered for inclusion.

    Run the following cmdlets to gather the URL’s you need to add/verify are in AAD.

    Get-MapiVirtualDirectory | FL server,*url*
    Get-WebServicesVirtualDirectory | FL server,*url*
    Get-OABVirtualDirectory | FL server,*url*

    Now you need to ensure all URL’s clients may connect to are listed as https service principal names (SPN’s):

    1. Connect to your AAD tenant using these instructions.
    2. For Exchange-related URL’s, execute the following command (note the AppId ends …02):

      Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 | select -ExpandProperty ServicePrincipalNames

      The output will look similar to the following:

      [PS] C:\WINDOWS\system32> Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 | select -ExpandProperty ServicePrincipalNames
      https://autodiscover.contoso.com/
      https://mail.contoso.com/
      00000002-0000-0ff1-ce00-000000000000/*.outlook.com
      00000002-0000-0ff1-ce00-000000000000/outlook.com
      00000002-0000-0ff1-ce00-000000000000/mail.office365.com
      00000002-0000-0ff1-ce00-000000000000/outlook.office365.com
      00000002-0000-0ff1-ce00-000000000000/contoso.com
      00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.com
      00000002-0000-0ff1-ce00-000000000000/contoso.mail.onmicrosoft.com
      00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.mail.onmicrosoft.com
      00000002-0000-0ff1-ce00-000000000000/mail.contoso.com
      00000002-0000-0ff1-ce00-000000000000

    3. If you do not already have your internal and external MAPI/HTTP, EWS, OAB and AutoDiscover https records listed (i.e., https://mail.contoso.com and https://mail.corp.contoso.com), add them using the following command (replacing the fully qualified domain names with the correct namespaces and/or deleting the appropriate addition line if one of the records already exists):

      $x= Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000
      $x.ServicePrincipalnames.Add("https://mail.corp.contoso.com/")
      $x.ServicePrincipalnames.Add("https://owa.contoso.com/")
      Set-MSOLServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 -ServicePrincipalNames $x.ServicePrincipalNames

    4. Repeat step 2 and verify the records were added. We’re looking for https://namespace entries for all the URL’s, not 00000002-0000-0ff1-ce00-000000000000/namespace entries. For example:

      [PS] C:\WINDOWS\system32> Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 | select -ExpandProperty ServicePrincipalNames
      https://autodiscover.contoso.com/
      https://mail.contoso.com/
      https://mail.corp.contoso.com
      https://owa.contoso.com
      00000002-0000-0ff1-ce00-000000000000/*.outlook.com
      00000002-0000-0ff1-ce00-000000000000/outlook.com
      00000002-0000-0ff1-ce00-000000000000/mail.office365.com
      00000002-0000-0ff1-ce00-000000000000/outlook.office365.com
      00000002-0000-0ff1-ce00-000000000000/contoso.com
      00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.com
      00000002-0000-0ff1-ce00-000000000000/contoso.mail.onmicrosoft.com
      00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.mail.onmicrosoft.com
      00000002-0000-0ff1-ce00-000000000000/mail.contoso.com
      00000002-0000-0ff1-ce00-000000000000

    Then we need to validate the EvoSts authentication provider is present using the Exchange Management Shell (this is created by the Hybrid Configuration Wizard):

    Get-AuthServer | where {$_.Name -eq "EvoSts"}

    HMA2

    If it is not present, please download and execute the latest version of the Hybrid Configuration Wizard. Note that this authentication provider is not created if Exchange 2010 (this includes Edge Transport servers) is detected in the environment.

    Now let’s make sure OAuth is properly enabled in Exchange on all the right virtual directories Outlook might use.

    Run the following cmdlets (and a tip, don’t use -ADPropertiesOnly as that sometimes tells little white lies, try it and see if you don’t believe me)

    Get-MapiVirtualDirectory | FL server,*url*,*auth*
    Get-WebServicesVirtualDirectory | FL server,*url*,*oauth*
    Get-OABVirtualDirectory | FL server,*url*,*oauth*
    Get-AutoDiscoverVirtualDirectory | FL server,*oauth*

    You are looking to make sure OAuth is enabled on each of these VDirs, it will look something like this (and the key things to look at are highlighted);

    [PS] C:\Windows\system32>Get-MapiVirtualDirectory | fl server,*url*,*auth*
    Server : EX1
    InternalUrl : https://mail.contoso.com/mapi
    ExternalUrl : https://mail.contoso.com/mapi
    IISAuthenticationMethods : {Ntlm, OAuth, Negotiate}
    InternalAuthenticationMethods : {Ntlm, OAuth, Negotiate}
    ExternalAuthenticationMethods : {Ntlm, OAuth, Negotiate}

    [PS] C:\Windows\system32> Get-WebServicesVirtualDirectory | fl server,*url*,*auth*
    Server : EX1
    InternalNLBBypassUrl :
    InternalUrl : https://mail.contoso.com/EWS/Exchange.asmx
    ExternalUrl : https://mail.contoso.com/EWS/Exchange.asmx
    CertificateAuthentication :
    InternalAuthenticationMethods : {Ntlm, WindowsIntegrated, WSSecurity, OAuth}
    ExternalAuthenticationMethods : {Ntlm, WindowsIntegrated, WSSecurity, OAuth}
    LiveIdNegotiateAuthentication :
    WSSecurityAuthentication : True
    LiveIdBasicAuthentication : False
    BasicAuthentication : False
    DigestAuthentication : False
    WindowsAuthentication : True
    OAuthAuthentication : True
    AdfsAuthentication : False

    [PS] C:\Windows\system32> Get-OabVirtualDirectory | fl server,*url*,*auth*
    Server : EX1
    InternalUrl : https://mail.contoso.com/OAB
    ExternalUrl : https://mail.contoso.com/OAB
    BasicAuthentication : False
    WindowsAuthentication : True
    OAuthAuthentication : True
    InternalAuthenticationMethods : {WindowsIntegrated, OAuth}
    ExternalAuthenticationMethods : {WindowsIntegrated, OAuth}

    [PS] C:\Windows\system32>Get-AutodiscoverVirtualDirectory | fl server,*auth*
    Server : EX1
    InternalAuthenticationMethods : {Basic, Ntlm, WindowsIntegrated, WSSecurity, OAuth}
    ExternalAuthenticationMethods : {Basic, Ntlm, WindowsIntegrated, WSSecurity, OAuth}
    LiveIdNegotiateAuthentication : False
    WSSecurityAuthentication : True
    LiveIdBasicAuthentication : False
    BasicAuthentication : True
    DigestAuthentication : False
    WindowsAuthentication : True
    OAuthAuthentication : True
    AdfsAuthentication : False

    Once you have checked these over, you might need to add OAuth here and there. It’s important to make sure all the servers are consistent, there’s really nothing harder to troubleshoot than when one server out of ten is wrong…

    (Top Nerd Note: do you know why we didn’t include *url* in the Get-AutodiscoverVirtualDirectory cmdlet? Answers in the comments section if you do. There are no prizes to be won!)

    If you need to add an auth method, here’s a tip. For all except /Mapi, just set the -OAuthAuthentication property to $True. Done.

    But for /Mapi you need to add it explicitly, and not using some fancy @Add PowerShell thing you learned in some online course or from that smart guy in the office who tells everyone he doesn’t use ECP as it’s for kids and dogs. Because I’ve learned too that sometimes that doesn’t work the way it should.

    If you needed to add OAuth to all the Mapi VDirs in the org, do it like this:

    Get-MapiVirtualDirectory | Set-MapiVirtualDirectory -IISAuthenticationMethods Ntlm, OAuth, Negotiate

    Up to this point no clients should have been impacted (unless you messed up the VDir auth; and if you did, you should only have been adding OAuth, not taking other methods away… you know that now, don’t you). Next we start to impact clients, so this is the bit you want to do out of normal business hours. For career reasons.

    So, make sure you validate the following:

    1. Make sure you have completed the steps above in the Azure AD Configuration section. All the SPNs you need should be in there.
    2. Make sure OAuth is enabled on all virtual directories used by Outlook.
    3. Make sure your clients are up to date and HMA capable by validating you have the minimal version as defined in our supportability requirements.
    4. Make sure you have communicated what you are doing.
    5. Set the EvoSts authentication provider as the default provider (this step affects Outlook 2016 for Mac and native EAS clients that support OAuth right away):

      Set-AuthServer EvoSTS -IsDefaultAuthorizationEndpoint $true

    6. Enable the OAuth client feature for Windows Outlook:

      Set-OrganizationConfig -OAuth2ClientProfileEnabled $True

    That’s it. All the prep you did means it comes down to two cmdlets. Wield the power wisely.

    How do I Know I’m Using HMA?

    After HMA is enabled, the next time a client needs to authenticate it will use the new auth flow. Just turning on HMA may not immediately trigger a re-auth for any client.

    To test that HMA is working after you have enabled it, restart Outlook. The client should switch to use the Modern Auth flow.

    You should see an ADAL-generated auth dialog from Office 365. Once you enter the username you might be redirected to your on-premises IdP, like ADFS (and you might not see anything at all if integrated auth is configured), or you might need to enter a password. You might have to do MFA; it depends on how much you’ve already set up in AAD.

    Once you get connected (and I hope you do), check Outlook’s Connection Status dialog (Ctrl+Right-click the Outlook tray icon); you will see the word Bearer in the Authn column, which is the sign that it’s using HMA.

    hma3

    Well done you. Check everyone else is ok before heading home though, eh?

    Something Went Wrong. How do I Troubleshoot HMA?

    Ah, you’re reading this section. It’s panic time, right? I was thinking of not publishing this section until next year, just for giggles. Mine, not yours. But I didn’t. Here’s what to think about if stuff isn’t working like I said it would.

    Firstly, make sure you did ALL the steps above, not some, not just the ones you understood. We’ve all seen it, 10 steps to make something work, and someone picks the steps they do like it’s a buffet.

    If you’re sure you’ve done them all, let’s troubleshoot this together.

    If you need to simply turn this back off, just run the last two cmdlets again, setting the values to $False this time. You might need to run IISReset on Exchange more than once, as we cache settings all over the place for performance reasons, but those two cmdlets will put you back to where you were if all hope is lost (hopefully you still have a chance to capture a trace as detailed in a moment before you do this, as it will help identify what went wrong).

    If you aren’t reverting the settings just yet, you clearly want to troubleshoot this a bit.

    First thing is – is the client seeing any kind of pop up warning dialog? Are they seeing any certificate errors? Trust or name mismatches, that sort of thing? Anything like that will stop this flow in its tracks. The clients don’t need anything more than trusting the endpoints they need to talk to – Exchange, AAD (login.windows.net and login.microsoftonline.com) and ADFS or your iDP of choice if in use. If they trust the issuer of the certs securing those sites, great. If you have some kind of name translation thing going on somewhere, that might cause a warning, or worse, a silent failure.

    Here’s an example of this I saw recently. Exchange was published using Web Application Proxy (WAP). You can do that, but only in pass-through mode. The publishing rule for AutoDiscover in this case was using autodiscover.contoso.com to the outside world, but the WAP publishing rule was set up to forward that traffic to mail.contoso.com on the inside. That causes this to fail, as Outlook heads to AAD to get a token for the resource called https://autodiscover.contoso.com and it does. Then it hands that to WAP, who then forwards to Exchange using the https://mail.contoso.com target URI – the uri used in the token isn’t equal to the uri used by WAP… kaboom. So, don’t do that. But I’ll show you later how an error like that shows up and can be discovered.

    Assuming certificates are good, we need to get deeper. We need to trace the traffic. The tool I prefer to use for this is Fiddler, but there are others out there that can be used.

    Now, Fiddler or the like can capture everything that happens between client and server – and I mean everything. If you are doing Basic auth, Fiddler will capture those creds. So, don’t run a Fiddler trace capturing everything going on and share it with your buddies or Microsoft. We don’t want your password. Use a test account or learn enough about Fiddler to delete the passwords.

    I’ll leave it to the Telerik people who create Fiddler to tell you how to install and really use their tool, but I’ll share these few snippets I’ve learned, and how I use it to debug HMA.

    Once installed and with the Fiddler root certs in the trusted root store (Fiddler acts as a man-in-the-middle proxy) it will capture traffic from whatever clients you choose. You need to enable HTTPS decryption (Tools, Options, HTTPS), as all our traffic is encased in TLS.

    If you have ADFS, you can configure Fiddler to Skip Decryption for the ADFS URL if you don’t want to see what happens at ADFS. If you do want to see it, you will have to relax the security stance of ADFS a bit to allow the traffic to be properly captured. Only do this while capturing the traffic for debug purposes, then set it back. Start by bypassing decryption for the IdP first; come back to this if you suspect that is the issue.

    To set the level of extended protection for authentication supported by the federation server to none (off):

    Set-AdfsProperties -extendedprotectiontokencheck none

    Then to set it back to the default once you have the capture:

    Set-AdfsProperties -extendedprotectiontokencheck Allow

    Read more about all that clever ADFS stuff here.

    Now you run the capture. Start Fiddler first, then start Outlook. I suggest closing all other apps and browsers, so as not to muddy the Fiddling waters. Keep an eye on Fiddler and Outlook, try and log in using Outlook, or repro the issue, then stop tracing (F12).

    Now we shall try to figure out what’s going on. I prefer the view where the traffic is listed in the left-hand pane; on the right, the upper section shows the request and the lower section the response. But you do whatever works for you. Either way, Fiddler shows each frame split into the Request and the Response. That’s how you need to orient yourself.

    So the flow you’ll see will be something like this:

    Client connects to Exchange, sending an empty ‘Bearer’ header. This is the hint to tell Exchange it can do OAuth but does not yet have a token. If it sends Bearer plus a string of gobbledygook, that’s your token.

    Here are two examples of this. The header section to look at is Security. This is using Fiddler’s Header view. Do you see how the Security header says just Bearer on the left, but shows Bearer + token on the right?

    hma4   hma5

    Exchange responds (lower pane of the same frame in Fiddler, Raw view) with, in effect, “here’s where you can get a token” (a link to AAD).

    hma6

    If you scroll all the way to the right you’ll see the authorization_uri (AAD).

    hma7

    Normally, Outlook goes to that location, does Auth, gets a token, comes back to Exchange, and then tries to connect using Bearer + Token as above. If it’s accepted, it’s 200’s and beers all round and we’re done.
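    If you want to poke at that challenge yourself, here’s a tiny illustrative Python snippet (not anything Outlook actually runs, obviously) that pulls the authorization_uri and client_id out of a Bearer challenge header. The sample header value is trimmed from the kind of response shown above:

```python
import re

def parse_bearer_challenge(www_authenticate: str) -> dict:
    """Pull the key="value" pairs out of a WWW-Authenticate: Bearer challenge."""
    if not www_authenticate.startswith("Bearer"):
        return {}
    return dict(re.findall(r'(\w+)="([^"]*)"', www_authenticate))

# Sample challenge, trimmed from the kind of response shown above:
challenge = ('Bearer client_id="00000002-0000-0ff1-ce00-000000000000", '
             'authorization_uri="https://login.windows.net/common/oauth2/authorize"')

params = parse_bearer_challenge(challenge)
print(params["authorization_uri"])  # where the client should go to get a token
```

Useful when you are staring at a wall of Fiddler output and just want to confirm which endpoint Exchange is actually pointing clients at.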

    Where could it go wrong?

    Client Failure

    Firstly, the client doesn’t send the empty Bearer header. That means it isn’t even trying to do Bearer. This could be a few things.

    It could be that you are testing with Outlook 2010 which doesn’t support Bearer (so stop trying and upgrade).

    Maybe you are using Outlook 2013 but forgot to set the EnableADAL registry keys? See the link below for those.

    But what if this is Outlook 2016, which has EnableADAL set by default, and it is still not sending the header…. Huh?

    Most likely cause, someone has been tinkering around in the registry or with GPO’s to set registry keys. I knew a guy who edited the registry once and three days later crashed his car. So, do not tell me you were not warned.

    You need to make sure keys are set as per https://support.office.com/en-us/article/Enable-Modern-Authentication-for-Office-2013-on-Windows-devices-7dc1c01a-090f-4971-9677-f1b192d6c910

    Outlook 2016 for Mac can also have MA disabled (though it’s enabled by default). You can set it back to the default by running this from Terminal:

    defaults write com.microsoft.Outlook DisableModernAuth -bool NO

    That’s how we deal with the client not sending the Header. Check again and see the Header in all its Header glory.

    Auth_URI Failures

    The next thing that might happen is that the server doesn’t respond with the authorization_uri, or it’s the wrong one.

    If there’s no authorization_uri at all, then the EvoSts AuthServer does not have IsDefaultAuthorizationEndpoint set to $true. Recheck that you ran:

    Set-AuthServer EvoSts -IsDefaultAuthorizationEndpoint $true

    If it comes back with some other value than expected, make sure the right AuthServer is set as the default; we only support using AAD for this flow. If you think setting this to your on-premises ADFS endpoint will make this work without AAD… you’re wrong, as you discovered when you tried. If you are thinking of trying it, don’t bother. That’s an Exchange 2019 thing. Oh, did I just let that out of the bag?

    If HMA is enabled at the org level but connections still don’t elicit the authorization_uri you expect, it’s likely OAuth isn’t enabled on the virtual directory Outlook is trying to connect to. Make sure you have OAuth enabled on all VDirs, on all servers. Go back to the How Do I Enable section and check those VDirs again.

    Now, sometimes that all comes back OK but the client still doesn’t take the bait. If so, check for the following in the response:

    HTTP/1.1 401 Unauthorized
    Content-Length: 0
    Server: Microsoft-IIS/8.5 Microsoft-HTTPAPI/2.0
    request-id: a8e9dfb4-cb06-4b18-80a0-b110220177e1
    Www-Authenticate: Negotiate
    Www-Authenticate: NTLM
    Www-Authenticate: Basic realm="autodiscover.contoso.com"
    X-FEServer: CONTOSOEX16
    x-ms-diagnostics: 4000000;reason="Flighting is not enabled for domain 'gregt@contoso.com'.";error_category="oauth_not_available"
    X-Powered-By: ASP.NET
    WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@f31f3647-5d87-4b69-a0b6-73f62aeab14c", token_types="app_asserted_user_v1 service_asserted_app_v1", authorization_uri="https://login.windows.net/common/oauth2/authorize"
    Date: Thu, 13 Jul 2017 18:22:13 GMT
    Proxy-Support: Session-Based-Authentication

    Now this response is interesting, because in WWW-Authenticate it says go get a token, but in x-ms-diagnostics it says no, don’t. Is Exchange unsure?

    This means OAuth is enabled, but not for Outlook for Windows. So, you ran only one of the two commands above (or you ran them both but not enough time has passed for them to kick in).

    Verify that the OAuth2ClientProfileEnabled property is set to $true by checking:

    (Get-OrganizationConfig).OAuth2ClientProfileEnabled

    Other Failures

    We have a token, we know OAuth is enabled at the Org level in Exchange, we know all the Vdirs are good. But it still won’t connect. Dang, what now?

    Now you’ll have to start to dig into server responses more closely, and start looking for things that look like errors. The errors you’ll see are usually in plain English, though of course that doesn’t mean they make sense. But here are some examples.

    Missing SPNs

    Client goes to AAD to get a token and gets this:

    Location: urn:ietf:wg:oauth:2.0:oob?error=invalid_resource&error_description=AADSTS50001%3a+The+application+named+https%3a%2f%2fmail.contoso.com%2f+was+not+found+in+the+tenant+named+contoso.com.++This+can+happen+if+the+application+has+not+been+installed+by+the+administrator+of+the+tenant+or+consented+to+by+any+user+in+the+tenant.++You+might+have+sent+your+authentication+request+to+the+wrong+tenant.%0d%0aTrace+ID%3a+cf03a6bd-610b-47d5-bf0b-90e59d0e0100%0d%0aCorrelation+ID%3a+87a777b4-fb7b-4d22-a82b-b97fcc2c67d4%0d%0aTimestamp%3a+2017-11-17+23%3a31%3a02Z
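    If squinting at URL-encoded strings isn’t your thing, a few lines of illustrative Python will decode that Location value for you (the sample value below is abbreviated from the one above):

```python
from urllib.parse import parse_qs

# Abbreviated version of the Location value shown above:
location = ("urn:ietf:wg:oauth:2.0:oob?error=invalid_resource"
            "&error_description=AADSTS50001%3a+The+application+named+"
            "https%3a%2f%2fmail.contoso.com%2f+was+not+found+in+the+tenant+named+contoso.com.")

# parse_qs handles both the %XX escapes and the '+' as space encoding
params = parse_qs(location.split("?", 1)[1])
print(params["error"][0])
print(params["error_description"][0])
```

That decodes to a perfectly readable AADSTS50001 message telling you the SPN (here https://mail.contoso.com/) isn’t registered in the tenant.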

    Name Mismatches

    Here’s one I mentioned earlier. There’s some device between client and server changing the names being used. Tokens are issued for specific URIs, so when you change the names…

    HTTP/1.1 401 Unauthorized
    Content-Length: 0
    WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@8da56bec-0d27-4cac-ab06-52ee2c40ea22,00000004-0000-0ff1-ce00-000000000000@contoso.com,00000003-0000-0ff1-ce00-000000000000@8da56bec-0d27-4cac-ab06-52ee2c40ea22", token_types="app_asserted_user_v1 service_asserted_app_v1", authorization_uri="https://login.windows.net/common/oauth2/authorize", error="invalid_token"
    Server: Microsoft-IIS/8.5 Microsoft-HTTPAPI/2.0
    request-id: 5fdfec03-2389-42b9-bab9-c787a49d09ca
    Www-Authenticate: Negotiate
    Www-Authenticate: NTLM
    Www-Authenticate: Basic realm="mail.contoso.com"
    X-FEServer: RGBMSX02
    x-ms-diagnostics: 2000003;reason="The hostname component of the audience claim value 'https://autodiscover.contoso.com' is invalid";error_category="invalid_resource"
    X-Powered-By: ASP.NET
    Date: Thu, 16 Nov 2017 20:37:48 GMT

    SSL Offloading

    As mentioned in the previous section, tokens are issued for a specific URI, and that value includes the protocol ("https://"). When the load balancer offloads SSL, the request Exchange receives comes in via HTTP, resulting in a claim mismatch because the protocol value is "http://":

    Content-Length: 0
    Date: Thu, 30 Nov 2017 07:52:52 GMT
    Server: Microsoft-IIS/8.5
    WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@00c118a9-2de9-41d3-b39a-81648a7a5e4d", authorization_uri="https://login.windows.net/common/oauth2/authorize", error="invalid_token"
    WWW-Authenticate: Basic realm="mail.contoso.com"
    X-FEServer: CTSINPUNDEVMB02
    X-Powered-By: ASP.NET
    request-id: 2323088f-8838-4f97-a88d-559bfcf92866
    x-ms-diagnostics: 2000003;reason="The hostname component of the audience claim value is invalid. Expected 'https://mail.contoso.com'. Actual 'http://mail.contoso.com'.";error_category="invalid_resource"
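    To make the mismatch concrete, here’s an illustrative Python sketch (this is not Exchange’s actual validation code, just the idea) comparing the token’s audience claim with the scheme and host the request actually arrived on:

```python
from urllib.parse import urlsplit

def audience_matches(token_audience: str, request_scheme: str, request_host: str) -> bool:
    """Compare the token's audience claim against the URL the request actually arrived on."""
    aud = urlsplit(token_audience)
    return (aud.scheme, aud.hostname) == (request_scheme, request_host)

# Token was issued for the HTTPS name, but after SSL offloading Exchange sees HTTP:
print(audience_matches("https://mail.contoso.com/", "http", "mail.contoso.com"))   # False - kaboom
print(audience_matches("https://mail.contoso.com/", "https", "mail.contoso.com"))  # True
```

Same idea applies to the WAP name-translation example earlier: change the scheme or the host between client and server and the audience check fails.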

    Who’s This?

    Perhaps you ignored my advice about syncing all your users to AAD?

    HTTP/1.1 401 Unauthorized
    Cache-Control: private
    Server: Microsoft-IIS/7.5
    request-id: 63b3e26c-e7fe-4c4e-a0fb-26feddcb1a33
    Set-Cookie: ClientId=E9459F787DAA4FA880A70B0941F02AC3; expires=Wed, 25-Oct-2017 11:59:16 GMT; path=/; HttpOnly
    X-CalculatedBETarget: ex1.contoso.com
    WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@cc2e9d54-565d-4b36-b7f0-9866c19f9b17"
    x-ms-diagnostics: 2000005;reason="The user specified by the user-context in the token does not exist.";error_category="invalid_user"
    X-AspNet-Version: 4.0.30319
    WWW-Authenticate: Basic realm="mail.contoso.com"
    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM
    X-Powered-By: ASP.NET
    X-FEServer: E15
    Date: Tue, 25 Oct 2016 11:59:16 GMT
    Content-Length: 0

    Password Changed?

    When the user changes their password they must re-authenticate to get a new Refresh/Access token pair.

    HTTP/1.1 400 Bad Request
    Cache-Control: no-cache, no-store
    Pragma: no-cache
    Content-Type: application/json; charset=utf-8
    Expires: -1
    Server: Microsoft-IIS/8.5
    Strict-Transport-Security: max-age=31536000; includeSubDomains
    X-Content-Type-Options: nosniff
    x-ms-request-id: f840b3e7-8740-4698-b252-d759825e0300
    P3P: CP="DSP CUR OTPi IND OTRi ONL FIN"
    Set-Cookie: esctx=AQABAAAAAABHh4kmS_aKT5XrjzxRAtHz3lyJfwgypqTMzLvXD-deUmtaub0aqU_17uPZe3xCZbgKz8Ws99KNxVJSM0AglTVLUEtzTz8y8wTTavHlEG6on2cOjXqRtbgr2DLezsw_OZ7JP4M42qZfMd1mR0BlTLWI3dSllBFpS9Epvh5Yi0Of5eQkOHL7x97IDk_o1EWB7lEgAA; domain=.login.windows.net; path=/; secure; HttpOnly
    Set-Cookie: x-ms-gateway-slice=008; path=/; secure; HttpOnly
    Set-Cookie: stsservicecookie=ests; path=/; secure; HttpOnly
    X-Powered-By: ASP.NET
    Date: Thu, 16 Nov 2017 20:36:16 GMT
    Content-Length: 605
    {"error":"invalid_grant","error_description":"AADSTS50173: The provided grant has expired due to it being revoked. The user might have changed or reset their password. The grant was issued on '2017-10-28T17:20:13.2960000Z' and the TokensValidFrom date for this user is '2017-11-16T20:27:45.0000000Z'\r\nTrace ID: f840b3e7-8740-4698-b252-d759825e0300\r\nCorrelation ID: f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02\r\nTimestamp: 2017-11-16 20:36:16Z","error_codes":[50173],"timestamp":"2017-11-16 20:36:16Z","trace_id":"f840b3e7-8740-4698-b252-d759825e0300","correlation_id":"f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02"}

    Unicorn Rampage?

    When a Unicorn Rampage has taken place and all tokens are invalidated you’ll see this.

    HTTP/1.1 400 Bad Unicorn
    Cache-Control: no-cache, no-store, not-bloody-safe
    Pragma: no-cache
    Content-Type: application/json; charset=utf-8
    Expires: -1
    Server: Microsoft-IIS/8.5
    Strict-Transport-Security: max-age=31536000; includeSubDomains
    X-Content-Type-Options: nosniff
    x-ms-request-id: f840b3e7-8740-4698-b252-d759825e0300
    P3P: CP="DSP CUR OTPi IND OTRi ONL FIN"
    Set-Cookie: esctx=AQABAAAAAABHh4kmS_aKT5XrjzxRAtHz3lyJfwgypqTMzLvXD-deUmtaub0aqU_17uPZe3xCZbgKz8Ws99KNxVJSM0AglTVLUEtzTz8y8wTTavHlEG6on2cOjXqRtbgr2DLezsw_OZ7JP4M42qZfMd1mR0BlTLWI3dSllBFpS9Epvh5Yi0Of5eQkOHL7x97IDk_o1EWB7lEgAA; domain=.login.windows.net; path=/; secure; HttpOnly
    Set-Cookie: x-ms-gateway-slice=008; path=/; secure; HttpOnly
    Set-Cookie: stsservicecookie=ests; path=/; secure; HttpOnly
    X-Powered-By: ASP.NET
    Date: Thu, 16 Nov 2017 20:36:16 GMT
    Content-Length: 605
    {"error":"unicorn_rampage","error_description":"The Unicorns are on a rampage. It’s time to go home '2017-11-16T20:27:45.0000000Z'\r\nTrace ID: f840b3e7-8740-4698-b252-d759825e0300\r\nCorrelation ID: f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02\r\nTimestamp: 2017-11-16 20:36:16Z","error_codes":[50173],"timestamp":"2017-11-16 20:36:16Z","trace_id":"f840b3e7-8740-4698-b252-d759825e0300","correlation_id":"f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02"}

    And so on. You can see there are a few things that can go wrong, but Fiddler is your friend; use it to debug, look closely, and often the answer is staring you right in the face.

    Viewing Tokens

    Lastly, and just for fun, if you want to see what an actual, real life Access token looks like, I’ll show you how… calm down, it’s not that exciting.

    In Fiddler, in the Request (upper pane), where you see Header + Value (begins ey…), you can right click the value and choose Send to Text Wizard, and set Transform to ‘From Base64’. Or you can copy the entire value and use a web site such as https://jwt.io to transform them into a readable format like this.

    {
      "aud": "https://autodiscover.contoso.com/",
      "iss": "https://sts.windows.net/f31f3647-5d87-4b69-a0b6-73f62aeab14c/",
      "acr": "1",
      "aio": "ASQA2/8DAAAAn27t2aiyI+heHYucfj0pMmQhcEEYkgRP6+2ox9akUsM=",
      "amr": [
        "pwd"
      ],
      "appid": "d3590ed6-52b3-4102-aeff-aad2292ab01c",
      "appidacr": "0",
      "e_exp": 262800,
      "enfpolids": [],
      "family_name": "Taylor",
      "given_name": "Greg",
      "ipaddr": "100.100.100.100",
      "name": "Greg Taylor (sounds like a cool guy)",
      "oid": "7f199a96-50b1-4675-9db0-57b362c5d564",
      "onprem_sid": "S-1-5-21-2366433183-230171048-1893555995-1654",
      "platf": "3",
      "puid": "1003BFFD9ACA40EE",
      "scp": "Calendars.ReadWrite Contacts.ReadWrite Files.ReadWrite.All Group.ReadWrite.All Mail.ReadWrite Mail.Send Privilege.ELT Signals-Internal.Read Signals-Internal.ReadWrite Tags.ReadWrite user_impersonation",
      "sub": "32Q7MW8A7kNX5dPed4_XkHP4YwuC6rA8yBwnoROnSlU",
      "tid": "f31f3647-5d87-4b69-a0b6-73f62aeab14c",
      "unique_name": "GregT@contoso.com",
      "upn": "GregT@contoso.com",
      "ver": "1.0"
    }
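    If you’d rather script the decode than paste tokens into a website, here’s a minimal Python sketch. Note it only inspects the payload; it does not validate the signature, and the sample token below is a made-up one just to demonstrate the decode:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT for inspection. No signature validation!"""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a made-up token just to demonstrate the decode:
claims = {"aud": "https://autodiscover.contoso.com/", "upn": "GregT@contoso.com"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "eyJhbGciOiJub25lIn0." + body + ".fakesignature"

print(jwt_payload(token)["aud"])
```

Handy for quickly checking the aud claim matches the URL the client is actually connecting to, which is exactly the kind of mismatch the earlier sections described.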

    Fun times, eh? I was just relieved to see my enfpolids claim was empty when I saw that line, that sounds quite worrying and something I was going to ask my doctor about.

    Summary

    We’ve covered why HMA is great, why it’s more secure, how to get ready for it and how to enable it. And even how to troubleshoot it.

    Like all changes it requires careful planning and execution, and particularly when messing with auth, be super careful, please. If people can’t connect, that’s bad.

    We’ve been running like this for months inside Microsoft, and we too missed an SPN when we first did it, so it can happen. But if you take your time and do it right, stronger, better and heck, a more Modern auth can be yours.

    Good luck

    Greg Taylor
    Principal PM Manager
    Office 365 Customer Experience

    Playing with Cortana on Infinity


    Hello All,

    I was very surprised and excited to discover that Microsoft has teamed up with Harman Kardon and produced a smart speaker with Cortana; the device has been named Infinity.  I personally have been waiting for a device like this, so of course I had to buy it 🙂

    I got it on Sunday and have started to play with it, using the ability to add items to lists and playing music from the online service iHeartRadio, which was pretty simple. I have to admit I am considering the paid version of Spotify for an improved music selection (however, it was awesome watching my oldest yell at Cortana to play Fetty Wap :))

    First impressions: the sound is amazing and it is elegant (I have decided to leave it on the mantel in the TV room). I was surprised by the size, as you can see in the image below.

    Next I will be setting up the house automation portion so that Cortana can control the temperature and several lights in my house.

    I'll let you know how things go as I continue to play with Cortana on Infinity.

    Pax

    Wednesday event during BETT week at Microsoft UK


    Are you going to BETT? Will you already be there on the morning of Wednesday, January 24th? Would you like to meet Mike Tholfsen, worldwide program manager for OneNote? Hear more about our plans for Office 365? See our UK office at 2 Kingdom Street, Paddington, London W2 6BD? Then come by between 08:30 and 11:30 for coffee and pastries and learn more. Afterwards, you can go directly from the office to the ExCeL Arena for the BETT show. The address is Microsoft, 2 Kingdom Street, Paddington, London W2 6BD.

    The agenda and registration site are coming soon; keep the morning reserved. We look forward to seeing you 🙂


    My system is having high CPU usage without any explanation and Task Manager does not add up. What can I do?


    Hello everyone,

    This is Alex, and today I would like to discuss what should be collected for a high CPU issue where the reason cannot be identified using Task Manager and/or Process Explorer.

    We see a lot of these issues coming in, and I want to provide some useful guidelines for systems that are randomly experiencing high CPU usage.

    If you are thinking about opening a case with Microsoft, here is some initial data that you can gather before the case is opened.

    So, what can I do to find out what is happening?

    In a high CPU scenario, most Microsoft engineers will ask you to create a trace, while the issue is present, with a tool called Xperf.

    What is Xperf?

    Xperf is a performance tracing tool based on Event Tracing for Windows (ETW). It has low overhead, so it will have minimal impact on system performance.

    More details can be found also at: https://blogs.technet.microsoft.com/askperf/2008/06/27/two-minute-drill-introduction-to-xperf/

    How to get Xperf:

    Xperf is part of the Windows Assessment and Deployment Kit (Windows ADK). We will need to install the ADK on a system, preferably a test system, and retrieve some files.

    Once the ADK is installed, navigate to the installation location (default "C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit") and retrieve the "xperf.exe" and "perfctrl.dll" files. Now we can transfer these files to any system that needs to be investigated for a high CPU issue.

    Now that I have XPerf files, what’s next?

    Wait for the high CPU issue to reproduce and do the following:

    • Open an elevated command prompt and navigate to the XPerf location
    • Execute: Xperf -on latency -stackwalk profile
    • Leave the trace running for three to five minutes (don’t close the command prompt)
    • Stop the trace by executing: Xperf -d c:\temp\highcpu.etl (the location can be changed to another folder, but the folder needs to exist prior to execution of the command)

    If you would like to analyze the trace, simply move it to the system where you have installed the ADK tool and open the trace with Windows Performance Analyzer.

    ***

    Please Note:

    Xperf is not the only tool that can be used in this type of scenario.

    Performance Monitor traces can also be set up to capture data while CPU utilization is high.

    Some Microsoft engineers may ask you to create different traces, depending on the issue description and their preference.

     

    -- Alex

    Call-to-arms to all designers!!!


    We are looking for new Banners and logo to the TechNet WIKI for 2018!

    We are in the last month of 2017, and this is not just the end of another year but the beginning of 2018... A new year is all about NEW: new beginnings, new desires, new thoughts, and NEW DESIGNS!

    All you have to do is create new image(s) for our TechNet Wiki Group (banners and/or Logo), publish your files in the TechNet Wiki Group on Facebook, get feedback from the community, and once it is ready register your images to the contest, and upload the image to OneDrive. For more information keep reading...

    The last date for sharing banners and logos has not yet been decided, but please make sure your entries are ready by December 31st, 2017. We kindly request all TechNet Wiki members to share their banner/logo in the Facebook group as soon as possible. Keep reading for how to post your work.

    It's time to test your design skills!


    The winning banners for 2017

    Background

    During online and offline activities of the TechNet community, we use images to promote the TechNet Wiki in general and to improve the visual effect. For example, images can be used to improve the look and feel of announcements, blogs, and articles, and even for offline use such as presentations. Banners are usually presented at the top of posts, and the logo is used alongside content.

    The first official "Microsoft TechNet Wiki group on Facebook" was created on July 16, 2014. At the end of 2015 we announced our first "Call-to-arms to all designers" in order to get a new banner for 2016, and we have continued the tradition since. During the last year we used all the images sent by the community to promote the TechNet Wiki and our work, not just the image that was selected as our new banner. The winners of the best-images contest for 2017 can be seen in the contest winners announcement blog here.

    How to Post your work?

    Step 1: Create image according to the Banner Guidelines, or Logo Guidelines. You can create as many images as you want and we encourage you to do so. Moreover, you can upload and register several versions of the image as well.

    • The name of the image is used as the image ID. Therefore it must be unique and follow this format:
      <your name>_<unique number>.jpg
      For example: RonenAriely_01.jpg
    • Avoid copyright infringement! You must avoid using images that are not yours when you create a Facebook banner, unless you have the proper licence. There are a lot of free images published online under various "Creative Commons" licences; you can use advanced search to find images that come with a licence that fits your use, and always confirm the licence.
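    If you want to sanity-check your file names before posting, here’s a tiny illustrative Python check (the exact pattern is our assumption based on the example above):

```python
import re

# <your name>_<unique number>.jpg, e.g. RonenAriely_01.jpg
# (this pattern is an assumption based on the example in the guidelines)
ENTRY_NAME = re.compile(r'^[A-Za-z]+_\d+\.jpg$')

def valid_entry_name(filename: str) -> bool:
    """Check an image file name against the contest naming convention."""
    return bool(ENTRY_NAME.match(filename))

print(valid_entry_name("RonenAriely_01.jpg"))  # True
print(valid_entry_name("banner final.jpg"))    # False
```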

    Step 2: Upload the image(s) to the Facebook group for feedback.

    • An image that is not presented on Facebook will not get into the contest, even if it is registered here!
    • Make sure that one of the team sees your message and provides you with a private OneDrive link. Each participant will get a link to upload and manage their images on OneDrive.
    • Do not share your link! It is a personal link, and anything done with it is under your name.

    Step 3: Once the image is ready, register the image on the Wiki registration page.

    At the same time, upload the original image to our OneDrive using the link provided to you in step 2 (if you prefer, you can send your files directly to Ronen Ariely, Syed Shanu, or Gaurav Aroraa in private).

    Step 4: New! We will give the community a chance to vote for the images. The community votes will be taken into consideration and will guide us to the top images.

    Step 5: Finally, one logo will be selected from the top images by our TechNet Wiki Council members.

    Place your logo today and mark your history with our TechNet Wiki.


    Banner Guidelines

    The banners must fit the TechNet Wiki group on Facebook, so you must follow Facebook's guidelines for group banners. Unfortunately, Facebook changes its format from time to time, including the size of the cover photo used by groups. The images created last year do not fit the current format. Moreover, the standard cover photo size (for a personal page) is not the same as the cover photo on a group. The dimensions of a group cover photo are 820 x 428.

    It is highly recommended to test your work in a Facebook group to see how it fits into the overall design of the page. For this purpose, you can open a new Facebook group for testing.

    Logo Guidelines

    The logo is a small, square icon version that should work as a logo for any purpose, from stickers to T-shirts and online posts.

    • The logo image should be about 56x56 pixels.
    • The background must be a single colour, so we can make it transparent if needed (you can also post a version that has a transparent background).
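    Both logo rules can be pre-checked with a few lines of Python before posting. This is only a sketch: it treats the border pixels as a stand-in for "the background", which is our simplification, not part of the guidelines.

```python
def check_logo(pixels):
    """Check a logo given as a 2-D list of RGB tuples.

    Returns (right_size, one_colour): whether the image is 56x56, and
    whether its border (used here as a proxy for the background) is a
    single colour.
    """
    if not pixels:
        return (False, False)
    height = len(pixels)
    width = len(pixels[0])
    right_size = (width, height) == (56, 56)
    border = (
        pixels[0] + pixels[-1]              # top and bottom rows
        + [row[0] for row in pixels]        # left column
        + [row[-1] for row in pixels]       # right column
    )
    one_colour = len(set(border)) == 1
    return (right_size, one_colour)

# A solid white 56x56 "logo" passes both checks.
white = [[(255, 255, 255)] * 56 for _ in range(56)]
print(check_logo(white))  # (True, True)
```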

    TNWiki, the final frontier
    – Ronen Ariely,
    [Personal Site] [Blog] [Facebook] [Linkedin]
    THOR - it begins...
    – Gaurav Aroraa,
    [Personal Site] [Blog] [Facebook] [Linkedin]
    Share with Care, Yours Wiki Ninja
    – Syed Shanu,
    [MSDN Profile] [MVP Profile] [Facebook] [Twitter]

    [GDPRDemopalooza] Compliance Manager


    Based on the GDPR / DSGVO Demopalooza, here is the demo for the Compliance Manager.

    As always, it is split into two parts: the "Why" and the "How".

    Why

    Compliance requirements are usually complex, challenging to monitor, and time-consuming to implement. With new regulations and requirements constantly landing on the responsible teams, it is more than challenging (in terms of both time and money) to keep up with the changes. This is of course also, or especially, true for GDPR / DSGVO.

    The Compliance Manager helps you keep and manage your compliance overview in one place. Alongside real-time risk assessments across the Microsoft cloud services, it also presents meaningful insights and streamlined compliance processes.

    @Interested customers: we are happy to help you find partners who can support this topic end to end. Please contact your dedicated Microsoft representative.

    @Interested partners: we are happy to help you build the necessary readiness so that you can implement this topic with and for customers. Please contact your PDM/PTS, or Andreas or me directly.

    How

    1. Open the Compliance Manager (CM)
      Compliance Manager portal
    2. If you like, take the CM tour, or simply follow the demo here
    3. First we create a new assessment via "+ Add Assessment" and select "Office 365" as the product
      CM add Assessment
    4. In the following dialog, select (of course) GDPR and give it a name, e.g. "GDPR" 😉
    5. Clicking the name "GDPR" (or "My DSGVO", as in the picture) opens the assessment
    6. Besides the statistics for our assessment, we now see three important areas in particular:
      1. Office 365 in-scope cloud services
      2. Microsoft Managed Controls
      3. Customer Managed Controls


      The next click goes to "Office 365 in-Scope Cloud Services". In the expanded area we see all the Office 365 services that were automatically included in the assessment.

    7. Next we open the "Microsoft Managed Controls" area and then, directly, "Access Control"
    8. Here we see why the "Access Control" item was included (=> Description), that it already has the status "Implemented", and that the test of this control was passed successfully. Click "More ⇓"
    9. The "More" area describes the implementation in greater detail. [Optional] Click another control, e.g. "Authority and Purpose": we see that all "Microsoft Managed Controls" are already implemented and have passed their respective tests, to a (large) extent by external reviewers, due in part to certifications such as ISO 27001, 27017 and/or 27018
    10. Now we close the "Microsoft Managed Controls", open "Customer Managed Controls", and right away "More ⇓" as well
    11. Here we see a slightly different view, because now the point is not to rely on Microsoft, but to describe your own actions, based on the GDPR-compliant platform provided by Microsoft, and to define the responsible employee. To do this, click "Assign" below "Assessment Users", search for a suitable user, set an appropriate priority, write fitting notes into the field provided, and finish by clicking "Assign".
      CM assign a user
    12. If you like, you can now add exciting details about the implementation (a quick Lorem Ipsum on the keyboard is usually enough for me); what matters in this step is that the Details field contains some text of your own at all.
    13. Now expand the status and decide that "Planned" is appropriate for the moment.
    14. As the test date, pick a date in the near future and set "Test result" to "not assessed"
    15. For a complete assessment it is of course essential to fill in all the controls; we will skip that here, since the usage and purpose should hopefully be clear by now.
      Next we scroll to the top [personally, I would welcome a button at the bottom too; I have already suggested this to the responsible PM, so feel free to vote for it] and click the "Export to Excel" button
    16. The exported Excel file (example) contains all the data entered in CM, including the "Microsoft Managed Controls"
      Excel Export

     

    The capabilities of the Compliance Manager will be expanded over time and extended to further Microsoft services/platforms such as Microsoft Azure and Microsoft Dynamics 365.

     

    These demo guides provide an overview of how to use the respective solutions and products in the context of GDPR and do not constitute legally binding statements!

    Azure datacenter network infrastructure


    Hello everyone. This is Hirahara from Azure technical support.

    When you provide services in the cloud, everything you use sits "on the other side of the network." When something goes wrong and you need to troubleshoot, it helps to know how the network works. In this topic, we explain the actual network that supports Azure behind the scenes. This is not necessarily specific to Azure, but we hope you find it useful.

     

    Communication inside an Azure datacenter

    When thinking about the inside of a datacenter, the first question is probably "what does the inside of an Azure datacenter look like?" You may also have wondered how the datacenters are built and deployed.

    Azure datacenters are located all over the world, but in fact they are not uniformly built with the same equipment or devices everywhere. Of course, there are "conditions that must be met" based on regional requirements and general datacenter requirements. Essentially, they are deployed around the world in cooperation with local equipment vendors and, in some cases, facility providers. After deployment, the datacenter teams monitor and reinforce the equipment daily so that supply keeps up with changing demand.

    If you would like a concrete picture of an Azure datacenter, there is an introductory video about the datacenters, so please take a look.

    Datacenters around the world are connected by Microsoft's backbone network. This backbone network is connected to the internet, so it can also be reached from the internet. For example, when you connect from the outside internet (for instance via a general ISP), the connection passes through external routers to Microsoft's routers.

    Inside an Azure datacenter, there is in turn a single network and a pool of compute resources. The network consists of many routers and switches, and internally it is broadly divided into units called "clusters." These are split by function, for example compute services or storage services.

    Inside a cluster, there are blade servers that run the virtualization environment (called host nodes), racks that bundle the nodes, power, switches, routers, and so on, and together these form one set. On each node a virtualization environment (hypervisor) runs, and the virtual machines customers create are started and deleted on these nodes. If virtual machine A communicates with virtual machine B inside the datacenter, the traffic normally takes a path like the following.

     

    Reference

    • [Virtual machine A] → [Node A] → [Switch A] → …. → [Switch B] → [Node B] → [Virtual machine B]

     

    When communicating, virtual machines and each node (blade server) have their own IPs usable inside the datacenter, separate from the ones used in the "virtual network." When traffic flows, the filter on the node handles it appropriately (using a technology called VFP), so communication works. The links between switches are also multiplexed over multiple paths, which increases fault tolerance.

    On the other hand, for communication between virtual networks, the virtual network itself relies on Microsoft's network virtualization technologies (such as NVGRE), and these features make it work. Even for traffic between virtual networks, the virtualization environment inside Azure handles it, translating to the datacenter-internal IPs described above so that two-way communication is possible.
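    For the curious, the encapsulation idea behind NVGRE can be sketched in a few lines. This builds the NVGRE header as specified in RFC 7637 (GRE with the Key-present bit set, Transparent Ethernet Bridging as the payload type, and a 24-bit Virtual Subnet ID); it is an illustration of the standard format only, not how Azure's data plane is actually implemented.

```python
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build an NVGRE header per RFC 7637: 16 bits of GRE flags with the
    Key-present bit set, protocol type 0x6558 (Transparent Ethernet
    Bridging), then a 32-bit key = 24-bit VSID + 8-bit FlowID."""
    flags = 0x2000                            # K bit set, C and S clear
    proto = 0x6558                            # Transparent Ethernet Bridging
    key = ((vsid & 0xFFFFFF) << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, proto, key)

# The tenant's Ethernet frame would follow this 8-byte header on the wire.
print(nvgre_header(vsid=5001).hex())  # 2000655800138900
```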

    This topic is covered (in English) on the blog of Yousef Khalidi, CVP of the networking division, linked below; please have a look if you are interested.

    The network technologies that support Microsoft, including what is described above, have also been published at external academic conferences. These are in English, but please refer to them if you are interested.

     

    Azure datacenters and the internet

    What happens when you connect from the internet?

    As touched on above, Azure datacenters fundamentally belong to the Microsoft backbone network. When connecting from the internet, traffic reaches them via that backbone network. For example, when someone connects via an ISP, the connection takes a path like the following.

     

    Reference

    • [Client machine] → [Corporate network] → [Broadband router] → [ISP] → … → [Microsoft edge router] → [Datacenter router] → … → [Load balancer] → … → [Switch] → [Node] → [Virtual machine]

     

    Another fact worth knowing: for connections from the internet, multiple layers of defense are deployed to prevent attacks by outsiders (such as DoS attacks). The detailed configuration of these defenses is confidential and not publicly documented, but articles about network defense are collected below, so please take a look if you are interested. If you run services exposed to the internet, you should find them useful.

     

    Communication between Azure datacenters

    What about communication between Azure datacenters? For example, what happens to traffic between Japan East and Japan West, or even between Japan East and North America? You may have wondered about this as well.

    Connections between datacenters do not actually use the internet; they use Microsoft's own backbone network. If the traffic crosses continents, it travels over undersea fiber-optic cables available to Microsoft. These links are multiplexed over multiple routes, so even if one route fails, traffic is automatically rerouted around it. Microsoft is also investing in strengthening undersea cables to meet future bandwidth demand.

    This topic is covered (in English) on the blog of Yousef Khalidi, CVP of the networking division, linked below; please have a look if you are interested.

     

    Azure datacenters and dedicated circuits (ExpressRoute)

    As a way of connecting multiple sites, Azure offers ExpressRoute as a dedicated-circuit service. How does that connection work?

    ExpressRoute is a dedicated circuit, which makes it possible to connect to the various Azure services without going over the internet. Because it is a dedicated circuit, it runs directly to your on-premises environment (or connects to a carrier-provided WAN with a dedicated circuit attached); at the far end of the circuit sits the Microsoft Enterprise Edge (MSEE), which connects to Microsoft's datacenters. The MSEE is deployed redundantly, with two or more devices per site, so if you provision ExpressRoute, a redundant configuration is set up by default.

    The MSEE is located at a so-called Meet-Me site, the place where the carrier providing ExpressRoute connects to the MSEE. From the Meet-Me site, the MSEE connects directly to the target datacenters. This topic is covered (in English) on the blog of Yousef Khalidi, CVP of the networking division, linked below; please have a look if you are interested.

     


    How did you find this article? We hope the above is of some help when building your own deployments.

    --
    Azure Technical Support Team

    Microsoft Whiteboard Preview – the freeform canvas for creative collaboration


    Starting December 5, 2017, we have begun gradually rolling out the Microsoft Whiteboard Preview app, a freeform digital canvas where people, ideas, and content can come together for creative collaboration, available for download on Windows 10 devices.* Microsoft Whiteboard Preview is built for everyone who engages in creative, freeform thinking before reaching the final result. It is designed for teams that need to brainstorm, iterate, and work together both in person and remotely, and across multiple devices.

    During its private beta, we saw startups use it to gather images, mockups, and notes as an inspiration board for their next big idea. We witnessed marketing agencies use it in online meetings while working with clients on product design in real time. And our own team uses it to diagram engineering plans, with remote participants filling in their respective areas on the same working canvas. In short, we see Microsoft Whiteboard Preview as a way to improve how people go from personal ideation, team brainstorms, and group discussions to their final products.

    Collaborate effortlessly

    The boundless surface ensures that imagination has room to grow and gives space to everyone's ideas. Bring in teammates, whether they are across the room or in a different part of the world, with real-time collaboration across multiple devices. You can see where everyone is on the board and the updates they make, whether they add images, place sticky notes, or create a diagram. Now even remote workers can easily join in and contribute to the discussion.

    Work naturally

    Microsoft Whiteboard Preview lets you create in whatever way feels most natural. Pen-first, touch-first technology lets you make fluid gestures with your fingers or draw finer details with your pen. With the pen, you can write notes, draw precise illustrations, or search for images on the web. With your fingers, you can pan to different sections of your board, rotate the virtual ruler to the angle you choose, and drag and drop images to create a photo stack. Whether you use pen or touch, Microsoft Whiteboard Preview recognizes your intent and delivers the desired results in an instant.

    Crear de manera digital

    Con la versión previa de Microsoft Whiteboard, pueden utilizar tinta inteligente que reconoce sus dibujos de forma libre y los convierte en formas estándar, para que sea más sencillo crear tablas, esquemas y diagramas de flujo con una mejor presentación. Y a diferencia de las pizarras blancas tradicionales, la aplicación salva de manera automática sus tableros, para que puedan retomar donde se quedaron o compartir ligas de sus tableros, para que otros puedan construir sobre su trabajo.* No se requiere tomar fotos de sus lienzos o enviar fotos por email a los demás cuando necesitan trabajar de inmediato.

    We are truly excited for you to try Microsoft Whiteboard Preview, because we believe it will help you unlock creativity and harness the power of your teams. We look forward to your comments, suggestions, and feature requests through the Windows Feedback Hub, which you can access from within the app.

    *Microsoft Whiteboard Preview will reach all English-language versions of Windows 10 within the next 24 hours, and additional languages in the months that follow. The app is free to use for anyone with a Windows 10 device, but collaboration between different people requires one participant to have a personal, work, or school Office 365 account. For Surface Hub customers, Microsoft Whiteboard Preview will eventually replace the native whiteboard app that runs on Surface Hub. For now, you can install Microsoft Whiteboard Preview alongside your existing app.


