Here are the flagship features of the Datacenter edition, presented in video:
Storage Spaces Direct
Windows Containers
Software-Defined Networking Load Balancer
Cluster OS Rolling Upgrades
Employees connect both inside and outside the organization, but IT infrastructure does not always scale to meet that demand. This guide helps CIOs with their next steps:
- Manage disruptive technologies
- Address the time and cost challenges of implementation
- Balance risk and benefit
- Solve for the specific needs of each team
Download the guide and use it to improve collaboration across your teams.
Modern applications are complex, multi-tier business service systems that can span multiple datacenters and cloud environments. To manage these applications and ensure they meet their SLAs, you need a complete, detailed management solution that ties the different application components and infrastructure services together.
The Service Map solution is now generally available. It automatically discovers, in real time, the dependencies between servers, processes, and third-party services, and builds a common reference map of them. With Service Map, you can isolate problems and speed up root-cause analysis by visualizing processes and the dependencies between servers. You can manage incidents and improve SLAs by viewing cascading alerts, failed connections, load-balancing issues, and rogue clients, and the detailed inventory of process and server dependencies helps ensure nothing is left behind during a migration.
When performance problems or machine outages occur, the first difficulty is isolating the source of the problem. Without an understanding of how systems and application components are interconnected, each team member joins the call with their own tools and data, and the result is often finger-pointing rather than a resolution.
By automatically discovering dependencies across any workload with zero predefinition, Service Map removes the guesswork that used to be required to isolate the problem domain. With a common point of reference, teams can quickly focus on the problem area, reducing mean time to resolution (MTTR) and the resources required.
A common challenge for teams monitoring and supporting business-critical applications is filtering out the noise in existing alerts and performance metrics so they can focus on the ones that matter. Without a way to quickly identify what is truly important, operations teams cannot proactively identify and resolve issues before failures affect customers.
Service Map lets users understand the alerts and security issues that affect the dependencies of interconnected systems. With this analysis, you can quickly identify changes that could affect application performance, connectivity problems (such as a misconfigured firewall rule), or performance spikes that could affect end users, before your SLA is impacted.
Through Service Map's automatic dependency discovery and mapping, users can visualize data from multiple OMS solutions, such as Log Analytics, Change Tracking, Update Management, and Security Analysis, in context. Instead of looking at each type of data in isolation, you can now view all of the data for the systems you care about most, along with the data for their dependencies.
Beyond strengthening troubleshooting and root-cause analysis, Service Map also helps accelerate the migration of your applications and workloads to the cloud. Service Map removes the guesswork from isolating problems, identifies unexpected or broken connections in your environment, and lets you execute an Azure migration knowing that critical systems and endpoints will not be overlooked. Service Map also provides a REST API so you can easily pull dependency data into your existing tools and processes.
This solution is part of the Microsoft OMS Insight & Analytics offering. For more details, see the Service Map documentation and the Insight & Analytics solution page.
Following Matt Bongiovi's post at the Hey, Scripting Guy! Blog about PowerShell support for certificate credentials, I ported the main parts of the C# code he references in his post to PowerShell.
So here you have a quick-and-dirty Get-CertificateFromCredential function you can use to get the certificate for the credentials the user selected from the drop-down in the Get-Credential window:
function Get-CertificateFromCredential {
param([PSCredential]$Credential)
Add-Type -TypeDefinition @'
using System;
using System.Runtime.InteropServices;
public static class NativeMethods {
public enum CRED_MARSHAL_TYPE {
CertCredential = 1,
UsernameTargetCredential
}
[StructLayout(LayoutKind.Sequential)]
public struct CERT_CREDENTIAL_INFO {
public uint cbSize;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 20)]
public byte[] rgbHashOfCert;
}
[DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern bool CredUnmarshalCredential(
IntPtr MarshaledCredential,
out CRED_MARSHAL_TYPE CredType,
out IntPtr Credential
);
}
'@ -ReferencedAssemblies System.Runtime.InteropServices
$credData = [IntPtr]::Zero
$credInfo = [IntPtr]::Zero
$credType = [NativeMethods+CRED_MARSHAL_TYPE]::CertCredential
try {
# For a certificate credential, Get-Credential returns the marshaled credential blob as the user name
$credData = [System.Runtime.InteropServices.Marshal]::StringToHGlobalUni($Credential.UserName)
$success = [NativeMethods]::CredUnmarshalCredential($credData, [ref] $credType, [ref] $credInfo)
if ($success) {
# The unmarshaled structure contains the SHA-1 hash (thumbprint) of the selected certificate
[NativeMethods+CERT_CREDENTIAL_INFO] $certStruct = [NativeMethods+CERT_CREDENTIAL_INFO][System.Runtime.InteropServices.Marshal]::PtrToStructure(
$credInfo, [System.Type][NativeMethods+CERT_CREDENTIAL_INFO])
[byte[]] $rgbHash = $certStruct.rgbHashOfCert
[string] $hex = [BitConverter]::ToString($rgbHash) -replace '-'
# Look the certificate up by thumbprint in the current user's personal store
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList @(
[System.Security.Cryptography.X509Certificates.StoreName]::My,
[System.Security.Cryptography.X509Certificates.StoreLocation]::CurrentUser
)
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly)
$certsReturned = $store.Certificates.Find([System.Security.Cryptography.X509Certificates.X509FindType]::FindByThumbprint, $hex, $false)
if ($certsReturned.Count -eq 0) {
throw ('Could not find a certificate with thumbprint {0}' -f $hex)
}
# Return the first (and normally only) matching certificate
$certsReturned[0]
}
} catch {
throw ('An error occurred: {0}' -f $_.Exception.Message)
}
finally {
[System.Runtime.InteropServices.Marshal]::FreeHGlobal($credData)
[System.Runtime.InteropServices.Marshal]::FreeHGlobal($credInfo)
if($null -ne $store) { $store.Close() }
}
}
Then, you can use the function:
$cred = Get-Credential -Message 'Select the SMARTCARD'
Get-CertificateFromCredential -Credential $cred
Important: Keep in mind that the Get-Credential cmdlet doesn't verify the credentials anywhere; it just opens the $Host.UI.PromptForCredential popup and returns a PSCredential object. The credentials themselves are verified only when they are used with another cmdlet.
This means that the user can select a certificate from the drop-down in the Get-Credential window, enter an incorrect PIN, and this function will still return the certificate.
I've also been following issue #3048 in the PowerShell repository on GitHub. Hopefully, native support for certificate authentication will be added in a future version (6.1.0?).
HTH,
Martin.
(This article is a translation of Innovation and inclusion: Changing lives through culture and technology, posted on the Microsoft Partner Network blog on October 25, 2017. Please refer to the original post for the latest information.)
According to research by the World Bank (in English), roughly one billion people worldwide live with some form of disability, and one in five of them experiences a significant disability. Everyone should be able to lead a meaningful and fulfilling life, regardless of disability, and Microsoft and its partners are engaged in a variety of efforts to support that.
October is National Disability Employment Awareness Month in the United States. To mark the occasion, this article looks at how Microsoft and its partners are helping people around the world achieve more.
At Microsoft Inspire, we heard a striking story about a little girl named Lianna. Lianna suffers from retinopathy of prematurity, a rare eye disease that affects babies born prematurely, whose eyes have not fully developed, and that can lead to blindness. In the United States the condition is almost 100 percent treatable, but in Armenia, where Lianna was born, access to the specialized care she needed was limited.
Driven by a determination to help Lianna, Microsoft partner SADA Systems (in English) worked with Children's Hospital Los Angeles and the Armenian EyeCare Project (in English) to deliver a state-of-the-art telemedicine solution powered by innovation. Using Polycom devices and Microsoft's Skype for Business to stream high-fidelity video from the operating room, the doctors at Lianna's hospital were able to perform surgery together with specialists located more than 10,000 km away.
In a recent interview with Good Housekeeping (in English), Satya Nadella shared his family's personal perspective on how important it is to create opportunities for people with disabilities, like his own son, through innovative technology that anyone can use. At Microsoft, our goal is to maximize the contribution of every employee, building teams and driving innovation by embracing diverse ways of thinking. We are convinced that a workplace built on diversity and inclusion produces better products and solutions, and creates a better way of working for our employees and, ultimately, for workers around the world.
One especially impactful effort is "Project Emma." Designer Emma Lawton developed Parkinson's disease, and tremors in her hands made it difficult for her to write. Her friend Haiyan Zhang, a Microsoft researcher, wanted to build technology that could help her, and designed and prototyped a wristwatch-style device that counteracts the tremors with vibrations from small built-in motors. There is currently no cure for Parkinson's disease, which affects more than 10 million people worldwide. Even so, thanks to the watch Zhang created, Emma Lawton is able to write again.
The theme of this year's National Disability Employment Awareness Month is "Inclusion Drives Innovation" (in English). The message is that diversity of thought plays an essential role in driving innovation and helping employees succeed.
As the two stories above show, technology and business have the power to improve people's lives regardless of individual ability. What matters is cultivating a culture that excludes no one, and thinking creatively about how to put technological innovation to work.
Microsoft welcomes people with disabilities and strives to increase diversity in its business environment, including through its inclusive hiring program (in English), its supported employment program (in English), and partnerships with organizations in the United States and around the world. A diverse workforce is the fuel that powers the engine of corporate growth and innovation.
Have you seen inclusion and innovation transform a business? Please share your story in the Microsoft Partner Community (in English).
SharePoint is one of the most important collaboration platforms in the market, and Microsoft has invested heavily in the product throughout its life cycle, not only in the on-premises versions but also in SharePoint Online, which is part of Office 365.
Because of SharePoint's importance, Microsoft has provided a tool that allows end users and IT pros to migrate data from on-premises SharePoint and file shares to SharePoint Online.
The tool is very promising. I have gone through the whole process of migrating a sample document from my on-premises environment, hosted in Azure, to my SharePoint Online site, and in this blog post I will walk through the steps to do that.
Indeed, this tool is designed for migrating huge amounts of data to SharePoint Online, and its strongest point is that it is a free tool.
References and more links:
Service Fabric has supported orchestration of Linux containers since November 2016, and it now supports orchestration of Windows Server containers as well! For details about deploying and orchestrating containers on Service Fabric, read this article. For more information about Service Fabric, see the Azure Service Fabric overview.
With the OMS Container Monitoring solution on Service Fabric, you can:
Service Fabric provides an Azure Resource Manager template that installs the OMS agent on every node of a new Service Fabric cluster and creates an OMS workspace at the same time the cluster is deployed. Once the cluster is deployed, you can add the OMS Container solution (from the Azure Marketplace) to the OMS workspace, and it will start working automatically within a few minutes.
To set up monitoring and diagnostics through OMS for the containers that Service Fabric orchestrates, read this article.
In February 2017 we released a solution that supports monitoring of Windows Server and Hyper-V containers; for more details, see the Container solution documentation.
They say the nicest gift is the one you make yourself, so we have prepared a quick-start guide to Microsoft Forms for you. We believe it will come in handy and that in the new year you will no longer run your quizzes, tests, surveys, and registrations on paper, because in Microsoft Forms everything is faster and even grades itself. What more could you wish for? We hope you enjoy it.
1. Introduction
This post shows you how to load the Exchange Management Shell into ISE.
The ingredients we're using for this trick are:
2. Prerequisites
The prerequisites are that the Exchange Management Tools for Exchange 2010, Exchange 2013, or Exchange 2016 must be installed on the machine (which can be a desktop, a server dedicated to Exchange management, or an Exchange server itself). See this link to check which operating systems are supported for installing the Exchange Management Tools.
3. Principle
Basically, what we do here is copy the Exchange Management Shell "traditional" shortcut definition into the ISE profile file. In the script below, we also test that the Exchange cmdlets are not already present before loading them. For more information about the ISE profile, see this link.
What I find convenient with ISE is that you get IntelliSense and syntax highlighting when editing your Exchange PowerShell management scripts, and you can also copy and paste color-coded Exchange instructions for your blogging or documentation purposes, which I did for my last posts.
Also, with PowerShell ISE you have a command add-on pane that lists all the cmdlets loaded in your current session. Loading the Exchange Management Shell within ISE will therefore also show you all the Exchange (2010, 2013, 2016) cmdlets available for Exchange management. Note that this pane also lists all O365 and/or Azure cmdlets if you import a PowerShell session in which you are connected to your O365/Azure tenant.
#1- First start by forcing the creation of your PowerShell ISE profile file if it doesn't exist
Type or paste the following directly into the command pane of your ISE console and press Enter (or paste it into the script pane and then press F5, which is what I did in the screenshot below):
if (!(Test-Path -Path $PROFILE ))
{ New-Item -Type File -Path $PROFILE -Force }
See here for more details about ISE PowerShell profile…
#2- Second, within your ISE console, type the following
psEdit $profile
Note 1: this will open your $profile (for ISE, $profile is Microsoft.PowerShellISE_profile.ps1, as you can see in the title of the script pane that just opened; again, for more information about the ISE profile, see this link) in the ISE script pane.
Note 2: this psEdit command is available in ISE only, not in the text-based PowerShell console.
#3- In the script pane, just copy/paste the script below
The below script goes a little further than just loading the Exchange Management Shell into ISE:
$StopWatch = [System.Diagnostics.StopWatch]::StartNew()
Function Test-Command ($Command)
{
Try
{
Get-Command $Command -ErrorAction Stop | Out-Null
Return $True
}
Catch [System.SystemException]
{
Return $False
}
}
IF (Test-Command "Get-Mailbox") {Write-Host "Exchange cmdlets already present"}
Else {
$CallEMS = ". '$env:ExchangeInstallPath\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto -ClientApplication:ManagementShell "
Invoke-Expression $CallEMS
}
The ISE window will look like the below:
#4- Just save it, close your ISE, et voilà !
Now every time you open it, ISE will load the Exchange Management Tools as shown below, and will even indicate the time it took to load them. If milliseconds is a bit overkill, just use the "TotalSeconds" property instead of the "TotalMilliseconds" property in the $Time=$StopWatch.Elapsed.TotalMilliseconds line within your ISE's $Profile.
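For reference, the profile script shown above only starts the stopwatch; the timing lines it relies on are not reproduced here. A minimal sketch of how the profile could end, with an illustrative message and the TotalMilliseconds property referenced above:
$StopWatch.Stop()
$Time = $StopWatch.Elapsed.TotalMilliseconds
Write-Host "Exchange Management Tools loaded in $Time milliseconds" -ForegroundColor Cyan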
The article below provides a basic understanding of how to configure dynamic DNS updates when an application in a virtual machine scale set uses FQDNs to resolve hosts as scale-out occurs on Linux servers. The configuration steps below were performed on a Red Hat Enterprise Linux 7.2 VM scale set and uploaded as a custom image for the scale set. The same procedure can also be followed for a standalone Linux virtual machine in Azure if it needs to perform dynamic updates against the DNS server. Validate this in a test lab before bringing it into a production environment.
There are a few prerequisites that should be considered before moving on to the configuration on Red Hat Linux.
After validating the steps above, perform the following configuration on the Linux system.
The root password remains private. First log in with the user account you created. When you need to "become root", this is the command to use:
sudo -s
It will ask you to type in your own password again (not the root password, just your own). After that you will be logged in as root.
[root@vmss01irl000000 var]# cat /etc/resolv.conf
# Generated by NetworkManager
search reddog.microsoft.com
nameserver 10.0.0.5
[root@vmss01irl000000 dhcp]# nslookup dc01.azureinfra.info
Server: 10.0.0.5
Address: 10.0.0.5#53
Name: dc01.azureinfra.info
Address: 10.0.0.5
[root@vmss01irl000000 dhcp]# ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
64 bytes from 10.0.0.5: icmp_seq=1 ttl=128 time=1.85 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=128 time=0.558 ms
64 bytes from 10.0.0.5: icmp_seq=3 ttl=128 time=0.631 ms
--- 10.0.0.5 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.558/1.016/1.859/0.596 ms
[root@vmss01irl000000 dhcp]# vi /etc/dhcp/dnsreg.sh
#!/bin/sh
host=`hostname`
requireddomain=azureinfra.info
# Grab the current IPv4 address assigned to eth0
new_ip_address="$(ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)"
nsupdatecmds=/var/tmp/nsupdatecmds1
# Build the nsupdate command file: remove any stale A record, then register the current address
echo "update delete $host.$requireddomain a" > $nsupdatecmds
echo "update add $host.$requireddomain 3600 a $new_ip_address" >> $nsupdatecmds
echo "send" >> $nsupdatecmds
nsupdate $nsupdatecmds
Save and quit the file using the command :wq!
[root@vmss01irl000000 dhcp]# chmod +x /etc/dhcp/dnsreg.sh
[root@vmss01irl000000 dhcp]# chmod +x /etc/rc.d/rc.local
[root@vmss01irl000000 dhcp]# chmod +x /etc/rc.local
[root@vmss01irl000000 dhcp]# ls -l /etc/dhcp/dnsreg.sh
-rwxr-xr-x. 1 root root 373 Dec 17 02:20 /etc/dhcp/dnsreg.sh
[root@vmss01irl000000 dhcp]#
[root@vmss01irl000000 dhcp]# vi /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
sh /etc/dhcp/dnsreg.sh
:wq!
Note: The above behaviors can occur for all users within the site, or just for specific users.
All of these behaviors can occur if there's something wrong with the user's record within the User Information List for the site collection.
Try these same things in other site collections; they probably work just fine.
First, try deleting the user from the site collection:
$web = "http://teams.contoso.com/sites/t1"
$user = Get-SPUser -web $web | where {$_.UserLogin -like "*domain\UserName*"}
Remove-SPUser $user -web $web
Then try to add the user as a Site Collection Administrator (Site Settings | Site collection Administrators).
Note: This is a temporary troubleshooting step. Adding to site collection administrators takes a different code path and can bypass some of the problems you can run into when adding users to other SharePoint groups.
If the user is successfully added to Site Collection Administrators, then their user record should be fixed up. You can then remove the user from there and add them to the intended SharePoint groups / site permissions.
Another thing you could do is export, and then import the entire site collection. This basically pulls out all data from the site into some files and then puts it back. It's not a full fidelity backup and restore.
See this: https://technet.microsoft.com/en-us/library/ee428301.aspx
If you’re unable or unwilling to do the export / import, unfortunately, you’re probably in a state where the only fix is via direct database update, which is not supported without the proper diagnosis and approvals from the SharePoint Product Group. If you open a support case with Microsoft, we can help get you fixed up in a supported manner.
Run this PowerShell. It will output the ID for the User Information List.
$site = get-spsite http://theProblemSite
$web = get-spweb $site.rootweb.url
$list = $web.lists['User Information List']
$list.id
Now take that ID and use it in this SQL query against the content database:
SELECT * FROM AllUserData (nolock) WHERE tp_ListId = 'the User Info List ID' and tp_UIVersionString > 1.0 order by tp_id
If that query returns rows for your problem users (or really any rows at all), it means that versioning or moderation was enabled on your User Information List and now you have multiple versions of your user objects (not good). Unlike other types of list items, SharePoint does not expect more than one version of a user object. This is a complex problem involving multiple SQL tables. You can try the export / import, but you’ll probably need Microsoft support to fix up your SQL tables -- trust me, this is not a situation for a DIY fix.
Following the great success of the Azure Security Infrastructure book, Tom and I signed another contract with Microsoft Press, and we are working on a new book dedicated to Azure Security Center. This new book is now available for pre-order at Amazon, and it will cover all capabilities available in Security Center.
Stay tuned for more news about this, soon.
Through events and seminars, Microsoft provides useful technologies and a wide range of know-how to everyone working in IT.
You can search for Microsoft events, seminars, and training courses on the site below. If you are interested in Microsoft events, please take advantage of it.
Integration components are a good indicator of the health of a VM, because they can tell us whether the OS and its services have started properly. They do not replace OS/application monitoring, but they can give you a quick clue about the health state of your VMs. Here I used the Heartbeat integration component to verify VM state from a Hyper-V perspective. Going further, we created a daily report that is sent to our team with a summary for each cluster. Because the report is generated as HTML, we can also add some formatting to highlight failed VMs, e.g. in red (see the sketch after the script below).
#
#Creator: ramacan
#Last Modified: 06/11/14
#
# - Quick & Dirty detection of VMs heartbeat state
# - can run from any machine that has the FailoverClusters PowerShell module installed
# - checks only VMs in the Running state (there is a bug with saved-state VMs)
#V1.0 - better output view
cls
""
$clu=Read-Host "Cluster name which should be checked "
$Clusternodes=(Get-Cluster $clu | Get-Clusternode).Name | Sort-Object
$ClusternodeCount=$Clusternodes.Count
[array]$NoHBVMs=@()
""
Write-Host "scan heartbeat status for any VM running in cluster $clu" -foregroundcolor green
Write-Host "found"$ClusternodeCount" nodes in cluster $clu" -foregroundcolor green
" "
foreach ($node in $clusternodes) {
$AllVMs=get-vm -ComputerName $node | Sort-Object
$AllVMsCount=$AllVMs.Count
" "
Write-Host "scanning node $node...." -foregroundcolor green
Write-Host "found"$AllVMsCount" VMs at node $node" -foregroundcolor green
foreach ($VM in $AllVMs) {
$VMN=$VM.Name
$VMStat=(get-vm -computer $node "$VMN").State
if ($VMStat -match "Running") {
$HBStatus=(Get-VMIntegrationService -ComputerName $node -VMName $VMN -Name Heartbeat).PrimaryStatusDescription
if ($HBStatus -match "No Contact") {
Write-Host "$($VM.Name) has HB status - No Contact! Detected on host -> $node" -ForegroundColor Yellow
$NoHBVMs += $VM.Name
}
}}}
" "
if ($NoHBVMs.Count -gt 0) {
Write-Host "The following VMs need to be checked:" -ForegroundColor Green
$NoHBVMs
} else {
Write-Host "Great - no VMs with a failed heartbeat were detected in cluster $clu" -ForegroundColor Green
}
" "
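The script above only writes to the console. The daily HTML report mentioned at the top could be produced along the following lines; this is only a sketch, and the file path, styling, and object properties are illustrative:
$reportRows = $NoHBVMs | ForEach-Object { [pscustomobject]@{ Cluster = $clu; VM = $_; Heartbeat = 'No Contact' } }
$html = $reportRows | ConvertTo-Html -Title "Heartbeat report for $clu" |
    ForEach-Object { $_ -replace '<td>No Contact</td>', '<td style="color:red">No Contact</td>' }
$html | Out-File "C:\Reports\$clu-heartbeat.html"
# The resulting file can then be attached or embedded in a mail, e.g. with Send-MailMessage -BodyAsHtml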
With Intune's migration to the Azure portal, the default Device Enrollment Program (DEP) profile functionality was eliminated. While this prevents unintentional profile assignment, we heard from customers that they want to automate the assignment of enrollment profiles. Thanks to the Microsoft Graph API, you can do just that!
Keep reading to find out how you can automate this in your environment and save your Help Desk and Intune admins a lot of time!
Customers who use Apple's Device Enrollment Program often have new devices flowing into Intune daily. When this is the case, it can be a lot of work to constantly sign in to the Intune portal and ensure that all of your devices have a profile assigned before they reach your end users. If a device makes it to an end user before a DEP profile is assigned, it will be treated as a standard device and not a corporate-owned DEP device. To help decrease the overhead of ensuring all devices have a profile assigned, you can leverage the Graph API to do this work for you!
If you are using a "service account", the first time you run this script you will need to execute another script for it to work properly. By default, non-Global Admins do not have the ability to execute Graph API calls unless a GA delegates rights to them. To allow this account to execute the script, you first must assign it rights within Intune. The easiest way to do this is to assign the account the AAD role "Intune Service Admin", since this will give the account the rights it needs to execute these Graph API calls.
NOTE: You will not want to enforce MFA on this account, since you are trying to automate this action. MFA requires user input, so to make this a truly automated task, do not require MFA for this account.
Next, you need to run the Admin Consent script from our Intune PowerShell Samples site on GitHub. You can find a copy of this script here: https://github.com/microsoftgraph/powershell-intune-samples/blob/master/AdminConsent/GA_AdminConsent_Set.ps1
Tip: You can copy this into Notepad and save it as a .ps1
Once this is complete, you can set up the automated DEP profile assignment.
There are a few prerequisites you must complete before creating the Scheduled Task that will run the PowerShell script to automate assigning DEP profiles.
Now that you have your script, let's create the Scheduled Task. You should create this task on a server (or Windows 10 machine) in your environment that is consistently online and available, so that you can ensure the script will run at the same time every day. As a side note, on this machine you will need to make sure you have the latest Azure AD PowerShell module installed. You can install it by simply launching PowerShell as an admin and running this cmdlet: Install-Module -Name AzureAD
NOTE: If you are running a version of Windows Server (or Windows) other than Windows Server 2016 (or Windows 10) on the machine you are setting up the scheduled task, you will need to install the latest version of the Windows Management Framework to be able to install the Azure AD PowerShell module. You can download WMF 5.1 here: https://www.microsoft.com/en-us/download/details.aspx?id=54616
This will install the module on the server so that the automation script for assigning DEP profiles will work.
Back on track, here is what we need to do to create the task…
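The walkthrough that follows uses the Task Scheduler UI; as a rough PowerShell equivalent, something like the sketch below could register the daily task. The script path, task name, service account, and run time are illustrative assumptions, not values from this post:
# Run the DEP-assignment script every morning under the Intune service account
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Assign-DEPProfiles.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'Assign DEP Profiles' -Action $action -Trigger $trigger -User 'CONTOSO\svc-intune' -Password '<service account password>' -RunLevel Highest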
NOTE: To easily filter on devices that do not have a DEP profile assigned, follow the instructions on our docs site to do this directly in the Azure portal: https://docs.microsoft.com/en-us/intune/device-enrollment-program-enroll-ios#assign-an-enrollment-profile-to-devices. By loading the "unassigned" filter, as an IT Pro you can see if even a single device in your environment is missing a profile.
To learn more about the actions you can automate with Graph API, check out our Graph API samples on Github! You can find them all here: https://github.com/microsoftgraph/powershell-intune-samples
In this blog post, we used two of the sample scripts from GitHub. You can find both of them individually at these locations:
Authentication: https://github.com/microsoftgraph/powershell-intune-samples/tree/master/Authentication
Apple Enrollment: https://github.com/microsoftgraph/powershell-intune-samples/tree/master/AppleEnrollment
We hope this post helps you. Leave us a comment if you have questions or feedback!
-Sarah and Josh
"Azure periodically performs updates to improve the #reliability, #performance, and #security of the host infrastructure for virtual machines. All you need to get up to speed, in one post!"
> Bookmark this short URL! https://aka.ms/focuson/apm
> Last Updated: 18th December 2017 (periodically updated as a reference / index to relevant resources)
During a recent customer workshop, as we explored and started to map out their cloud journey, I got into a really good discussion about how Microsoft manages the underlying fabric of the Azure platform versus how this would be done in a typical on-premises environment. One of the advantages customers gain by moving to Azure is that the need to manage, patch, and update the physical infrastructure is removed, along with all the maintenance and management of the space, power, servers, storage, network, etc. typically associated with a data centre environment. Within Azure, this maintenance still needs to occur; however, it is managed by Microsoft.
Over the past year, a number of announcements have been made increasing the level of transparency to customers about how these updates are performed, allowing customers to better manage the availability of their core services and workloads. I wanted to take the opportunity to collate as many of these resources as are currently available into this single 'Focus on...' post, so that anyone can quickly skill up on how to take advantage of these new capabilities, such as the Planned Maintenance and Scheduled Events features, within your own Azure deployments.
>> Introducing... Azure Planned Maintenance!
Azure periodically performs updates to improve the reliability, performance, and security of the host infrastructure for virtual machines. These updates range from patching software components in the hosting environment (such as the operating system, hypervisor, and various agents deployed on the host) and upgrading networking components, to decommissioning hardware. The majority of these updates are performed without any impact to the hosted virtual machines. However, there are cases where updates do have an impact:
There have been a number of improvements to the planned maintenance experience in Azure, including better visibility and control of maintenance events that impact virtual machine availability - this introductory video covers how to create alerts, discover which virtual machines are scheduled for maintenance, and proactively start the maintenance using the Azure portal, REST API, Azure PowerShell, or Azure CLI.
During a communicated window, customers can choose to start maintenance on their virtual machines. If you do not utilize the window, the virtual machines will be rebooted automatically during a scheduled maintenance window (which is visible to you). Starting the maintenance will result in the VM being redeployed to an already-updated host. While doing so, the content of the local (temporary) drive will be lost.
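As a hedged example of what checking for and proactively starting maintenance looks like with Azure PowerShell (the AzureRM module of that era; the resource group and VM names are placeholders, and newer Az modules use slightly different cmdlet names):
# Check whether the VM has maintenance scheduled and whether the self-service window is open
$vm = Get-AzureRmVM -ResourceGroupName 'myRG' -Name 'myVM' -Status
$vm.MaintenanceRedeployStatus
# If IsCustomerInitiatedMaintenanceAllowed is true, start the maintenance now (the VM is redeployed to an updated host)
Restart-AzureRmVM -ResourceGroupName 'myRG' -Name 'myVM' -PerformMaintenance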
Native cloud applications running in a cloud service, availability set, or virtual machine scale set are resilient to planned maintenance, since only a single update domain is impacted at any given time.
You may want to use proactive-redeploy in the following cases:
Scheduled Events is one of the subservices of the Azure Metadata Service that surfaces information about upcoming events (for example, a reboot). Scheduled events give your application sufficient time to perform preventive tasks that minimize the effect of such events. As part of the Azure Metadata Service, scheduled events are surfaced via a REST endpoint from within the VM. The information is exposed on a non-routable IP so that it is not accessible outside the VM.
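For example, from inside a VM the endpoint can be polled with a simple REST call. The api-version shown below is the one documented around the time of writing and may need updating; treat it as an assumption:
# Query the Azure Instance Metadata Service for scheduled events (run this inside the VM)
$uri = 'http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01'
$events = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = 'true' } -Method Get
$events.Events | Format-Table EventId, EventType, ResourceType, Resources, EventStatus, NotBefore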
>> Documentation
For over 18 months now, docs.microsoft.com has been running as our new unified technical documentation experience; to learn more check out our blog post: https://docs.microsoft.com/en-us/teamblog/introducing-docs-microsoft-com. For additional documentation on Microsoft products or services, please visit MSDN (https://msdn.microsoft.com/) or TechNet (https://technet.microsoft.com/).
Planned Maintenance Documentation
https://docs.microsoft.com/
There are a number of useful articles to be aware of, dependent on the operating system of your virtual machine, as there will be some specific differences in how you can query the metadata service for upcoming scheduled events.
For Windows Virtual Machines:
For Linux Virtual Machines:
Azure Architecture Center
https://docs.microsoft.com/en-us/azure/architecture/
The Azure Architecture Center is the official centre for guidance, blueprints, patterns, and best practices for building solutions with Microsoft Azure, curated by the Microsoft patterns & practices team. Specifically in the context of mitigating the potential impact of maintenance events, applications should look to take advantage of high availability options, such as availability sets and availability zones (in preview at the time of writing):
There are also a number of Cloud Design Patterns regarding availability and resiliency which, where possible, should be architected into your application. Availability defines the proportion of time that the system is functional and working. It will be affected by system errors, infrastructure problems, malicious attacks, and system load. It is usually measured as a percentage of uptime. Cloud applications typically provide users with a service level agreement (SLA), which means that applications must be designed and implemented in a way that maximizes availability.
Resiliency is the ability of a system to gracefully handle and recover from failures. The nature of cloud hosting, where applications are often multi-tenant, use shared platform services, compete for resources and bandwidth, communicate over the Internet, and run on commodity hardware means there is an increased likelihood that both transient and more permanent faults will arise. Detecting failures, and recovering quickly and efficiently, is necessary to maintain resiliency.
>> Updates & Roadmap
As the Azure platform continues to evolve, be aware of these sites so you can subscribe to the latest updates and feature releases.
Azure Blog
https://azure.microsoft.com/en-us/blog/
Hear from Azure experts and developers about the latest information, insights, announcements, and news in the Microsoft Azure blog.
Azure Updates Blog
https://azure.microsoft.com/en-us/updates/
In addition to the Azure Blog, further detail on all updates into Azure are available on the Azure Updates Blog.
Azure Roadmap
https://azure.microsoft.com/en-us/roadmap/
As Azure continues to grow, you will want to stay informed. The product roadmap is the place to find out what's new and what's coming next. Let us know what you think by providing feedback and voting on items. You can also subscribe to notifications, so you'll always be in the know.
>> Podcasts
Listening to Podcasts can be a great way to keep up to date, especially when you're out and about, perhaps in the car on the way to work for example. While much of the Channel 9 content is also available in audio format, there are a small number of podcasts that have touched on Planned Maintenance in the past.
Microsoft Cloud Show
http://www.microsoftcloudshow.com/
Whether you are new to the cloud, old hat or just starting to consider what the cloud can do for you this podcast is the place to find all the latest and greatest news and information on what's going on in the cloud universe. Join long time Microsoft aficionados and SharePoint experts Andrew Connell and Chris Johnson as they dissect the noise and distil it down, read between the lines and offer expert opinion on what is really going on. Just the information … no marketing … no BS, just two dudes telling you how they see it.
>> Presentations
Throughout the year, Microsoft hosts a number of public events allowing both in-person and online attendance, while common to all is on-demand access to the recordings of most, if not all sessions presented. These are often given by the engineering teams working closely on the Azure platform itself, or by experienced architects who are working deep in the field in implementing Azure services to solve customer's business challenges.
Ignite 2017 - 25th to 29th September 2017
https://myignite.microsoft.com/videos/
Microsoft Ignite brings together the best of previously individual conferences - Microsoft Management Summit; Microsoft Exchange, SharePoint, Lync, Project, and TechEd conferences - into a single annual event, last held 25th to 29th September 2017 and showcases the company’s enterprise products and services, while providing incredibly valuable IT training. It also provides plentiful opportunities for IT professionals to get together for collaboration and networking.
Tuesday with Corey
https://channel9.msdn.com/Shows/Tuesdays-With-Corey
Corey Sanders answers your questions about Microsoft Azure - Virtual Machines, Web Sites, Mobile Services, Dev/Test etc. If you have a question, Corey will find the answer!
Azure Friday
https://channel9.msdn.com/Shows/Azure-Friday
Join Scott Hanselman as he engages one-on-one with the engineers who build the services that power Microsoft Azure as they demo capabilities, answer Scott's questions, and share their insights. Follow us at: friday.azure.com.
Microsoft Azure on YouTube
https://www.youtube.com/channel/UC0m-80FnNY2Qb7obvTL_2fA
Supporting videos and material are also posted independently to YouTube.
>> Code Samples
Various sample and introductory code snippets to take advantage of Planned Maintenance and Scheduled Events functionality.
Azure Code Samples
https://azure.microsoft.com/en-us/resources/samples/
Learn to interact with Azure services through code. A number of code samples are published via the Azure Code Samples library.
All Azure Code samples are available via GitHub.
Additionally, the following code sample can be found directly on GitHub:
Azure Quickstart Templates
https://azure.microsoft.com/en-us/resources/templates/
Deploy Azure resources through the Azure Resource Manager with community contributed templates to get more done. Deploy, learn, fork and contribute back. With Resource Manager, you can create a template (in JSON format) that defines the infrastructure and configuration of your Azure solution. By using a template, you can repeatedly deploy your solution throughout its lifecycle and have confidence your resources are deployed in a consistent state.
All Azure Quickstart Templates are available via GitHub.
>> Community
There are a large number of users of Azure out in the community, with many taking the time to document and share their experiences of using the Azure services. I've included a selection of individuals and articles here, but please let me know if you've found and can recommend other good resources.
Bob Rouderbush
https://roudybob.blog/
As a Cloud Solution Architect here at Microsoft, Bob Rouderbush maintains a personal blog at roudybob.blog.
Daniel Petri
https://www.petri.com/
Launched by Daniel Petri in 1999, the Petri IT Knowledgebase has served as a leading content and community resource for IT professionals and system administrators for more than 15 years.
CUGC - Citrix User Group Community
https://www.mycugc.org/
For the users, by the users, CUGC are dedicated to helping members and businesses excel. Members are technology professionals interested in maximizing the value of Citrix and partner products.
Bert Wolters
http://www.azureman.com/
The personal blog of Bert Wolters, MVP, currently working as a Technical Consultant at inspark in The Netherlands.
Hello everyone! Tim Beasley, Platforms PFE here again from the gorgeous state of Missouri. Here in the fall, in the Ozark Mountains area the colors of the trees are just amazing! But hey, I'm sure wherever you are it's nice there too. Quick shout out to my buds SR PFE Don Geddes (RDGURU), and PFE Jacob Lavender who provided some additional insight on this article!
I am writing this blog post to shed some light on the question of "How come we keep getting prompted warning messages about certificates when we connect to machines via RDP?" A couple of examples you might see when running the Remote Desktop Connection Client (mstsc.exe)…
If you've come across this in your environment, don't fret…as it's a good security practice to have secure RDP sessions. There's also a lot of misleading information out there on the internet… Being a PKI guy myself, I thought I'd chime in a bit to help the community.
The answer to the question? It depends.
Okay I'm done.
HA! If only it was that easy! You people reading this right now wouldn't be here if it were that easy, right?
To get started, I'm going to break this topic up into several parts. I'm also going to assume that whoever is reading this knows a bit of PKI terminology.
Unless there are security requirements that they must meet, most organizations don't deploy certificates for systems where they are simply enabling RDP to allow remote connections for administration, or to a client OS like Windows 10. Kerberos plays a huge role in server authentication so feel free to take advantage of it. The Kerberos authentication protocol provides a mechanism for authentication — and mutual authentication — between a client and a server, or between one server and another server. This is the underlying authentication that takes place on a domain without the requirement of certificates.
However, to enable a solution where the user can connect to the apps or desktops that you have published for them from ANY device and from ANYWHERE, then you eventually need to deploy certificates.
Let's be clear on one thing: The warning messages / pop-ups that end users see when connecting via RDP are a GOOD THING. Microsoft wants you to be warned if there's a potential risk of a compromise. Sure, it can be perceived as a hassle sometimes, but dog gone it…don't just click through it without reading what it's trying to tell you in the first place! Why not, you ask? Well, for one thing, using sniffing tools attackers can successfully extrapolate every single keystroke you type into an RDP session, including login credentials. And given that customers are often typing in domain admin credentials…you could have just handed an attacker using a man-in-the-middle (MITM) attack the keys to the kingdom. Granted, current versions of the Remote Desktop Client combined with TLS make those types of attacks much more difficult, but there are still risks to be wary of.
I'm going to go through a few scenarios where the warning messages can be displayed, and then how you can remediate them THE SUPPORTED WAY. I can't tell you how many times we've seen customers manually change registry settings or other hacks to avoid the warning prompts. However, what should be done is making sure the remote computers are properly authorized in the first place.
DO NOT JUST HACK THE REGISTRY TO PREVENT WARNING PROMPTS FROM OCCURRING.
Read the following quick links, and pick which one applies for your situation: (or read them all )
I'm going to begin this by saying that I'm only including this scenario because I've come across it in the past. We HIGHLY recommend you have an internal PKI/ADCS deployed in your environment. Although technically achievable, using self-signed certificates is normally NOT a good thing, as it can lead to a never-ending scenario of having to deploy self-signed certs throughout a domain. Talk about a management overhead nightmare! Additionally, the security risk to your environment is elevated…especially in public sector or government environments. Needless to say, any security professional would have a field day with this practice in ANY environment. IT life is much better when you have ADCS or some other PKI solution deployed in an organization.
A fellow colleague of mine, Jacob Lavender (PFE), wrote a great article on how to remove self-signed RDP certificates…so if you want the details on how you can accomplish this, check out this link!
Jacob has also written a couple of awesome guides that will come in handy when avoiding this scenario. The first one is a guide on how to build out an Active Directory Certificate Services (ADCS) lab, and the second link is for building out an RDS Farm in a lab. Both of course feature the amazing new Windows Server 2016, and they are spot on to help you avoid this first scenario. Just remember they are guides for LAB environments.
ADCS - https://gallery.technet.microsoft.com/Windows-Server-2016-Active-165e88d1
RDS Farm - https://gallery.technet.microsoft.com/Windows-Server-2016-Remote-ffc383fe
Off my soapbox now…back to the topic at hand:
More than likely, you've decided to RDP to a machine via IP address. I don't know how many users are out there that believe that this method is correct. Sure, it works…but guess what? You will always get the warning because you are trying to connect using IP address instead of a name, and a certificate can't be used to authenticate an IP address. Neither can Kerberos for that matter. So, RDP asks you to make sure you want to connect since it can't verify that this is really the machine you want to connect to. Main security reason: Someone could have hijacked it. (This is very easily done with environments that don't use secure DNS btw…)
Take a quick second to smack yourself for doing this, and make a mental note to establish RDP sessions using machine names going forward…go on, I'll wait. If by simply changing HOW you connect via RDP to machines (names vs IP address) fixes your problem…congrats! You can stop reading now. And in case you're wondering, yes…that's a supported solution. *stifles laughter*
However, if RDP using names still produces warning messages, then let's continue. You've launched the RDP client (mstsc.exe), typed in the name of a machine, hit connect…and up pops a warning regarding a certificate problem. At this point, this is typically because the self-signed certificate each server generates for secure RDP connections isn't trusted by the clients. Think of a Root CA certificate and the chain of trust. Your clients want to use/trust certificates that a CA issues, but they must trust the certificate authority that the certificates come from, right? RDP is doing the same thing. The client machine you're trying to establish the RDP session from doesn't have the remote machine's self-signed certificate in its local Trusted Root CA certificate store. So how do we remedy that?
Solution for this scenario – Export the remote machine's certificate (no private key needed) and create a GPO that disperses the self-signed certificate from the remote machine to the local machine. Import remote machine's certificate into a new GPO at Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Public Key Policies -> Trusted Root Certification Authorities.
This will install the machine's certificate accordingly on the local machine, so the next time you RDP using the remote machine's name, the warning vanishes. One little caveat though: Certificate SAN names for CNAME DNS entries. If you use CNAME (alias) DNS records in your environment, DO NOT try and connect to a machine using the CNAME entry unless that CNAME exists on the certificate. The name you're trying to connect to must exist on the certificate! Otherwise you'll get warnings despite the fact the cert is deployed in the local Trusted Root CA store. Just because it's trusted doesn't guarantee warnings are forever gone. You still must connect using the correct machine names.
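As a hedged illustration of the export step: the self-signed RDP certificate normally lives in the machine's "Remote Desktop" store, and only the public portion needs to be exported (the output path here is just an example):
# On the remote machine: export its self-signed RDP certificate (no private key)
$rdpCert = Get-ChildItem 'Cert:\LocalMachine\Remote Desktop' | Select-Object -First 1
Export-Certificate -Cert $rdpCert -FilePath 'C:\Temp\remote-machine-rdp.cer'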
Notice I didn't say to make any registry changes or click the little "Don't ask me again for connections to this computer" option? The idea is to get rid of the warning message the right way…heh.
Okay this scenario is a little like the previous one, except for a few things. Devil's in the details!
First, your domain-joined client should already have a valid chain of trust if ADCS is deployed…so that can't be the root cause. But perhaps it's not a domain-joined client…in that case get the appropriate certificate(s) installed on your local machine to have a valid chain of trust to eliminate that possibility. Moving on and re-referencing the info in Part 1, quit trying to RDP to an IP address, and make sure you're connecting to a machine that has a certificate that contains the name you're trying to establish an RDP session into. I don't believe I need to harp on that one any more...
Normally when deploying ADCS, certificate autoenrollment is configured as a good practice. In this instance, all users and machines can be configured to automatically enroll for a certificate, provided a published template's permissions are set correctly. But RDS is a bit different, since it can use certificates that not all machines have. For instance, just because a machine with autoenrollment enabled acquires a computer certificate from an ADCS issuing CA doesn't mean RDS will use it automatically. Remember, by default the local Remote Desktop Protocol will use the self-signed certificate…not one issued by an internal CA…even if it contains all the right information. If you want to use a certificate other than the default self-signed certificate that RDP creates, you must configure the RDP listener to use the custom certificate…just installing the cert isn't enough. If needed, refer to this article for additional info on configuring the RDP listener for WS2012/2012R2. Basically, the right certificate with the appropriate corresponding GPO settings for RDS to utilize…and that should solve the warning messages. So how do we do that?
Keep in mind the requirements of certificates that RDS uses:
Now that you have the certificate requirements, you'll want to create a custom certificate template with the above EKU settings (or none…but I've always used Server Auth or RDA). It's always best to use a custom certificate template, and not the default ones. But, I'm not going to completely go off on a PKI best practices rant here…that's for another day. (There's several articles that walk you through this process if you haven't done so already - here and here).
Once the template's created and scoped appropriately via permissions (autoenrollment or whatever) then it's time for the machine to request the certificate. Remember, certificates you deploy need to have a subject name (CN) or subject alternate name (SAN) that matches the name of the server that a user is connecting to! And in this scenario where the RDS Roles aren't deployed, then the subject name will typically be the machine's name…configure the certificate template to pull the subject name from AD. Manual enrollment is a bit time consuming, so I prefer autoenrollment functionality here.
What about computers that don't have RDS enabled, will they get those certificates too? Answer: If autoenrollment is configured and the template is configured to auto-enroll "Domain Computers", then yes. To keep the CA from handing out a ton of certs from multiple templates, just scope the template permissions to a security group that contains the machine(s) you want enrollment from. I always recommend configuring certificate templates to use specific security groups. Where certificates are deployed is entirely dependent on what your environment requires. Just take the time to plan and lab things out before deploying to production…
Next, we configure Group Policy. This is to ensure that ONLY certificates created by using your custom template will be considered when a certificate to authenticate the RD Session Host Server (or machine) is automatically selected. Translation: only the cert that came from your custom template will be used when someone connects via RDP to a machine…not the self-signed certificate.
Create a new GPO at the domain level (or OU...and don't use the Default Domain Policy…bad practice), then edit it. Navigate to Computer Configuration -> Policies -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Session Host -> Security. The option you want to set is "Server Authentication certificate template." Simply type in the name of your custom certificate template, and close the policy to save it. As soon as this policy is propagated to the respective domain computers (or forced via gpupdate.exe), every machine the GPO is scoped to that allows Remote Desktop Connections will use it to authenticate RDP connections.
Here's an example: In my lab, a custom certificate with the Remote Desktop Authentication EKU was installed via autoenrollment. I then created a GPO called "RDP Certificate" and linked it at the domain level. I updated group policy on a member server, and tested it.
Proof: In my lab, I got a warning message since I tried to RDP to an IP address. Image2 shows the OID for the custom EKU of Remote Desktop Authentication.
Of course, as soon as I try to connect using the correct machine name, it connected right up as expected. Warning went POOF!
Another way of achieving this result, and forcing machines to use a specific certificate for RDP…is via a simple WMIC command from an elevated prompt, or you can use PowerShell. The catch is that you must do it from the individual machine. You will need the thumbprint of the certificate you wish RDP to use, and the cert itself must exist in the machine's personal store with the appropriate EKU.
CMD:
wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="THUMBPRINT"
PowerShell:
$path = (Get-WmiObject -Class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").__path
Set-WmiInstance -Path $path -argument @{SSLCertificateSHA1Hash="THUMBPRINT"}
Quick, easy, and efficient…and unless you script it out to hit all machines involved, you'll only impact one at a time instead of using a scoped GPO.
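If you need to grab that thumbprint first, a quick sketch along these lines works; the subject filter is illustrative, and you may simply want to list everything and pick the right cert by eye:
# List candidate certificates in the machine's personal store and copy the Thumbprint value
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*$env:COMPUTERNAME*" } | Select-Object Subject, NotAfter, Thumbprint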
Now we get to the meaty part (as if I haven't written enough already). Unlike the above 2 scenarios, you don't really need special GPO settings to deploy certificates, force RDS to use specific certs, etc. The roles themselves handle all that.
Let's say Remote Desktop Services has been fully deployed in your environment. It can be 2008 R2 RDS, or 2012 / 2012 R2 RDS. Doesn't matter…or does it? Kristin Griffin wrote an excellent TechNet article detailing how to use certificates and, more importantly, why, for every RDS role service. Her article details RDS certificates for Server 2008 R2, GPO settings, etc. When it comes to WS2012 and WS2012R2, however, it gets easier and a bit less complicated. Just remember the principles are the same. Again, we use certificates to maximize security pertaining to Remote Desktop Connections and RDS.
By default, RD Session Host sessions use native RDP encryption. However, RDP does not provide authentication to verify the identity of an RD Session Host server. You can enhance the security of RD Session Host sessions by using Secure Sockets Layer (SSL) Transport Layer Security (TLS 1.0) for server authentication and to encrypt RD Session Host communications. The RD Session Host server and the client computer must be correctly configured for TLS to provide enhanced security. (https://technet.microsoft.com/en-us/library/ff458357.aspx)
First thing to check if warnings are occurring, is (yep, you guessed it) …are users connecting to the right name?
Next, check the certificate(s) that are being used to ensure they contain the proper and accurate information. Referring to the methods mentioned in
The following information is from this TechNet Article:
"In Windows 2008 and Windows 2008 R2, you connect to the farm name, which as per DNS round robin, gets first directed to the redirector, then to the connection broker, and finally to the server that hosts your session.
In Windows 2012 / 2012R2, you connect to the connection broker, and it then routes you to the collection by using the collection name.
The certificates you deploy need to have a subject name (CN) or subject alternate name (SAN) that matches the name of the server that the user is connecting to. For example, for Publishing, the certificate needs to contain the names of all the RDSH servers in the collection. The certificate for RDWeb needs to contain the FQDN or the URL, based on the name the users connect to. If you have users connecting externally, this needs to be an external name (it needs to match what they connect to). If you have users connecting internally to RDWeb, the name needs to match the internal name. For Single Sign On, the subject name needs to match the servers in the collection."
Go and read that article thoroughly. It talks about proper SAN names to include for external and internal naming for the 2012 / 2012 R2 RDS server roles. Only the RD Web Access and RD Gateway roles should ever be exposed to the Internet, which means obtaining a certificate for those roles from a Public CA.
Now that you have created your certificates and understand their contents, you need to configure the Remote Desktop Server roles to use those certificates. This is the cool part! For 2012 / 2012R2:
You can use a single certificate for all the roles if your clients are internal to the domain only, by generating a wildcard certificate (for example: *.CONTOSO.com) and binding it to all roles. Or you will use multiple certs if you have both internal and external requirements.
Note: even if you have multiple servers in the deployment, Server Manager will import the certificate to all servers, place the certificate in the trusted root for each server, and then bind the certificate to the respective roles. See! Told you it was cool! You don't have to manually do anything to each individual server in the deployment! You can of course, but typically not mandatory.
PRO TIP: For most scenarios where the client is not domain-joined but connecting via RDP to a machine that IS domain joined you should probably be using an RD Gateway…since in those scenarios the client is coming in externally anyways.
To recap…DON'T try to establish an RDP connection using an IP address. DO use the correct naming. DO use an internal PKI and/or GPOs. DO use custom templates with proper EKUs. DO use RDS.
1. If you don't have an internal PKI, use the self-signed certs...and always connect via server names (assuming the DNS suffix on the NIC is good) or FQDN. The other takeaway is: just get an internal PKI...
2. If you do have an internal PKI, then replace the self-signed certs using GPO and custom certs for the RDS service to use...and connect using server names or FQDN.
3. DON'T connect via IP (did I mention that before?)
And for all our sanity, do NOT mess with the security level and encryption level settings! The default settings are the most secure. Just leave them alone and keep it simple.
Thank you for taking the time to read through all this information. I tried to think of all the scenarios I personally have come across in my experiences throughout the past 25 years, and I hope I didn't miss any. If I did, please feel free to ask! Happy RDP'ing everyone!
Tim Beasley, Microsoft PFE – Platforms.
By Jacob McQuillan, Developer and Social Media Manager at Microsoft
When you're ensuring that the product you're making is top quality, whether that's a game or a PowerPoint presentation, it's much better to delay it than to rush the release and end up with something that isn't as good as it could be.
Some of you reading this may remember that Rockstar Games delayed Red Dead Redemption 2 this year. It was supposed to be released in the Autumn, however it's now set to come out in Spring 2018.
There was an uproar about this; people were absolutely furious that they had to wait longer for a game they were looking forward to so much. However, there was a minority that were happy that the game they were looking forward to would be released eventually, and their patience would be rewarded with a better game.
When it comes to the things you're personally working on, be it a presentation, a website or software, people will be much happier with the final product if you take as long as you need, rather than rushing or not finishing it.
Sticking with games as the example, let's say the team working on Call of Duty: WWII was concerned that the game wouldn't be finished in time for its release and was riddled with problems and errors. There are two ways this can go: release the game in its broken state, or push the launch back by a month and release it in the state that was originally intended.
If you release the game broken, you risk losing sales, having angry customers and receiving bad press. If you apologise and push the launch back, people may still get angry, but not as angry as they would have been if they'd bought a broken game. Now they have a game without major bugs that, although they had to wait for it, isn't broken!
AAA developers can’t always do this, as there are other factors to take into account such as marketing spend and shareholders. However, you can definitely do this with your app, game or website!
Although this article has focused on games, the important thing is to recognise when this applies to your own work. If you have the chance to make sure that something you release is top notch, you're going to want to take it. Even an extra 5 minutes can help improve code drastically.
A great way to do this is to always write unit tests, add comments to your code or even get someone else to critique your work. All three of these things can help make your software even better.
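For instance, a tiny Pester test is often all it takes to catch a silly mistake before you ship. This is just a sketch using Pester 4 syntax, and Get-Discount is a made-up function purely for illustration:

# A made-up function used only to illustrate the point.
function Get-Discount {
    param([decimal]$Price)
    if ($Price -ge 100) { return $Price * 0.9 }
    return $Price
}

Describe 'Get-Discount' {
    It 'takes 10% off orders of 100 or more' {
        Get-Discount -Price 200 | Should -Be 180
    }
    It 'leaves small orders unchanged' {
        Get-Discount -Price 50 | Should -Be 50
    }
}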
So next time, think before you commit your work to GitHub or update your website; be sure to double check it and only release it once you’re 100% happy!
Deployment of Windows 10 Updates using System Center Configuration Manager Current Branch
A question I get asked regularly is how to manage Windows 10 updates via System Center Configuration Manager. In this blog post I will explain the different options as well as the basic configuration of each. I assume you are already familiar with the ConfigMgr update deployment functionality; this post builds on that existing knowledge of ConfigMgr update management.
Before explaining how to manage Windows 10 updates with ConfigMgr, we need to make a distinction between the different update types. With the introduction of Windows 10 we can separate updates into two types: quality updates and feature updates.
More information about Windows as a service and the difference between the separate updates can be found here.
Prerequisites:
Before we can deploy these updates with ConfigMgr, the right catalog needs to be selected, and before selecting the catalog the prerequisites need to be in place. For ConfigMgr the prerequisite is that WSUS is installed and working correctly. In addition, before syncing and deploying feature updates on Windows Server 2012 and 2012 R2, at least the July monthly rollup (or a later one, since quality rollups are superseding) needs to be installed. These rollups provide the capabilities of the earlier updates KB3095113 and KB3159706. After installing the quality rollup, the 'wsusutil.exe postinstall /servicing' command needs to be run to enable ESD decryption. Please note that when running Windows Server 2016 these updates are not needed to synchronize the upgrade classification catalog.
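As a quick reference, this is what that post-install step looks like when run on the WSUS / software update point server. The path assumes a default WSUS installation; adjust it to your environment:

# Run on the WSUS server after the quality rollup has been installed,
# to enable ESD decryption so feature updates can be synced and deployed.
Set-Location 'C:\Program Files\Update Services\Tools'
.\wsusutil.exe postinstall /servicing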
After installing the prerequisites, we can select the right catalog from ConfigMgr.
From the ConfigMgr console: Administration -> Sites -> select the site server -> Configure Site Components -> Software Update Point Component Properties -> Classifications tab. Here we select the required classification; for feature updates this is the Upgrades classification.
After selecting the classification, we need to select the products. Please note that on a new installation a first sync needs to be completed before the Windows 10 products are visible in the product list. A synchronization can be initiated via: ConfigMgr console -> Software Library -> Software Updates -> right-click -> Synchronize Software Updates. Synchronization can be monitored by reviewing the wsyncmgr.log.
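If you prefer PowerShell over the console, here is a small sketch of the same steps. It assumes the ConfigMgr console (and therefore the ConfigurationManager module) is installed; PS1 is a placeholder site code and the log path is the default installation path:

# Load the ConfigMgr module and switch to the site drive (PS1 is a placeholder site code).
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'PS1:'

# Same as right-clicking Synchronize Software Updates in the console.
Sync-CMSoftwareUpdate -FullSync $true

# Follow the synchronization progress in wsyncmgr.log on the site server
# (default installation path shown; adjust to your environment).
Get-Content 'C:\Program Files\Microsoft Configuration Manager\Logs\wsyncmgr.log' -Tail 20 -Wait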
After the initial synchronization is finished we can select the products; in our case this should be Windows 10. To select the products we go to the Software Update Point Component Properties -> Products tab. Here we can select Windows 10, or narrow the selection down to individual versions. In my case I select Windows 10 as a whole.
After the selection is made and the synchronization has completed, the updates should be visible in the console under: ConfigMgr console -> Software Library -> All Software Updates.
Deployment of Quality Updates
The deployment of quality updates with SCCM can be done the traditional way, by using automatic deployment rules (ADRs) or manual deployments.
From ConfigMgr 1706 onwards there is an additional capability to deploy Windows Update for Business policies. By using these policies, we can configure Windows Update for Business deferral settings. Configuring these settings can also be accomplished through Group Policy or MDM, but please note that the behavior of Windows Update for Business is different!
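To give an idea of what such a deferral policy boils down to on the client, here is a hedged sketch of the Group Policy-backed registry values for Windows Update for Business. The 30-day and 7-day deferrals are example numbers only; double-check the value names against the Windows Update for Business documentation before relying on them:

# Example only: defer feature updates 30 days and quality updates 7 days.
$wufb = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path $wufb -Force | Out-Null
Set-ItemProperty -Path $wufb -Name 'DeferFeatureUpdates'             -Value 1  -Type DWord
Set-ItemProperty -Path $wufb -Name 'DeferFeatureUpdatesPeriodInDays' -Value 30 -Type DWord
Set-ItemProperty -Path $wufb -Name 'DeferQualityUpdates'             -Value 1  -Type DWord
Set-ItemProperty -Path $wufb -Name 'DeferQualityUpdatesPeriodInDays' -Value 7  -Type DWord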
More information about this behavior can be found here; more information about the more advanced options can be found here.
Deployment of Feature Updates
The feature updates of Windows 10 can be deployed in two different ways: by using Windows 10 servicing (servicing plans) or by using an upgrade task sequence.
A question I regularly receive is which of the two solutions to use. Both are valid, but servicing does have some limitations: a servicing plan currently does not support language packs, compatibility pre-assessment, or the addition of extra drivers. Long story short: an upgrade task sequence gives you more flexibility, because you can add manual steps and customize the upgrade process.
Windows 10 servicing can be configured via the Servicing section in the Software Library. Here we can create different servicing plans for the different deployment rings you want to introduce in your environment. We can filter on languages to limit the number of servicing updates that will be downloaded, and configure the deferral settings for the Semi-Annual Channel (Targeted) and Semi-Annual Channel. You are basically configuring an automatic deployment rule. Based on the deferral configuration and the collections selected, the servicing plan is created and can run automatically on a schedule.
The upgrade task sequence is a separate task sequence option which can be created from the Software Library -> Operating Systems -> Task Sequences section. Before creating this task sequence, we need to add an operating system upgrade package to the Software Library. Whereas a normal task sequence uses a .wim file, in this scenario we need to use the media of the Windows 10 release you want to upgrade to; in my example this is Windows 10 1709. During the upgrade task sequence, Windows 10 Setup is started with the appropriate command-line switches. The power of this way of upgrading Windows 10 to a newer release is the flexibility and the ability to customize the upgrade.
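For illustration only: an in-place upgrade started outside of ConfigMgr boils down to running Windows Setup from the 1709 media with the unattended upgrade switches, and the task sequence step does the equivalent for you. The paths below are placeholders:

# Placeholder paths; the upgrade task sequence step runs the equivalent of this for you.
& 'D:\Win10_1709\setup.exe' /auto Upgrade /quiet /noreboot /dynamicupdate disable /copylogs 'C:\Temp\SetupLogs'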
To add the operating system upgrade package, go to: Software Library -> Operating Systems -> Operating System Upgrade Packages and click Add Operating System Upgrade Package. Browse to the Windows 10 media content and add it to ConfigMgr. When the operating system upgrade package has been added, we can create an upgrade task sequence.
To create an upgrade task sequence, go to: Software Library -> Task Sequences -> Create Task Sequence. In the Create Task Sequence Wizard select "Upgrade an operating system from an upgrade package"; during the wizard we can select the operating system upgrade package and add updates or applications when needed. Eventually we end up with a task sequence of three steps, to which we can add additional customization when needed.
That wraps up this blog post. I hope it is helpful; please leave questions or comments below.
Thanks
Corné
This article is a translation of the Windows Security blog post "Making Microsoft Edge the most secure browser with Windows Defender Application Guard" (published October 23, 2017 in the US).
Adversaries are becoming more formidable in both determination and sophistication, and innovation in the attack space never stops. In step with the growing investment in defenses, attackers are adapting and improving their tactics at a furious pace. The good news is that defenders are evolving as well, using new technology to disrupt the techniques attackers have long relied on. Windows 10 does not just deliver ad hoc, reactive fixes for the latest attacks; rather, we dig into the root causes and transform the platform so that entire classes of attack can be eradicated. The most impactful improvements come in the form of attack surface reduction and architectural change. One example of this disruptive approach can be seen in Windows Defender Application Guard (WDAG).
WDAG introduces a streamlined version of Hyper-V virtualization technology that brings Azure cloud-grade isolation and security segmentation to Windows applications through Microsoft Edge. WDAG for Microsoft Edge provides the strongest isolation available today: with the recently released Windows 10 version 1709 (also known as the Fall Creators Update), Windows 10 Enterprise users can run the Microsoft Edge browser in a fully isolated hardware environment. This provides the highest level of protection against zero-day exploits, unpatched vulnerabilities, and web-based malware. The WDAG container gives users a temporary container environment for browsing the internet, and because the container is refreshed when the user logs off, malware cannot persist.
In recent years, software isolation of attack-prone applications such as browsers and document readers has become commonplace. When an application is compromised by an exploit, software isolation tries to limit the damage. With a sandbox in place, even if malicious code gets in through a successful exploit of the application, its access to data and resources on the host operating system is restricted, blocking post-compromise lateral movement and theft of sensitive information.
Attackers responded to the spread of sandboxes by quickly adapting their tactics and shifting their focus to kernel attacks. Because the vast majority of software sandboxes do not restrict the kernel attack surface, an attacker who achieves code execution inside a sandboxed app can "escape" and escalate the attack. This growing trend is confirmed by data collected by Microsoft's threat analysts on the number of known Windows kernel exploits.
Number of kernel exploits collected by Microsoft, per year
The sharp rise seen in recent years is attributable to attackers abusing kernel exploits to escape software sandboxes. Security-conscious enterprises can strengthen their posture by adding the kernel protection layer provided by Windows Defender Application Guard for Microsoft Edge on top of Microsoft Edge's class-leading exploit mitigations (in English) and isolation capabilities.
To counter the rise in kernel attacks, Microsoft has made a major technical breakthrough in sandbox technology. Windows Defender Application Guard uses hardware-backed virtualization to create a "miniature" version of the parent Windows OS that hosts Microsoft Edge while browsing the untrusted internet. If a user visits a link or site containing a full exploit chain, the container with its "guest" kernel is completely isolated from the host machine, which holds sensitive information, corporate data, and corporate credentials. This means that even a zero-day kernel exploit can compromise only the container, keeping user data, apps, the organization's network, and the rest of the OS safe. When the user logs off, the container is discarded and every trace of the attack is erased.
This isolation breakthrough was achieved by creating a new form of container technology that safely shares resources between the guest container and the parent OS. Unlike a standard virtual machine, WDAG container technology securely shares DLLs, executables, and other operating system resources between guest and parent, minimizing the resources needed to create the WDAG VM. As a result, the unique disk footprint of the WDAG container image is an astonishing 18 MB! In addition, the Windows operating system has been "enlightened" to fully support WDAG container apps: it preserves battery life by suspending or deprioritizing unused containers, and the user experience of container apps is comparable to native apps. Language settings, accessibility, and many other core operating system features all work through the container, so the advanced security WDAG provides is delivered almost transparently to the user.
Because security is the foremost part of the value proposition of WDAG container technology, Microsoft Offensive Security Research (OSR) and Windows Security Assurance (SA) partnered with the WDAG engineering team to build a thoroughly secure technology. The benefits of this partnership had a dramatic impact on WDAG itself and on the security we were ultimately able to build into it. We believe the process we used is a strong model for future security research and development at Microsoft, so we will present the details at the upcoming* Microsoft BlueHat Conference. Now that WDAG has shipped, the work to make it even more secure continues: the ongoing WDAG bounty program keeps the reviews going and pays up to USD 25,000 for issues found that affect the underlying hypervisor.
In short, WDAG delivers VM-level isolation at a very small cost in system resources and user experience.
* The Microsoft BlueHat Conference was held on November 8 and 9, 2017.
Customizing the user experience and the isolation is the most common topic when discussing isolation-based security solutions. Windows Defender Application Guard provides multiple policies so that organizations can tailor the user experience and security policy to their enterprise risk profile and security posture.
From a trust-decision perspective, the most important policy is the network isolation policy. The network isolation policy defines which URLs and network locations the enterprise does not manage or explicitly trust, and which are therefore opened in the isolated container environment rather than in the native host browser. WDAG simplifies this management with options for IP-based and host-based policy definitions. This policy is also shared with features such as Windows Information Protection and is used to prevent corporate data leakage.
The clipboard and print policies control user-initiated data exchange between the Windows 10 host and the WDAG container. The persistence policy determines whether all user-generated session data (cookies, downloaded files, temporary internet files, and so on) should be discarded when the container is recycled, or retained for later use within the container.
For details on the WDAG policies, see the product documentation.
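Before any of these policies take effect, the Application Guard feature itself has to be enabled on a Windows 10 1709 Enterprise client that meets the hardware requirements. A minimal sketch in PowerShell (run elevated); the policies themselves are then managed through Group Policy or MDM as described above:

# Turn on the Windows Defender Application Guard optional feature (requires a restart).
Enable-WindowsOptionalFeature -Online -FeatureName 'Windows-Defender-ApplicationGuard'

# Confirm the feature state after the restart.
Get-WindowsOptionalFeature -Online -FeatureName 'Windows-Defender-ApplicationGuard'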
Windows Defender Application Guard management options
For customers using Windows Defender ATP and Microsoft 365, WDAG offers deep integration with WDATP's post-breach protection and EDR (Endpoint Detection and Response) capabilities. This is an important integration point, because it allows customers using WDAG to see the malicious attacks that were blocked and isolated inside the container and to take further remediation and defensive actions across the multiple security layers Windows provides.
The WDATP team has developed a full range of container-specific IOAs (indicators of attack) that can detect browser and kernel compromise. These capabilities were showcased in a recent Microsoft Mechanics session (in English) that focuses on the power of the WDAG and WDATP combination as a pre- and post-breach solution in an integrated zero-day attack scenario.
WDAG container-related events displayed in the Windows Defender ATP console
Windows Defender ATP users are given the means to carry out container-specific investigations through visual cues and event filtering, while also getting an experience in which events from the container and the host can be investigated on a single timeline.
The robust defense created by combining WDAG's pre-breach isolation with the detailed investigation and analysis provided by Windows Defender ATP protects customers even from sophisticated, state-of-the-art attackers.
Windows Defender Application Guard adds hardware-level isolation to Microsoft Edge's already excellent exploit mitigations and sandboxing. This was achieved by building hardware container-based isolation into the core of Windows. WDAG delivers a near-native user experience with low resource consumption, deep OS enlightenment, and reasonable hardware requirements. Enterprises deploying the Fall Creators Update can adopt WDAG today and experience the benefits of Microsoft Edge, the most secure browser for the enterprise, backed by world-class hardware-rooted security.
David Weston (@dwizzzleMSFT)
Principal Group Manager, Windows & Devices Group, Security & Enterprise