
Microsoft 365 × Data-less PC Rental Pack


[Provided by: Yokogawa Rental & Lease Corporation]

The pack combining Yokogawa Rental & Lease's data-less PC solution, Microsoft 365, and PC rental helps companies drive workstyle reform.

 

■ What is the Microsoft 365 × Data-less PC Rental Pack offered by Yokogawa Rental & Lease Corporation?

The pack combining Yokogawa Rental & Lease's data-less PC solution, Microsoft 365, and PC rental helps companies drive workstyle reform.

<Data-less PC solution>
All data reads and writes that would normally go to the PC's internal disk are performed on a server instead, so no data is stored on the internal disk. This prevents data leaks when a device is lost. It can also be deployed at roughly one tenth the cost of VDI, which is another attraction.

 

<Microsoft 365 Business>
An integrated solution tailored to small and midsize businesses, enabling a way of working in which people can contribute even more, without being burdened by security worries or IT management.
● The latest Office: Beyond creating documents in the familiar Word, Excel, and PowerPoint, the Office included in Microsoft 365 has integrated communication features and smooth document sharing, so business productivity goes up as well.
● Teamwork and productivity: "Microsoft Teams," the business chat hub for your team, and "Skype for Business" for real-time communication promote team communication and collaboration anytime, anywhere. "Exchange Online" offers not only email but also calendar sharing, and combined with "OneDrive for Business" for online file sharing, team information can be shared securely.
● Stronger security: In addition to device management with "Microsoft Intune," "Windows Defender Security Center" lets you manage device security in one place. "Windows Information Protection" separates the applications used for business from those used personally and protects the information in files defined as belonging to business applications, giving you thorough protection against information leaks.

 

<PC rental>
The cost of rolling out a Windows 10 environment can be reduced significantly, and using lightweight, high-performance, up-to-date models can also be expected to improve efficiency for mobile work.

 

Yokogawa Rental & Lease offers not only the replacement of PCs with Windows 10 but also a rich set of operations and management services after deployment. Under the slogan of moving from "ownership" to "usage," the company provides the services essential to workstyle transformation, reducing the cost of operating and managing corporate PCs and improving productivity, and thereby drives workstyle reform.

 

 

 

 

 


Microsoft 365 Deployment Support (DIS Solution)


[Provided by: DIS Solution Co., Ltd.]

With an extensive track record in Office 365 adoption consulting, AD to Azure AD migration, Windows 10 migration, and more, we will support your Microsoft 365 deployment.

 

■ What is the Microsoft 365 deployment support offered by DIS Solution Co., Ltd.?

 

Good news for companies that are not sure what to do about security, or how to roll out Windows 10 and manage it afterwards.

What is Microsoft 365?
Microsoft 365 is an integrated package that bundles "security" and "Windows 10 licenses" on top of the familiar Office 365, so there is no need to combine and manage them separately.
Two plans are available, and you can choose the one that fits your company's size and security requirements.

DIS Solution Co., Ltd. maintains an up-to-date, highly skilled technical organization that can handle customer needs as a one-stop shop, covering cloud and data center services, system integration, application development, communication engineering, and system operations and support services. For system deployments, dedicated specialists in each technology work together so we can build the right solution quickly.
We also make use of the product procurement strength of our parent company, DAIWABO INFORMATION SYSTEM CO., LTD., one of Japan's top IT distributors, to deliver the optimal solution our customers are looking for.

Please feel free to contact us!

 

 

 

PowerShell Modules for Azure Active Directory


Hello, this is Miura from the Azure Identity team. In this post I will cover the types of Azure Active Directory (Azure AD) PowerShell modules and how to install them.

Azure AD can be managed mainly from the Azure portal, the Office 365 portal, the Graph API, and, as covered here, PowerShell. Since there are several PowerShell modules, let's start with what they are.

 

Types of Azure AD PowerShell

The Azure AD PowerShell modules include the following.

 

  1. MSOnline (Azure AD v1)

If you use Office 365 this module may feel familiar, because it is also the one used to assign Office 365 licenses. It has existed from the start, and it is also called the Azure AD v1 module when it needs to be distinguished explicitly from Azure AD v2, introduced below. Its cmdlets contain the string "Msol", as in Connect-MsolService. It is still the most widely used module for managing Azure AD from PowerShell.
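As a quick illustration of the kind of task this module is typically used for, here is a hedged sketch of connecting and assigning a license (the UPN, usage location, and SKU name below are placeholders, not values from this article):

# Connect with the v1 (MSOnline) module
Connect-MsolService

# Check license status and assign a license (placeholder UPN and SKU)
Get-MsolUser -UserPrincipalName user@contoso.onmicrosoft.com | Select-Object UserPrincipalName, isLicensed
Set-MsolUser -UserPrincipalName user@contoso.onmicrosoft.com -UsageLocation JP
Set-MsolUserLicense -UserPrincipalName user@contoso.onmicrosoft.com -AddLicenses "contoso:ENTERPRISEPACK"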

 

  2. Azure AD for Graph (Azure AD v2)

Originally Azure AD served mainly as the authentication platform for Office 365, but as more and more new features were added to Azure AD, the Azure AD PowerShell functionality also needed to be extended. Rather than extending the existing MSOnline module, the approach chosen was to develop a separate module, and the result was Azure AD for Graph (it is more common, and easier, to call it Azure AD v2, so that is the name used from here on). In principle, the functionality provided by MSOnline is also provided in Azure AD v2. For example, to create a user you use the New-MsolUser cmdlet with the MSOnline module, whereas with Azure AD v2 you use New-AzureADUser. Since no further extensions to the MSOL cmdlets are planned, new functionality is provided only in Azure AD v2. We would therefore like to simply tell you to use the Azure AD v2 cmdlets from now on, but they do not yet cover everything that the MSOL cmdlets provide, so, regrettably (and with our apologies), the traditional MSOL cmdlets still have to be used alongside them.
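To make the v1/v2 correspondence above concrete, here is a hedged sketch of creating the same user with each module (the UPN, display name, and password are placeholders):

# Azure AD v1 (MSOnline)
Connect-MsolService
New-MsolUser -UserPrincipalName taro@contoso.onmicrosoft.com -DisplayName "Taro Yamada" -Password "P@ssw0rd!"

# Azure AD v2 (AzureAD module)
Connect-AzureAD
$pp = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$pp.Password = "P@ssw0rd!"
New-AzureADUser -UserPrincipalName taro@contoso.onmicrosoft.com -DisplayName "Taro Yamada" -MailNickName "taro" -AccountEnabled $true -PasswordProfile $pp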

 

  3. Azure AD for Graph preview (Azure AD v2 preview)

New cmdlets are being added to the Azure AD v2 module at a rapid pace, and a preview version is also provided so that new functionality can be delivered as early as possible. We do not recommend using the preview version in production environments. Also, the regular (General Availability) version of Azure AD v2 and the preview version cannot be used side by side; if you install the preview version on a machine where the GA version is already installed, the preview version is the one that will be used.

 

How to install

<Prerequisites>

Strictly speaking, MSOnline (Azure AD v1) and Azure AD v2 require different versions of the .NET Framework, and for MSOnline the following are not always mandatory, but older versions can cause problems, so please meet the prerequisites below regardless of whether you use Azure AD v1 or v2.

 

.NET Framework: version 4.5 or later

PowerShell: version 5.0 or later

OS: Windows 7 SP1 / Windows Server 2008 R2 or later

 

* Strictly speaking, things such as the PowerShell version do not have to match these exactly for the modules to work, but PowerShell 5.0 or later is needed to install the modules and older versions can cause problems, so please satisfy the requirements above.

 

How to install the .NET Framework and PowerShell, and how to check which versions are currently installed, is described below.

 

.NET Framework

Install .NET Framework 4.5 from the following site.

 

Microsoft .NET Framework 4.5

https://www.microsoft.com/ja-JP/download/details.aspx?id=30653

If a message appears saying that it is already installed when you try to install it, then it is already present and nothing further is needed.

Even without actually running the installation wizard, you can check which versions of the .NET Framework are installed by opening a PowerShell window and running the following command.

 

Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -recurse | Get-ItemProperty -name Version,Release -EA 0 | Where { $_.PSChildName -match '^(?!S)\p{L}'} | Select PSChildName, Version, Release

 

In the example below, you can see that a newer version, 4.7.02046, is installed. (Depending on the OS, .NET 4.7 requires additional modules to be installed, which is why 4.5 is introduced here, but installing 4.7 is of course fine.)

PowerShell

To bring PowerShell up to version 5.0 or later, download and install Windows Management Framework 5.0 from the following site.

 

Windows Management Framework 5.0

URL: https://www.microsoft.com/en-us/download/details.aspx?id=50395

 

You can check the system's PowerShell version with $PSVersionTable.

In the example below, you can see that a newer version, 5.1.15063.726, is installed.

 

<How to install the Azure AD PowerShell modules>

Once the prerequisites above are met, install the modules with the following steps.

 

  1. Start PowerShell as an administrator.

 

  2. Run the following command to download the module.

 

MSOnline (Azure AD v1):

Save-Module -Name MSOnline -Path "C:\Program Files\WindowsPowerShell\Modules"

 

Azure AD for Graph (Azure AD v2):

Save-Module -Name AzureAD -Path "C:\Program Files\WindowsPowerShell\Modules"

 

Azure AD for Graph preview (Azure AD v2 preview):

Save-Module -Name AzureADPreview -Path "C:\Program Files\WindowsPowerShell\Modules"

 

The folder specified with -Path can be any folder other than the one above, but it must be a folder that already exists.

If a message appears saying that the NuGet provider is required, enter Y to proceed.

* To specify a version number explicitly, add the version information to the command, for example -RequiredVersion 2.0.0.115.
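For example, combined with the download command above it might look like this (the version number is just the one from the note; adjust as needed):

Save-Module -Name AzureAD -RequiredVersion 2.0.0.115 -Path "C:\Program Files\WindowsPowerShell\Modules"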

For MSOnline, you can also download the installation package from the following site.

http://connect.microsoft.com/site1164/Downloads/DownloadDetails.aspx?DownloadID=59185

 

  3. Run the following command to install the module downloaded in step 2.

 

MSOnline (Azure AD v1):

Install-Module -Name MSOnline

 

Azure AD for Graph (Azure AD v2):

Install-Module -Name AzureAD

 

Azure AD for Graph preview (Azure AD v2 preview):

Install-Module -Name AzureADPreview

 

That is all.

Finally, here are some reference links to close out this post.

 

References

Azure Active Directory PowerShell for Graph

https://docs.microsoft.com/ja-jp/powershell/azure/active-directory/install-adv2?view=azureadps-2.0

 

AzureAD (list of Azure AD v2 cmdlets)

https://docs.microsoft.com/en-us/powershell/azuread/v2/azureactivedirectory

 

AzureAD PowerShell's Profile (list of the latest Azure AD PowerShell releases: v1, v2, and v2 preview)

https://www.powershellgallery.com/profiles/AzureADPowerShell/

 

Microsoft 365 Deployment and Adoption Support, Maintenance and Support Services (Microlink)


[Provided by: Microlink Co., Ltd.]

Centered on the Tokai region, we can deploy Microsoft 365 for small and midsize customers and provide technical support. If you are considering upgrading your OS or Office, please contact us.

 

■ What are the Microsoft 365 deployment and adoption support, maintenance and support services offered by Microlink Co., Ltd.?

We have built a large number of Microsoft cloud services, mainly for small and midsize businesses.
As a certified Microsoft Gold Partner, we constantly maintain a high level of technical skill and knowledge to deliver high-quality services.

For Microsoft 365 products as well, we consult with you and provide the services best suited to your needs, from proposal through environment build, deployment support, operations support, and maintenance.

Please feel free to contact us.

 

 

 

 

A Total Solution of Microsoft 365 Deployment Support, Adoption and Training Support, and Technical Support Services (Touch)


[Provided by: Touch Co., Ltd.]

Drawing on our extensive support experience and our knowledge of PCs and servers, we have now also launched Microsoft 365 deployment and technical support services. If you are considering upgrading your OS or Office, please contact us.

 

■ What is the total solution of Microsoft 365 deployment support, adoption and training support, and technical support services offered by Touch Co., Ltd.?

We provide a total, worry-free support service, from the initial consultation when you deploy Microsoft 365 through follow-up after deployment.

 

"Microsoft 365" is a great-value service that brings together the following Microsoft services:
・Office 365
・Enterprise Mobility + Security (EMS)
・Windows 10

Microsoft 365, the package that combines business tools, the OS, security, and device management, is recommended for small and midsize companies such as:
・Companies with no IT staff, or with too few
・Companies not using Active Directory
・Companies that want to make use of collaboration features with Office 365
・Companies that need a device management and security solution
・Companies struggling with the need for business continuity planning (BCP)

We stay close to companies considering Microsoft 365 and provide the following support services:
・Proposals for a deployment plan, PC selection, technical assistance, and more during the evaluation phase
・PC setup at deployment time, in-house education and training, and more
・Support when problems or faults occur after deployment, and more
With support plans tailored to your needs, we provide the services you need to use Microsoft 365 with peace of mind.

This is the kind of dependable support service that only a company with more than 20 years of support experience and deep knowledge of PCs and servers can provide.

If you are considering upgrading your OS or Office, or moving to a more secure cloud environment, please feel free to contact us.

[Contact]
IT Support Center Touch
TEL: 052-806-8899

 

 

 

 

Q&A on Azure AD Conditional Access


Hello, this is Takada from the Azure & Identity support team.

This post is about Azure AD conditional access, a topic we receive many inquiries about.

We have compiled the most frequently asked questions in Q&A format. We will keep expanding this content with behaviors and questions not covered by the existing documentation, so we hope you will refer back to it.

 


 

Q. We are using Office 365. Can we use conditional access?

A. Yes, you can. Customers using Office 365 are already using Azure AD as their authentication platform, so conditional access becomes available by additionally purchasing Azure AD Premium licenses.

 


 

Q. Can applications published with Azure AD Application Proxy also be controlled with conditional access?

A. Yes, they can. Any application registered in Azure AD can be controlled with conditional access. Applications published with Azure AD Application Proxy, and applications you developed yourself and registered in Azure AD, can also be targeted.

 


 

Q. When applying conditional access rules to guest users invited through Azure AD B2B collaboration, do we need to purchase Azure AD Premium licenses for them?

A. No. Guest users can use Azure AD Premium features, including conditional access, for up to five times the number of Azure AD Premium licenses assigned to the tenant. See the following documentation for details.

Azure Active Directory B2B collaboration licensing guidance
https://docs.microsoft.com/ja-jp/azure/active-directory/active-directory-b2b-licensing

 


 

Q. If we purchase Azure AD Premium licenses for the number of users involved, can we skip assigning them to those users?

A. No. It is a requirement not only to purchase licenses for the number of users but also to assign them to the users.

 


 

Q. How many Azure AD Premium licenses do we need to purchase in order to use conditional access?

A. Azure AD Premium (P1 or higher) must be assigned to every user whose access to applications is evaluated by the conditional access feature. In the current implementation, even a user without an Azure AD Premium license will have access restricted according to any conditional access policy that targets them, but using the feature in that state is a license violation.

 


 

Q. Can we create multiple conditional access policies and set a priority order for how they are applied?

A. No, priorities cannot be set. In conditional access, each policy is independent, and every policy whose conditions are matched is applied. Please consider configuring your policies so that their conditions do not overlap.

 


 

Q. After configuring conditional access for Exchange Online, conditional access was also applied to the Office 365 portal. Is this expected behavior?

A. Yes, this is expected behavior. Since August 24, 2017, conditional access targeting Exchange Online or SharePoint Online is also reflected on the Office 365 portal. For details, see the following link (in English).

An update to Azure AD Conditional Access for Office.com
https://cloudblogs.microsoft.com/enterprisemobility/2017/08/04/an-update-to-azure-ad-conditional-access-for-office-com/

 


 

Q. We entered the client's IP address range in the [Locations] condition of conditional access, but access is not being controlled. Why?

A. The [Locations] condition of conditional access uses the global IP address your organization uses when communicating with the outside (the source global IP address as seen from Azure AD). For example, suppose clients inside the company have private IP addresses and communicate with Azure AD through a gateway that has a global IP address. In that case, from Azure AD's point of view the source IP address is the gateway's global IP address. In such an environment, specify the gateway's global IP address in the [Locations] condition of conditional access.

 


 

Q. Can conditional access use the X-Forwarded-For HTTP header or similar to determine the source IP address of a client inside the organization?

A. No. Conditional access cannot use the X-Forwarded-For HTTP header to determine the source IP address of a client inside the organization. The [Locations] condition of conditional access uses the global IP address of the gateway through which your organization communicates with the outside (the source global IP address as seen from Azure AD), and Azure AD uses that global address as the location.

The X-Forwarded-For HTTP header is one of the HTTP header fields. It is used so that, even when a load balancer or similar device translates the client's source IP address, the destination server can identify the original client IP address from the information added to the HTTP header. However, the information in the X-Forwarded-For header is an IP address inside the organization and does not indicate a location. For this reason, Azure AD currently uses the global IP address of the organization's outbound gateway (the source global IP address as seen from Azure AD) for this control.

 


 

Q. Can claim rules and conditional access be used together?

A. Yes, technically they can. In a federated environment using AD FS, conditional access runs after the claim rules have been evaluated. If authentication is denied by a claim rule, the subsequent conditional access processing does not run. However, because they are similar features, considering the operational complexity we recommend using mainly one or the other.

 


 

Q. Our conditional access configuration ended up blocking access for all users. Can the configuration be removed?

A. Unfortunately, in this situation you cannot remove it yourself. If you would like the configuration removed, please use our support service. Since you presumably cannot access the Azure portal either, please open the support request from another tenant you own.

To use the support service, select Azure Active Directory in the Azure portal and then choose [New support request]. You can contact us from a screen like the one below.

 

 

We hope the above is of some help.

For official statements and answers about product behavior, our support team will respond after gaining a full understanding of your environment, so please make use of our support services.

* The contents of this post (including attachments and links) are current as of the date of writing and are subject to change without notice.

Applying AI to Cancer Treatment Together with Healthcare Partners


[Blog post date: November 28, 2017]

Posted by: Allison Linn

Antonio Criminisi, principal researcher on the InnerEye project (photo: Jonathan Banks)

 

A team of artificial intelligence experts at Microsoft's research lab in Cambridge, UK, has spent more than a decade working out how to make AI-assisted cancer treatment more targeted and effective.

Now the research team behind one of those projects, InnerEye, is looking for help from third-party software providers to better understand how to integrate its research results into the tools that medical professionals use to plan cancer treatment. This is part of Microsoft's Healthcare NExT initiative.

On Tuesday, in a keynote at the Radiological Society of North America annual meeting in Chicago, the project's principal researcher, Antonio Criminisi, said that the goal of the private preview is to find partners who can help integrate the project's research results into third-party medical software products.

"This is a big learning opportunity for us," said Criminisi, a principal researcher at Microsoft's UK lab.

The InnerEye research project draws on two major areas of AI, machine learning and image recognition, to help medical software providers deliver tools that make it easier to distinguish benign from malignant tumors and that radiation oncologists can use in radiation therapy.

This cloud-based "radiomics" service (the science of systematically handling large amounts of radiological information) is intended to let providers build products that free radiation oncologists and dosimetrists to focus on more detailed work, such as editing and adjusting the results.

For example, today the work of drawing boundaries on images is a time-consuming, costly manual process. As a result, it is often done only once, at the start of treatment.

Third-party solutions using InnerEye technology would make it practical to monitor the condition over the course of treatment and to make choices such as adjusting chemotherapy based on how the patient responds. In the future, this could lead to chemotherapy that is more targeted and effective.

Criminisi, who has spent years researching the application of advanced AI to cancer treatment, says he is delighted that the research is finally starting to contribute to real-world medicine and deliver social benefit; at the same time, because his team's specialty is AI research rather than healthcare, they are seeking outside partners to make the best possible use of the results.

"We can't do it on our own," Criminisi said.

 

---

All content on this page is current as of the date of writing and is subject to change without notice. Where formal internal approval or contracts with other companies are required, nothing is final until those are in place. In addition, for a variety of reasons and circumstances, some or all of this content may be changed, canceled, or become difficult to realize. Thank you for your understanding.

Office 365 Complete Pack


[Provided by: Japan Business Systems, Inc.]

So that you can put the service to use smoothly, we support the various preparation tasks that arise between signing the contract and starting to use it.

 

■ What is the Office 365 Complete Pack offered by Japan Business Systems, Inc.?

 

<Service overview>
JBS provides total support for your Office 365 ProPlus deployment!

 

● Planning
Based on our extensive deployment experience, we draw up the deployment plan best suited to your environment.
We work out a smooth deployment plan that fits your situation and examine the update flow to follow after deployment.

 

● Preparing the Office 365 environment
For smooth use of Office, we prepare the integration between your Office 365 environment and your on-premises environment.
* This item is not mandatory; we first hear your requirements, determine whether integration is needed, and then make a proposal. If you are already using Office 365, this item is not needed.
* Some tasks (such as license registration) will be handled by the customer.

 

● Deploying Office 365 ProPlus

 

● Help desk service
Provided in the form that fits your needs, such as remote or on-site.
* The help desk service is optional.

 

● Prerequisites
・The Office version after deployment will be the latest version of Office available when the deployment work is completed.
・Office 365 ProPlus is not the web version of Office; the locally installed Office can normally be used offline, but if it does not connect to the internet at least once every 30 days it switches to reduced functionality mode and only limited features are available.
・Because of how Office 365 ProPlus works, Office is installed fresh rather than upgraded, so customization settings and macros used in your existing Office cannot be carried over. If you would like JBS to handle this, we will provide a separate estimate; please contact us.

 

 

 


The latest System Center 2012 R2 update rollup has been released!!


Hello, this is Masudo from the Japan Microsoft System Center Support Team.
We are a little late in announcing it, but a System Center update rollup was published last week.
Mainstream support for System Center 2012 R2 has already ended, but this release has been published to add support for Transport Layer Security (TLS) protocol version 1.2.

 

Deployment guide for TLS 1.2 protocol support in System Center 2012 R2
https://support.microsoft.com/ja-jp/kb/4055768

 

Depending on the update, the databases may be updated automatically when it is applied.
Therefore, in case applying the update fails or the update hits a fatal error, please back up the system and the databases before applying it whenever possible.
For System Center products, unless an update explicitly instructs you to uninstall it, problems may occur after uninstalling an update that has been applied.

This update can be downloaded and installed via Microsoft Update. In offline environments, you can also manually apply the packages downloaded from the Microsoft Update Catalog. For detailed installation steps and the contents of the fixes, see the respective links.

 

Description of Update Rollup 14 for Microsoft System Center 2012 R2
https://support.microsoft.com/ja-jp/kb/4043306

 

 

・Data Protection Manager (KB4043315)
https://support.microsoft.com/ja-jp/kb/4043315
* Agents must be updated after the update is applied.

 

・Operations Manager (KB4024942)
https://support.microsoft.com/ja-jp/kb/4024942
* After applying the update, registry changes, SQL script execution, management pack imports, and so on are required.
The order in which components are updated is also specified, so please take care.
* You may be prompted to restart the SCOM management server when applying the update.

 

・Orchestrator (KB4047356)
https://support.microsoft.com/ja-jp/kb/4047356
* Check the prerequisites and related notes when applying the update.

 

・Service Manager (KB4024037)
https://support.microsoft.com/ja-jp/kb/4024037
* Pay attention to dependencies on other components during installation.

 

・Virtual Machine Manager (KB4041077)
https://support.microsoft.com/ja-jp/kb/4041077
* Host agents must be updated after the update is applied.
* You may be prompted to restart the SCVMM server when applying the update.

 

A Secure Azure


By Olivier Subramanian

Olivier is a Cloud Solution Architect in the UK Customer Success Unit.  He has been helping companies achieve digital transformation for over 20 years and leads the Azure security interest group in the CSU.

Cybersecurity is a big issue. As society moves through the digital revolution, we rely more and more on the applications and technology we build to protect our data and assets than on 6 ft concrete walls, armed guards and CCTV; and it's something that a lot of our customers are struggling to understand, accept and embrace, because hey, who likes change?

The frequency and sophistication of cybersecurity attacks are escalating, with attackers residing within a victim's network for an average of 140 days before being detected, and a single data breach costing in the region of $4 million. Even worse, more than 75% of these intrusions are only made possible by compromised user credentials. Sobering statistics, right?

For some, this is enough to make them shy away from storing their data in their own data center, let alone someone else's, but they would be wrong to worry. Microsoft spends $1 billion a year on cybersecurity, with much of that going into making Microsoft Azure the most trusted cloud platform there is: encrypting data at rest and in transit, machine learning for threat detection and penetration testing, and risk-banded user access with Azure Active Directory.

With so much being invested in advancing the architecture and design of the Azure cloud platform, your capabilities as a consumer grow with us too. Following on from Microsoft's Ignite conference, where our partners came together to celebrate what we've achieved and learn about our future, we have a few new announcements for you from our cloud platform:

The use of prescriptive application whitelisting to learn application patterns and recommend whitelists

Once compromised, an attacker will likely execute malicious code on a VM as they take action toward their objectives. Whitelisting legitimate applications helps block unknown and potentially malicious applications from running, but historically managing and maintaining these whitelists has been problematic. Azure Security Center can now automatically discover and recommend whitelisting policies for a group of machines and apply these settings to your Windows VMs using the built-in AppLocker feature. After applying the policy, Azure Security Center continues to monitor the configuration and suggests changes, making it easier than ever before to leverage the powerful security benefits of application whitelisting.
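As a side note, if you want to see locally what whitelisting rules ended up in effect on a VM after such a policy is applied, a hedged sketch using the standard AppLocker cmdlets (this is a local check, not a Security Center API) might look like this:

# Inspect the effective AppLocker policy on the VM, as XML
Get-AppLockerPolicy -Effective -Xml

# Or just list the rule collections
(Get-AppLockerPolicy -Effective).RuleCollections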

The ability to gain valuable insights about your attackers with threat intelligence from Microsoft (Interactive map and threat reports)

By using the threat intelligence option available in Security Center, IT administrators can identify security threats against the environment. For example, they can identify whether a particular computer is part of a botnet. Computers can become nodes in a botnet when attackers illicitly install malware that secretly connects the computer to the command and control. Threat intelligence can also identify potential threats coming from underground communication channels, such as the dark web. To build this threat intelligence, Security Center uses data that comes from multiple sources within Microsoft. Security Center uses this data to identify potential threats against your environment.

The choice to explore notable links between alerts, computers, and users to triage alerts, determine scope, and find root cause

Security Center has added a new visual, interactive investigation experience, now in preview, which helps you quickly triage alerts, assess the scope of a breach, and determine the root cause. Explore notable links between alerts, computers, and users that indicate they are connected to the attack campaign. Use predefined or ad hoc queries for deeper examination of security and operational events.

The function of automating security workflows with Logic Apps integration

Security Center now integrates with Azure Logic Apps to automate and orchestrate security playbooks. Create a new Logic Apps workflow using the Security Center connector, and trigger incident response actions from a Security Center alert. Include conditional actions based on alert details to tailor the workflow based on alert type or other factors. Automate common workflows such as routing alerts to a ticketing system, collecting additional data to help during an investigation, and taking corrective action to remediate a threat.

 

The management of security across all HYBRID cloud workloads in one console

You can now onboard VMs and computers running on-premises and in other clouds by simply installing the Microsoft Monitoring Agent on these machines. For Operations Management Suite (OMS) Security & Compliance customers, connected computers will be automatically discovered and monitored by Security Center.

Cloud First, Safety First


Log entry 171204:

Every hour, hacker attacks, data loss and system outages cause economic damage of more than 100,000 euros in Germany. At least, that is the figure you get if you follow the analysts who have calculated total damage of around one billion euros per year. And that estimate only guesses at the attacks on corporate computer systems that are never reported, or never even noticed; the actual damage could be far higher. It also does not account for the fact that data theft, while it may not cause immediate damage, can certainly lead to the loss of a competitive advantage.

The second startling finding that surveys report in connection with cybercrime is the assumption that three quarters of companies in Germany have been the target of a hacker attack at some point over the past two years. That would be more than two million attempted attacks, or, to put it plainly: criminal offenses! Because "hacking" is not a trivial offense.

It is all the more incomprehensible that computer centers are housed in a company's high-security area while the data lines are only inadequately protected. After all, the biggest security risk is still the human being at the workstation. In every company there is still at least one person who would click on every link in an email.

Security is not a static quality goal. When processes in an organization change, new security gaps quickly appear as well. This is especially true in agile companies, where fast decisions, short learning phases, constant course corrections and a high pace of development are part of the corporate culture. When planning their data centers, CIOs therefore have to keep a constant eye on the living organization they are meant to support. Many midsize companies, however, are overwhelmed by this.

In the cloud, by contrast, these security aspects are highly automated. Microsoft platforms such as Azure, Office 365 and Dynamics 365 consist of globally distributed, scalable data centers in which a range of services from IaaS to SaaS is offered. For these data centers and services, Microsoft has obtained more than 53 different certifications from a wide variety of organizations. (More on this at this link: https://www.microsoft.com/de-de/trustcenter) They include global certifications such as ISO 27001, ISO 27018 and SOC 3, industry-specific certifications such as HIPAA, HITRUST and MPAA, and coverage of regional requirements such as the EU Model Clauses, Canadian privacy law, and the IT-Grundschutz catalog and C5 attestation of Germany's Federal Office for Information Security (BSI).

As a leading cloud provider, Microsoft also guarantees that the Microsoft cloud services will be compliant with the General Data Protection Regulation (GDPR, in German DSGVO) by the time it takes effect on May 25, 2018. That includes products such as Office 365, Dynamics 365, Microsoft Azure, SQL Server, Enterprise Mobility + Security (EMS), Windows 10 and Microsoft 365. The goals of the GDPR are in line with Microsoft's existing commitments to security, data protection and transparency. Microsoft's data centers worldwide use uniform, audited and proven technologies and offer the same service levels and security standards, for example data encryption based on current SSL/TLS protocols. The Microsoft Cloud thus offers a secure path to GDPR compliance.

In addition, with the Microsoft Cyber Defense Operations Center we invest more than one billion dollars every year in proactively defending against security threats. That includes treating security and data protection as a core component of all development and operations processes from the very beginning. With the help of machine learning and artificial intelligence, we can detect even attack scenarios that are not yet known or described at an early stage.

It is downright negligent to do without such a security environment. With Microsoft Azure, the cloud is no less reliable than a company's own data center. On the contrary, the cloud protects against attacks that could only be fended off with in-house security features at immense cost. After all, you don't fight a complex immune deficiency with remedies from the household medicine cabinet.

 

 

[GDPRDemopalooza] Azure Information Protection [Discover]


Based on the GDPR / DSGVO Demopalooza, here is the demo for Azure Information Protection (AIP) in the context of "Discover". AIP in the context of "Protect" will be covered in a later post.

As always, split into two parts: the "Why" and the "How"

Why

 

In the context of the GDPR / DSGVO it is absolutely necessary to identify which data (possibly) contains personally identifiable information (PII). If I do not know this, I would have to treat all data the same, and therefore subject it all to the highest possible security and the strictest processes. That is neither sensible nor worth the effort.

It is therefore unavoidable to perform data classification, automated and/or manual. The main challenge here is not so much the technology (that can be solved 😉) as the process architecture and its implementation (that can be solved too, with a lot of money! 😛). In other words, it is essential that the processes and requirements for data classification are defined precisely, and that both the employees and the technology to be used are taken into account.

This is exactly where Azure Information Protection (AIP) comes in with its labeling concept. Based on a classification definition, for example what counts as "confidential", what counts as "strictly confidential" and what counts as "PII", so-called labels are described. Using these labels, data can be permanently assigned to the corresponding classes, and rules can then be defined based on the classes, for example for encryption, retention, logging, access protection, and so on.

These labels can be set manually or automatically (for example based on the content, the storage location [such as the "HR SharePoint"], or the author). How this works is described in this demo script.

@Interested customers: we are happy to help you find partners who can support this topic end to end. Please contact your dedicated Microsoft representative.

@Interested partners: we are happy to help you build the necessary readiness on this topic so that you can implement it with and for customers. Please contact your PDM/PTS, or Andreas, or me directly.

How

  1. Open an in-private browser window
  2. Navigate to the Azure Information Protection portal
    AIP Global Policy
  3. Explain: here you see a classification architecture suggested by Microsoft. It can of course be adapted, extended or deleted.
  4. For the GDPR (but also for other reasons) it is useful, indeed necessary, to define rules for how labels are used, for example that every document and every email must carry a label and, of course, that when a label is downgraded (for example from "Confidential" to "General") the user is warned that they are downgrading the class and an audit-ready log entry is created (note: employee training, reporting, the obligation to provide evidence, and don't forget the works council! 😉). It must of course also be possible to correct misclassifications; the matching PowerPoint dialog is shown here:
    Lower the Label dialog
  5. Now click on the "Credit Card Data" policy and then directly on "+Add a new condition"
    Policy Settings
  6. Here the policy can be configured: its appearance [that is, its color], its description [in principle this is multilanguage-capable, but here it is only defined for the default language], and its behavior when an AIP-aware client is used, for example showing a header/footer/watermark in protected documents; see the screenshot under step 4, bottom left corner, "Classified as Microsoft Confidential".
  7. Currently there are still two places where AIP can be configured: Azure and Office 365. This will most likely change, but for now we have to live with it, so we now switch to the AIP controls under Office 365:
    AIP Labels O365
  8. As you can see, what is listed here does not (yet) match what is in the Azure portal. However, there are additional options here, especially regarding retention policies. These are important in the context of the GDPR, because PII may only be used and stored for the defined purpose and for its duration. There may, however, be other reasons (other laws and regulations) why certain data may only be deleted after, say, 10 years. We can achieve this with such a retention policy. To do so, click "+Create a label" and fill in the descriptive fields:
    Retention Label definition
  9. Now configure the retention settings:
    Detailed Retention settings
  10. Finally, review the settings and create the label.

Add Conditional Access to your Windows 10 VPN with Intune and Azure AD


I recently published a post on setting up your own Windows 10 VPN lab with instructions to build a lab environment needed to start playing with the Windows 10 VPN – specifically using Intune to configure cool features like app-triggered VPN.

This post is an add-on, so I suggest you start at my first post, then come back.

In this VPN scenario, Windows 10 clients no longer authenticate using a certificate issued from your on-prem CA. Instead, when a user launches the VPN, Windows reaches out to Azure AD, authenticates via modern auth and asks for a short-lived certificate. Azure AD looks at the authentication session and compares it against the conditional access policy(s) you have set up. If it passes, the client gets a cert issued from Azure AD that is good for an hour; if not... no VPN for you.

On the Intune side we no longer need to deploy a user certificate because Azure AD will handle that. We just need to configure and deploy the VPN profile, so that Windows clients know which certificate to present to your Radius server when connecting. On the AAD side we'll need to configure a couple of things – VPN Connectivity (ie: download a trusted CA) and the Conditional Access policy itself.

Simple!

In this post we will:

  1. Configure AZURE AD as a trusted Certificate Authority for our clients
  2. Create a conditional Access policy in Azure AD, and specify that devices have to be enrolled and compliant in Intune before being issued a certificate they can use for VPN
  3. Configure the VPN profile in Intune in such a way that it leverages the Azure AD issued certificate instead of one that comes from your internal PKI
  4. Test connecting to the VPN with a compliant device and a non-compliant device

Step 1 - Configure AZURE AD VPN Connectivity

Log into the azure portal, go to Azure Active Directory then conditional access. Under VPN Connectivity, create a new certificate

Validity = One Year

Primary= Yes

Download Certificate, and save it somewhere handy on your Network Policy Server (NPS)

On the NPS Server, Use Certutil commands to import into RootCA and NtAuthCA stores:

Certutil -dspublish -f AADTrustedRootCert.cer RootCA
Certutil -dspublish -f AADTrustedRootCert.cer NTAuthCA

One additional step required on the NPS server is configuring it to ignore revocation checking. The reason for this is that the AAD issued certificate doesn't have any CRL information.

To do this, you will need to set the following registry keys and then reboot.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13

IgnoreNoRevocationCheck = 1
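If you prefer to script the change, a minimal sketch of setting that value with PowerShell (same key path as above; reboot the NPS server afterwards):

# Set IgnoreNoRevocationCheck = 1 (DWORD) on the NPS server, then reboot
$key = "HKLM:\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13"
New-ItemProperty -Path $key -Name "IgnoreNoRevocationCheck" -Value 1 -PropertyType DWord -Force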

If you don't configure this, your Windows 10 client won't be allowed to connect with an error like this:


and you will receive an error in the radius server event log:

"The revocation function was unable to check revocation for the certificate."


Step 2 – Create an Azure AD Conditional Access Policy

In the Azure AD blade, Create a new Conditional Access Policy:

In my test policy, I want to make sure that all VPN connections from the "VPN Users" group get stopped at the front door. I want to make sure that the Windows 10 devices that connect are compliant with my Intune compliance policies.

Name: VPN CA Policy

Users and Groups: VPN Users

Cloud Apps: "VPN Server"

Conditions: blank

Grant: Grant Access

     Require Device to be marked as compliant

Session: blank

Note: You could also use other controls like Require MFA or Require Hybrid Azure AD in this scenario. (Require approved client app doesn't make sense in this scenario).

Don't forget to enable the policy when you are happy with it.

Now that you have a conditional access policy in place, you need a way for devices to be marked "Compliant" in Azure AD so that they can get access to VPN.

NOTE: A recent change in the Intune service means that all devices require a compliance policy (even if it's blank) to be eligible for a "Compliant" status.

So head to the Intune portal, go to Device Compliance, Policies, Create Policy and Create a new Compliance Policy without configuring any Settings in it.

Now Assign it to VPN Users group (or the "All Users" assignment).

Step 3 – Create a VPN Profile in Intune

Go to the Intune Portal and create a new VPN Profile for Windows 10. You can use exactly the same VPN Profile that we created in my last post…. with one key addition.

Conditional Access: Enable

When you are happy with the VPN Profile, assign it to the VPN Users group and test it out.

Step 4 – Test it

On an Intune enrolled Windows 10 device, trigger a Sync and wait for the new VPN profile to be installed. Then attempt to connect the VPN.

After you have successfully connected, open Certmgr.msc. You should see a newly provisioned certificate in there issued by "Microsoft VPN root CA gen 1". Its expiry date will be 60 minutes from when it was first requested.
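If you'd rather check from PowerShell than certmgr.msc, a hedged one-liner that filters the user store by that issuer name could look like this:

Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Issuer -like "*Microsoft VPN root CA*" } | Select-Object Subject, Issuer, NotAfter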

Now to test access on a non-compliant device….

Back in the Intune console, edit your existing, deployed compliance policy so that devices require encryption to be compliant

Return to the test Win 10 device and download the company portal app. Select the Win10 device you are using and "Check Compliance" and note that the device is not compliant because it isn't encrypted.


Now test to see if you can connect to the VPN.

If the device is not-compliant You will be blocked with an OOPS message:

More details….

TIP: If you were allowed to connect, chances are you still have a valid certificate from the last test, you might need to wait for it to expire or delete it to speed things along.

To resolve, enable BitLocker on the device and then run the compliance check again.

TIP: If using a Hyper-V VM, you might need to shut the VM down and enable the virtual TPM first before it will let you enable BitLocker.

After enabling Bitlocker you should be able to obtain a user certificate from AAD and access the VPN again.

Summary:

This post was to demonstrate Azure AD Conditional Access and Intune working together to enable a nice remote access solution for your Windows 10 devices. I recommend trying out some of the other variations on this scenario – for example, if the User is dialling VPN from an untrusted region, require MFA and a compliant device. Or get fancy with the compliance policies and require more validation from the Windows 10 Device Health Attestation service to be marked compliant.

Don't forget that in a modern managed Windows 10 environment, a VPN isn't always needed – we can leverage cloud services like OneDrive/SharePoint Online for storing docs and Azure App Proxy for publishing internal web applications.

Bluetooth and DECT – A History of Wireless Devices


Many of us can think back to a time before the Internet, MTV ("Video Killed the Radio Star"), and the topic of this blog - personal wireless communications (cordless phones, wireless headsets, cellular phones, Bluetooth and DECT). Who amongst us can remember being tied via a "phone cord" to a telephone when making a phone call? Nowadays, if there are cords on a telephone, it is typically an Ethernet cable to our Internet Protocol phone (IP phone) or a charging cable for our cellular phone. Many of us prefer not to hold a cordless phone to our ear. Whether it be a true cordless phone or a cellular phone, we demand that it support a wireless headset so that we can be more mobile and have our hands free for other activities - like writing blogs ;-).

While I usually talk with customers about different types of Skype for Business certified VoIP phones, headsets, hosted telecom services in Office 365, etc., I find myself often saying terms as if everyone knows them and knows the history of them. I realize that this is not always the case and decided to take a step back in this blog to provide a brief understanding of how these different forms of technology developed. While researching devices to use in your organization or for your own personal use, after reading this blog it is my hope you will have more of an understanding about the features and technology each device advertises today.

As far back as the 1890s, when Marconi first developed his wireless telegraph, humans have wanted to communicate wirelessly. The ability to communicate at great distances had obvious benefits, and wireless communications allowed for this. Though it would take another 31 years (1927) before the first commercial radiotelephone service between Britain and the United States could be put into use. After that, it was another 19 years (1946) before it was used domestically in St. Louis. A year later the transistor was invented, and the ability to create wireless devices small enough to be held in your hand had begun. This began to change how people viewed wireless technology. Now the ability to move around with a "cordless phone" was within reach. Mobility while communicating would become commonplace as the cellular phone industry expanded. Fast forward to 1993, when Internet Protocol version 4 (IPv4) was established as the de facto standard for communications over the Internet in conjunction with the Transmission Control Protocol, thus giving us the popular TCP/IP tag.

IPv4 set the stage for wireless internet, which came about in 1997 when the Institute of Electrical and Electronics Engineers (IEEE) came out with their 802.11 standard for wireless local area networks. Meanwhile in Europe, around the same time period, the European Telecommunications Standards Institute (ETSI) was working on creating Digital European Cordless Telecommunications (DECT), also known as the Digital Enhanced Cordless Telecommunications standard. DECT is now the standard for all cordless phones in Europe. As DECT continued to evolve, its uses ranged from simple handsets to elaborate wireless networks in Europe. DECT 6.0 was coined in the United States for use in personal device communications. It should be noted that DECT 6.0 devices are not allowed in many European countries due to interference with public wireless systems.

In 1998 Bluetooth was developed in a joint venture by Ericsson, IBM, Nokia, Intel and Toshiba as a standard for wireless data exchange between handheld computers or cellular phones and stationary computers such as the common desktop of today. In 1999 the Wi-Fi Alliance was founded by 3Com (now HP), Aironet (now Cisco), Intersil, Lucent Technologies (now Nokia), Nokia, and Symbol Technologies (now Zebra), which then branded the phrase "WiFi" for WLAN or 802.11 communications, so as to differentiate Bluetooth, DECT and IEEE 802.11 technologies.

Because Bluetooth and DECT are both wireless technologies like WiFi, they are each restricted to certain frequencies within the Radio Frequency Spectrum. DECT 6.0 uses the 1.9 Gigahertz frequency while Bluetooth uses 2.4 Gigahertz and WiFi uses both 2.4 Gigahertz and 5 Gigahertz Frequencies.

The table below breaks down the power levels and frequency per wireless technology. Please note that there are many factors involved in each technology listed so the values will range depending on the device configuration and application.

When devices are using the same frequencies in close proximity, you run the risk of Radio Frequency Interference, or RFI. As it pertains to headsets, RFI can cause degradation in audio quality and even the inability to connect one device to another. Jabra, Plantronics and Sennheiser each have density studies that can help you determine the likelihood of encountering density conflicts caused by overcrowding and RFI. These density studies will help you understand where overcrowding may occur and how mixing DECT and Bluetooth headsets may solve the overcrowding issue. The density studies for Jabra, Plantronics and Sennheiser can be found by clicking on the following links: Jabra, Plantronics, Sennheiser.

The figure below shows an example of WiFi overcrowding. Bluetooth and DECT overcrowding would look the same if there were multiple devices within a specific area.

Figure 1

The question is not which we should use, Bluetooth or DECT, but how many of each, and where, to avoid density and overcrowding issues. Jabra, Plantronics and Sennheiser each offer both DECT and Bluetooth versions of their headsets. Below are examples of each:

Jabra's Pro 9470 Series comes in both DECT 6.0 and Bluetooth versions.

 

As does the Plantronics Savi 700 series

 

While Sennheiser's MB Pro 2 and SD Pro 2 are Bluetooth and DECT 6.0, respectively.

 

 

 

 

 

 

 

 

 

Note: The historical facts stated above have been referenced from the Wireless History Foundation and Wikipedia.

 

2018 Microsoft Azure Community Study Groups


Interested in earning your Microsoft Azure MCSA, MCSD, or MCSE certification? Need to study in a way that complements your busy schedule? The Microsoft Azure Community Study Group is what you've been looking for!

Microsoft is hosting a community-based study group that helps you prepare for the Microsoft Azure certification exams. Each study group lasts 8-12 weeks depending on the number of exam objectives, and each week you'll have self-study homework to complete at your own pace in preparation for our calls on Friday. We'll meet to discuss specific exam objectives, and you can interact with Microsoft experts and other students. During the week, ask questions in our Yammer group so that your growth stays on track in this fast-paced 300-level learning environment. What a great way to learn!

Registration

Registration is now open for the following study groups:

Exam | Registration Link | Dates

70-532: Developing Microsoft Azure Solutions | https://aka.ms/532asg | March 23 – May 24, 2018

70-533: Implementing Microsoft Azure Infrastructure Solutions | https://aka.ms/533asg | January 12 – April 13, 2018

70-535: Architecting Microsoft Azure Solutions | https://aka.ms/535asg | January 12 – May 24, 2018

70-483: Programming in C# | https://aka.ms/483asg | January 12 – March 2, 2018

70-486: Developing ASP.NET MVC Web Applications | https://aka.ms/486asg | January 12 – March 23, 2018

70-487: Developing Microsoft Azure and Web Services | https://aka.ms/487asg | March 9 – May 11, 2018

Seating is *very limited* for this event series, so please register as soon as possible. Once your registration is complete, join our private Yammer group, where we encourage you to interact with the other students in the class.

Thank you for your interest in building your knowledge and pursuing a Microsoft Azure Certification. We look forward to seeing you online!

SDeming 2017  Steve


Simple PowerShell Network Capture Tool


Hello all. Jacob Lavender here again for the Ask PFE Platforms team to share with you a little sample tool that I've put together to help with performing network captures. This all started when I was attempting to develop an effective method to perform network traces within an air-gapped network. My solution had to allow me to use all native functionality of Windows without access to any network capture tools such as Message Analyzer, NETMON, or Wireshark. In addition, I'd need to be able to collect the trace files into a single location and move them to another network for analysis.

Well, I know the commands. The challenge is building a solution that junior admins can use easily. Several weeks later I found the need for it again with another customer supporting Office 365. This process resulted in the tool discussed in this post.

Time and time again, it seems that we've spent a great deal of effort on the subject of network captures. Why? Because one of the first questions a PFE is going to ask you when you troubleshoot an issue is whether you have network captures. Same is true when you go through support via other channels. We always want them, seem to never get enough of them, and often they are not fun to get, especially when dealing with multiple end points.

So, let's briefly outline what we're going to cover in this discussion:

Topic #1: How to get the tool.

Topic #2: Purpose of the tool.

Topic #3: Requirements of the tool.

Topic #4: How to use the tool.

Topic #5: Limitations of the tool.

Topic #6: How can I customize the tool?

Topic #7: References and recommendations for additional reading.

Compatible Operating Systems:

  • Windows 7 SP1
  • Windows 8
  • Windows 10
  • Windows Server 2008 R2
  • Windows Server 2012 R2
  • Windows Server 2016

Topic #1: Where can I get this tool?

https://gallery.technet.microsoft.com/Remote-Network-Capture-8fa747ba

Topic #2: What is the purpose of this tool as opposed to other tools available?

This certainly should be the first question. This tool is focused toward delivering an easy to understand approach to obtaining network captures on remote machines utilizing PowerShell and PowerShell Remoting.

I often encounter scenarios where utilizing an application such as Message Analyzer, NETMON, or Wireshark to conduct network captures is not an option. Much of the time this is due to security restrictions which make it very difficult to get approval to utilize these tools on the network. Alternatively, it could be due to the fact that the issue is with an end user workstation who might be located thousands of miles from you and loading a network capture utility on that end point makes ZERO sense, much less trying to walk an end user through using it. Now before we go too much further, both Message Analyzer and Wireshark can help on these fronts. So if those are available to you, I'd recommend you look into them, but of course only after you've read my entire post.

Due to this, it is ideal to have an effective method to execute the built-in utilities of Windows. Therein lie NetEventSession and NETSH TRACE. Both of these have been well documented. I'll point out some items within Topic #7.
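For orientation, this is roughly what those two built-in methods look like when run directly on a machine (the file paths and session name here are just illustrative placeholders, not what the tool itself uses):

# Built-in NETSH TRACE: start a capture, reproduce the issue, then stop (produces an ETL plus a CAB)
netsh trace start capture=yes tracefile=C:\Temp\trace.etl
netsh trace stop

# PowerShell NetEventSession equivalent
New-NetEventSession -Name "Capture" -LocalFilePath "C:\Temp\trace.etl"
Add-NetEventPacketCaptureProvider -SessionName "Capture"
Start-NetEventSession -Name "Capture"
Stop-NetEventSession -Name "Capture"
Remove-NetEventSession -Name "Capture"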

The specific target gaps this tool is focused toward:

  • A simple, easy-to-utilize tool which can be executed easily by staff ranging from junior up to principal level.
  • A means by which security staff can see and know the underlying code, thereby establishing confidence in its intent.
  • A lightweight utility which can be moved in the form of a text file.

With that said, this tool is not meant to replace functionality which is found in any established tool. Rather it is intended to provide support in scenarios where those tools are not available to the administrator.

Topic #3: What are the requirements to utilize this tool?

  1. An account with administrator rights on the target machine(s).
  2. An established file share on the network which is accessible by both
    1. The workstation the tool is executed from, and
    2. The target machine where the trace is conducted
  3. Microsoft Message Analyzer to open and view the ETL file(s) generated during the trace process.
    1. Message Analyzer does not have to be within the environment the traces were conducted in. Instead, the trace files can be moved to a workstation with Message Analyzer installed.
  4. Remote Management Enabled:
    1. winrm quickconfig
    2. GPO:
      https://www.techrepublic.com/article/how-to-enable-powershell-remoting-via-group-policy/

Note: Technically, we don't have to have Message Analyzer or any other tool to search within the ETL file and find data. However, to do so, you must have an advanced understanding of what you're looking for. Take a better look at Ed Wilson's great post
from the Hey, Scripting Guy! Blog:

https://blogs.technet.microsoft.com/heyscriptingguy/2015/10/14/packet-sniffing-with-powershell-looking-at-messages/

Topic #4: How do I use this tool?

Fortunately, this is not too difficult. First, ensure that the requirements to execute this tool have been met. Once you have the tool placed on the machine you plan to execute from (not the target computer), execute the PS1 file.

PFE Pro Tip: I prefer to load the file with Windows PowerShell ISE (or your preferred scripting environment).

Note: You do not have to run the tool as an administrator. Rather, the credentials supplied when you execute the tool must be an administrator on the target computer.

Additional Note: The tool is built utilizing functions as opposed to a long script. This was intentional, so as to allow the samples within the tool to be transported to other scripts for further use – just easier for me. While I present the use of the tool, I'll also discuss the underlying functions.

Now, that I have the tool loaded with ISE, let's see what it looks like.

  1. The first screen we will see is the legal disclaimer. These are always the best. I look forward to executing tools and programs just for the legal disclaimers. In my case, I'm going to accept. I will warn you that if you don't accept, then the tool will exit. I'm sure you're shocked.

  2. Ok, now to the good stuff. Behind the scenes the tool is going to clear any stored credentials within the variable $credentials. If you have anything stored in that variable within the same run space as this script, buckle up. You're going to lose it. Just FYI.
  3. Next, the tool is now going to ask you for the credentials you wish to use against the target computer. Once you supply the credentials, the tool is going to validate that the credentials provided are not null, and if they are not, it will test their validity with a simple Get-ADDomain query. If these tests fail, the tool will wag the finger of shame at you.

  4. After supplying the credentials, we will be asked to supply a file share to move the capture files to.

Note: The file share must be accessible from both the local client and the target computers. Here is why:

  • The tool is going to validate that the path you provided is available on the network. I'm assuming that after the capture is complete you will want to have access to the files. However, if the local machine is unable to validate the path, it will give you the option to force the use of the path.
  • Second, the tool is going to attempt to validate the file share path on the target computer. If the path is not accessible by that computer, it will give you the option to update the path. If you do not update the path it will leave a copy of the trace files on the target computer.
  5. Next, we will specify the target machine. Once you specify the machine, the tool will validate this machine with DNS by performing a query. If the query fails, you will have to correct the machine name. The assumption is that if the query fails, the machine won't be accessible by FQDN (probably a safe assumption unless you're using a hosts file, which is outside the scope of this guide).

  6. Next, we will specify how long we want the network capture to run. The value is in seconds.

Note: As stated by the tool, capture files can take up a great deal of space. However, the defaults within the tool are not very large.

You can customize the values of the network captures. The commands are located within the Start-NETSH and Start-Event functions.

For the purpose of this tool, I utilized the defaults with NO customization.

  7. Now, once we hit enter here, the tool is going to set up a PowerShell session with the target machine. In the background, there are a few things it's doing:
  • It establishes a PSSession.
  • It establishes the boot volume drive letter.
  • It sets a working path of <bootvolume>:\TEMP\Tracefiles. If this path does not exist, it creates it.
  8. Next, we must specify a drive letter to use for mounting the network share (from Step 4). State any drive letter you want that isn't already in use.

Now, you might be asking why are we mounting a drive letter instead of using the Copy-Item command to the network path. Yeah, I tried that without thinking about it and got a big giant ACCESS DENIED. This is due to the fact that we can't double-hop with our credentials. Kerberos steps in and screams HALT! HALT WITH YOUR DOUBLE-HOP COMMAND!

Great article discussing this problem:

https://blogs.technet.microsoft.com/ashleymcglone/2016/08/30/powershell-remoting-kerberos-double-hop-solved-securely/

If you read the article, you'll see there are multiple ways to address this. I opted for the simple path of just mounting the network share as a drive letter. Simple. Easy. Can be used again without special configuration of computers, servers, or objects in AD. Keep it simple, right? Additionally, we want to minimize any special configuration of systems to accomplish this.
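For illustration, mounting the share explicitly with fresh credentials from inside a remote session boils down to something like this hedged sketch (the share path, drive letter, and local folder below are placeholders, not necessarily what the tool uses):

# Run inside the PSSession on the target computer; $credentials are the ones supplied earlier
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\fileserver\captures" -Credential $credentials
Copy-Item -Path "C:\TEMP\Tracefiles\*" -Destination "Z:\" -Force
Remove-PSDrive -Name Z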

Now, again in the background the tool is performing a little extra logic:

  • It first validates that a drive is not already mounted with the network path provided from Step 4. That would be silly to do it twice.
  • Next, once you provide a drive letter, it validates that you didn't select one already in use.

Great. Now to the really good stuff.

  9. Our next screen presents us with the option to select the capture method we wish to use. Both have advantages and disadvantages. See the references section for details on these. Really, you should read those articles before selecting a capture method if you are not already familiar with them.

For this example, I'm selecting N for NETSH TRACE. NETSH TRACE provides a CAB file by default which I'll show you in Step 10.

Again, we have some behind the scenes logic happening.

Windows 7 and Windows Server 2008 R2 do not have the NetEventSession option available. So, the utility is going to determine which version of Windows the target computer is running. If the computer is either Win7 or W2K8R2, it will not allow you to use NetEventSession. It will force the use of NETSH TRACE upon you.

NOTE: Also note that the utility is going to provide a report to you at the end of execution. Within that report it includes the running processes on the target computer.

Why?

Well, one of my favorite features of NETMON and Message Analyzer is the conversation tree. I like to know which of my applications are talking and to whom. This is performed on the backend by the application to map PIDs to executables. Well, the capture file might not tell me the executable, but it does give me the PID. So, by looking at the report I can identify which PID to focus on and then use that when looking at the network trace file in Message Analyzer. Yay.

  9. OK, as soon as we select which capture method we are going to use, the tool executes the capture on the remote computer and runs it for the length of time previously specified.

As you can see, it states the location. On the target computer we can even see the temporary files which are put in place for the capture:

Once the specified time is reached, the utility sends a stop command to the target computer to end the network capture:

NOTE: In the event that the utility is disconnected from the target computer prior to the stop command being issued, you can issue the commands locally at the target computer itself:

  • NETSH TRACE: netsh trace stop
  • PowerShell NetEventSession: Get-NetEventSession | Stop-NetEventSession

Finally, the tool will move the files used for the trace to the specified network share, and then remove them from the target computer.
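
Conceptually (a sketch with placeholder paths and drive letter, not the tool's exact commands), that copy-and-cleanup step boils down to:

# Run in the remote session: copy the trace output to the mounted drive, then clean up
Invoke-Command -Session $Session -ScriptBlock {
    param($WorkingPath, $DriveLetter)
    Copy-Item -Path (Join-Path $WorkingPath '*') -Destination ($DriveLetter + ':\') -Force
    Remove-Item -Path $WorkingPath -Recurse -Force
} -ArgumentList 'C:\TEMP\Tracefiles', 'Z'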

  10. Next, we see that the tool completed its network trace and has placed a report for us in the C:\Temp directory on the local machine we ran the tool from.

If we open that report file, we're going to be presented with this (there are more than two processes within the actual report):

  11. Finally, we are now set to utilize the ETL files as necessary. In my case, I've opened an ETL that was generated on a Windows Server 2008 R2 computer using NETSH TRACE, and I'm looking at the LSASS.EXE process. 100 extra points if you can identify what this process is responsible for.

And finally, what's in that CAB file? Lots of goodies. You're going to want to explore that to better understand all the extra information which is provided about the system from this file.

Topic #5: What are the limitations of the tool?

  1. The tool, at present, can only target a single computer at a time. If you need to target multiple machines, you will need to run a separate instance for each (multiple PowerShell sessions).
    1. I would recommend getting each instance to the point of executing the trace, and then running them all at the same time if you are attempting to coordinate a trace amongst several machines.
    2. I'm hoping to release a new version in the future which has the correct arrays and foreach loops built. We're just not there yet.
  2. The variables within the script utilize memory space within the script; they are not set to global. However, I haven't tested this scenario in depth, so I would recommend giving that a test prior to trying it against production machines.
  3. Again, the tool is not meant to replace any other well-established application. Instead, this tool is meant only to fill a niche. You will have to evaluate the best suitable option for your purposes.
  4. The NETSH TRACE and NetEventSession commands have not been customized. This was intentional. I highly recommend that you read some of the additional content found in Topic #6 regarding the scenarios and advanced configuration options available within these commands.

 

Topic #6: How can I customize the tool?

Well, we do need to address some customization options. To do so, you simply need to modify the command invoked against the target computer within the trace type's respective function. The function names are called out below.

NETSH TRACE Customization

Function: Start-NETSH

First, let's start with NETSH TRACE. Yong Rhee has a great article discussing some of the functionality within NETSH TRACE, specifically he uses scenarios:

https://blogs.technet.microsoft.com/yongrhee/2012/12/01/network-tracing-packet-sniffing-built-in-to-windows-server-2008-r2-and-windows-server-2012/

Using NETSH to Manage Traces:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd569142(v=vs.85).aspx

Let's look at some of the built-in scenarios. To do so, execute netsh trace show scenarios:

Next, we can view some of the configuration of the providers within the scenarios using netsh trace show scenario <scenario name>, such as netsh trace show scenario LAN:

From this, we can see that one of the providers is Microsoft-Windows-L2NACP, which is currently configured to event logging level (4), Informational. Well, what if I wanted to configure that to be higher or lower? I can customize the NETSH TRACE command to accommodate this:

netsh trace start Scenario=Lan Provider=Microsoft-Windows-L2NACP Level=5 Capture=Yes TraceFile=$tracefile

This would increase the logging level to (5), Verbose:

Note: This is just one sample of how the NETSH TRACE option within the tool can be customized. There are plenty of other options as well. I strongly recommend that you review Netsh Commands for Network Trace:

https://technet.microsoft.com/en-us/library/jj129382(v=ws.11).aspx

NetEventSession Customization

Function: Start-NetEvent

Fundamentally, this is going to be the same as customizing NETSH TRACE. You simply have to know what you're looking for. In this case, we are going to focus on two aspects.

Configuring the NetEventSession: This overall is simple. As a whole we're not going to change too much on this. I'd recommend reviewing the New-NetEventSession documentation:

https://docs.microsoft.com/en-us/powershell/module/neteventpacketcapture/new-neteventsession?view=win10-ps

Now, the real meat of the capture: the NetEventProvider. The default used natively within the tool is the Microsoft-Windows-TCPIP provider. However, there are quite a few others available. (You may want to send the output of the following command to a file, as there will be several.)

From PowerShell, execute:

Get-NetEventProvider -ShowInstalled

What you should notice is that the providers are all set with a default configuration. You can adjust these as necessary as well using:

Set-NetEventProvider

https://docs.microsoft.com/en-us/powershell/module/neteventpacketcapture/set-neteventprovider?view=win10-ps

https://technet.microsoft.com/en-us/library/dn268515(v=wps.630).aspx

By adding an additional Invoke-Command line within the Start-NetEvent function, you can easily customize the provider(s) which you wish to use within the network capture session.
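
As one hypothetical example (the session name, provider, and file path below are illustrative and not the tool's actual values), a customized capture using the SMB client provider might look like this:

# Hypothetical customization: capture a different provider on the target computer
Invoke-Command -Session $Session -ScriptBlock {
    New-NetEventSession -Name 'RemoteCapture' -LocalFilePath 'C:\TEMP\Tracefiles\capture.etl' | Out-Null
    Add-NetEventProvider -Name 'Microsoft-Windows-SMBClient' -SessionName 'RemoteCapture'
    Start-NetEventSession -Name 'RemoteCapture'
}

# ...and later, stopping the session flushes the ETL file
Invoke-Command -Session $Session -ScriptBlock { Stop-NetEventSession -Name 'RemoteCapture' }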

Customization Conclusion: For both NETSH TRACE and NetEventSession, I would recommend making adjustments to the commands locally on a test machine and validating the results prior to executing against a remote machine. Once you know the command syntax is correct and the output is what you desire then incorporate that customization back into the tool as necessary.

Topic #7: References and Recommendations for Additional Reading:

  1. Learning how to use Message Analyzer.
  2. Introduction to Network Trace Analysis Using Microsoft Message Analyzer:
    1. Part 2: https://blogs.technet.microsoft.com/askpfeplat/2014/10/12/introduction-to-network-trace-analysis-using-microsoft-message-analyzer-part-2/
  3. Michael Rendino's two posts:
    1. Basic Network Capture Methods: https://blogs.technet.microsoft.com/askpfeplat/2016/12/27/basic-network-capture-methods/
    2. Network Capture Best Practices: https://blogs.technet.microsoft.com/askpfeplat/2017/04/04/network-capture-best-practices/
  4. Victor Zapata's post on Leveraging Windows Native Functionality to Capture Network Traces Remotely. A note on this post: it includes some sample material on running traces against multiple machines at once. I'd recommend exploring this a little.

 

Infrastructure + Security: Noteworthy News (December, 2017-Part 1)


Hello there! Stanislav Belov here to bring you the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis. Enjoy! 

Microsoft Azure
Transforming your VMware environment with Microsoft Azure

Microsoft on November 21, 2017, announced new services to facilitate your VMware migration to Azure.

  • On November 27, 2017, Azure Migrate, a free service, will be broadly available to all Azure customers. Azure Migrate can discover your on-premises VMware-based applications without requiring any changes to your VMware environment.
  • Integrate VMware workloads with Azure services.
  • Host VMware infrastructure with VMware virtualization on Azure.
Free e-book download: Enterprise Cloud Strategy
In the second edition of the Enterprise Cloud Strategy e-book, we've taken the essential information for how to establish a strategy and execute your enterprise cloud migration and put it all in one place. This valuable resource for IT and business leaders provides a comprehensive look at moving to the cloud, as well as specific guidance on topics like prioritizing app migration, working with stakeholders, and cloud architectural blueprints. Download now.
Azure Hybrid Benefit for Windows Server
For customers with Software Assurance, Azure Hybrid Benefit for Windows Server allows you to use your on-premises Windows Server licenses and run Windows virtual machines on Azure at a reduced cost. You can use Azure Hybrid Benefit for Windows Server to deploy new virtual machines from any Azure-supported platform Windows Server image or Windows custom images, as long as the image doesn't come with additional software such as SQL Server or third-party marketplace images.
Azure Reserved VM Instances (RIs) are generally available for customers worldwide

Effective November 16th, Azure RIs enable you to reserve virtual machines on a one- or three-year term and provide up to 72% cost savings versus pay-as-you-go prices.

Azure RIs give you price predictability and help improve your budgeting and forecasting. Azure RIs also provide unprecedented flexibility should your business needs change. We've made it easy to exchange your RIs and make changes such as region or VM family, and unlike other cloud providers, you can cancel Azure RIs at any time and get a refund.

Azure Interactives

Stay current with a constantly growing scope of Azure services and features. Learn how to manage and protect your Azure resources efficiently and how to solve common design challenges.

Azure AD Pass-through Authentication

Azure Active Directory (Azure AD) Pass-through Authentication allows your users to sign in to both on-premises and cloud-based applications using the same passwords. This feature provides your users a better experience - one less password to remember, and reduces IT helpdesk costs because your users are less likely to forget how to sign in. When users sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.

Windows Server
Why use Storage Replica?
Storage Replica offers new disaster recovery and preparedness capabilities in Windows Server 2016 Datacenter Edition. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data on different racks, floors, buildings, campuses, counties, and cities. After a disaster strikes, all data will exist elsewhere without any possibility of loss. The same applies before a disaster strikes; Storage Replica offers you the ability to switch workloads to safe locations prior to catastrophes when granted a few moments warning - again, with no data loss.

Storage Replica may allow you to decommission existing file replication systems such as DFS Replication that were pressed into duty as low-end disaster recovery solutions. While DFS Replication works well over extremely low bandwidth networks, its latency is very high - often measured in hours or days. This is caused by its requirement for files to close and its artificial throttles meant to prevent network congestion. With those design characteristics, the newest and hottest files in a DFS Replication replica are the least likely to replicate. Storage Replica operates below the file level and has none of these restrictions.

Windows Client
Announcing Windows 10 Insider Preview Build 17035 for PC

Microsoft on November 8, 2017, released Windows 10 Insider Preview Build 17035 for PC to Windows Insiders in the Fast ring and for those who opted in to Skip Ahead. The new build features an ability to mute a tab that is playing media in Microsoft Edge, an ability to wirelessly share files and URLs to nearby PCs using the Near Share feature, improvements to Windows Update, and more.

Move away from passwords, deploy Windows Hello. Today!

Since Windows 10 originally released, we have continued to make significant investments in Windows Hello for Business, making it easier to deploy and easier to use, and we are seeing strong momentum with adoption and usage of Windows Hello. As we shared at the Ignite 2017 conference, Windows Hello is being used by over 37 million users, and more than 200 commercial customers have started deployments of Windows Hello for Business. As many would expect, Microsoft currently runs the world's largest production deployment, with over 100,000 users; however, we are just one of many running at scale, with the second largest having just reached 25,000 users.

Security
Stopping ransomware where it counts: Protecting your data with Controlled folder access

Windows Defender Exploit Guard is a new set of host intrusion prevention capabilities included with Windows 10 Fall Creators Update. One of its features, Controlled folder access, stops ransomware in its tracks by preventing unauthorized access to your important files.

Defending against ransomware using system design

Many of the risks associated with ransomware and worm malware can be alleviated through systems design. Referring to our now codified list of vulnerabilities, we know that our solution must:

  • Limit the number (and value) of potential targets that an infected machine can contact.
  • Limit exposure of reusable credentials that grant administrative authorization to potential victim machines.
  • Prevent infected identities from damaging or destroying data.
  • Limit unnecessary risk exposure to servers housing data.
Cybersecurity Reference Architecture & Strategies: How to Plan for and Implement a Cybersecurity Strategy

Planning and implementing a security strategy to protect a hybrid of on-premises and cloud assets against advanced cybersecurity threats is one of the greatest challenges facing information security organizations today.

Join Lex Thomas as he welcomes back Mark Simos to the show as they discuss how Microsoft has built a robust set of strategies and integrated capabilities to help you solve these challenges, so that you can gain a better understanding of how to build an identity security perimeter around your assets.

Securing Domain Controllers Against Attack
Domain controllers provide the physical storage for the AD DS database, in addition to providing the services and data that allow enterprises to effectively manage their servers, workstations, users, and applications. If privileged access to a domain controller is obtained by a malicious user, that user can modify, corrupt, or destroy the AD DS database and, by extension, all of the systems and accounts that are managed by Active Directory. Because domain controllers can read from and write to anything in the AD DS database, compromise of a domain controller means that your Active Directory forest can never be considered trustworthy again unless you are able to recover using a known good backup and to close the gaps that allowed the compromise in the process.
Cybersecurity Reference Strategies (Video)
Explore recommended strategies from Microsoft, built based on lessons learned from protecting our customers, our hyper-scale cloud services, and our own IT environment. Get the details on important trends, critical success criteria, best approaches, and technical capabilities to make these strategies real. Discover key learnings and guidance on strategies that cover visibility and control of cloud and mobile assets, moving to an identity security perimeter, balancing preventive measures and detection/response capabilities, focusing on the "cost of attack," protecting information, and applying military lessons learned.
How Microsoft protects against identity compromise (Video)
Identity sits at the very center of the enterprise threat detection ecosystem. Proper identity and access management is critical to protecting an organization, especially in the midst of a digital transformation. This is part three of the six-part Securing our Enterprise series, where Chief Information Security Officer Bret Arsenault shares how he and his team are managing identity compromise.
Vulnerabilities and Updates
#AVGater vulnerability does not affect Windows Defender Antivirus

On November 10, 2017, a vulnerability called #AVGater was discovered affecting some antivirus products. The vulnerability requires a non-administrator-level account to perform a restore of a quarantined file. Windows Defender Antivirus is not affected by this vulnerability.

Update 1711 for Configuration Manager Technical Preview Branch—Available Now!

Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. This month's new preview features include:

  • Improvements to the Run Task Sequence step
  • The option for user interaction when installing applications as system
SharePoint security fixes released with November 2017 PU and offered through Microsoft Update

The article identifies the KB articles of the security fixes released on November 14, 2017, for SharePoint 2010 Suite, SharePoint 2013 Suite, and SharePoint 2016 Suite.

November 2017 security update release

Microsoft on November 14, 2017, released security updates to provide additional protections against malicious attackers. By default, Windows 10 receives these updates automatically, and for customers running previous versions, Microsoft recommends that they turn on automatic updates as a best practice. More information about this month's security updates can be found in the Security Update Guide.

Support Lifecycle
The Azure AD admin experience in the classic Azure portal will retire on November 30, 2017. All Admin capabilities are available in the new Azure portal. The Azure Information Protection (or AIP, formerly Rights Management Service) admin experiences will also be retired in the Azure classic portal on November 30, but can be found here in the new Azure portal.
As Windows Azure Active Directory Sync (DirSync) and Azure AD Sync reached their end of support on April 13, 2017, it is time for customers to upgrade to Azure AD Connect, as DirSync will be deprecated at the end of December 2017. Azure AD Connect is the single solution replacing DirSync and Azure AD Sync and offers new functionality, feature enhancements, and support for new scenarios. Customers must upgrade to Azure AD Connect before January in order to continue to synchronize their on-premises identity data to Azure AD and Office 365. Beginning December 31, Azure AD will no longer accept communications from Windows Azure Active Directory Sync ("DirSync") and Microsoft Azure Active Directory Sync ("Azure AD Sync").
Microsoft Premier Support News
Application whitelisting is a powerful defense against malware, including ransomware, and has been widely advocated by security experts. Users are often tricked into running malicious content which allows adversaries to infiltrate their network. ​Application whitelisting defines what is trusted by the IT organization and only allows those trusted applications to run. The Onboarding Accelerator - Implementation of Application Whitelisting consists of 3 structured phases that will help customers identify locations which are susceptible to malware and implement AppLocker whitelisting policies customized to their environment, increasing their protection against such attacks.
A new SQL Server - Migration from Oracle Assessment is available to help customers assess what they need to migrate an Oracle database to SQL Server. Also new is WorkshopPLUS - SQL Server: AlwaysOn Availability Groups and Failover Cluster Instances - Setup and Configuration, which provides in-depth technical and architecture details of implementing the SQL Server AlwaysOn Availability Group (AG) feature in Azure and on-premises.

Join the US SMB Partner Insider call on Wednesday, December 6, 2017



Tim Tetrick

 

Join the Microsoft US team for the December SMB Partner Insider call this Wednesday, December 6, 2017 where you’ll get valuable, actionable information to help your Microsoft business grow. Plus, registration is open for the January through June Insider calls!

The December agenda will cover:

  • Insider Scoop: Covering events, training, offers in market, marketing campaign content and more
  • Technical Demo: Getting started with Microsoft 365
  • Cloud Enablement Desk: Learn about this resource that helps partners build and accelerate their Microsoft practice

STAY IN THE KNOW

We look forward to you joining us on the December 6 Partner Insider call!

ESE Deep Dive: Part 1: The Anatomy of an ESE database


hi!

Get your crash helmets on and strap into your seatbelts for a JET engine / ESE database special...

This is Linda Taylor, Senior AD Escalation Engineer from the UK here again. And WAIT...... I also somehow managed to persuade Brett Shirley to join me in this post. Brett is a Principal Software Engineer in the ESE Development team so you can be sure the information in this post is going to be deep and confusing but really interesting and useful and the kind you cannot find anywhere else :- )
BTW, Brett used to write blogs before he grew up and got very busy. And just for fun, you might find this old  “Brett” classic entertaining. I have never forgotten it. :- )
Back to today's post... this will be a rather more grown-up post, although we will talk about DITs, in a very scientific fashion.

In this post, we will start from the ground up and dive deep into the overall file format of an ESE database file, including practical skills with esentutl such as how to look at raw database pages. And as the title suggests, this is Part 1, so there will be more!

What is an ESE database?

Let's start basic. The Extensible Storage Engine (ESE), also known as JET Blue, is a database engine from Microsoft that does not speak SQL. And Brett also says… for those with a historical bent, or from academia, who remember 'before SQL' instead of 'NoSQL': ESE is modelled after the ISAMs (indexed sequential access method) that were in vogue in the mid-70s. ;-p
If you work with Active Directory (which you must do if you are reading this post 🙂) then you will (I hope!) know that it uses an ESE database. The respective binary is esent.dll (or, since Brett loves Exchange, ese.dll for the Exchange Server install). Applications like Active Directory are all ESE clients and use the JET APIs to access the ESE database.

[Diagram: applications such as Active Directory calling the JET APIs into the ESE engine and database – the ESE pieces are shown in blue]

This post will dive deep into the Blue parts above. The ESE side of things. AD is one huge client of ESE, but there are many other Windows components which use an ESE database (and non-Microsoft software too), so your knowledge in this area is actually very applicable for those other areas. Some examples are below:

[Diagram: examples of other Windows components that use ESE databases]

Tools

There are several built-in command line tools for looking into an ESE database and related files. 

  1. esentutl. This is a tool that ships in Windows Server by default for use with Active Directory, Certificate Authority and any other built-in ESE databases. This is what we will be using in this post; it can be used to look at any ESE database.

  2. eseutil. This is the Exchange version of the same and is typically installed in the Microsoft\Exchange\V15\Bin sub-directory of the Program Files directory.

  3. ntdsutil. This is a tool specifically for managing AD or AD LDS databases and cannot be used with generic ESE databases (such as the one produced by the Certificate Authority service). It is installed by default when you add the AD DS or AD LDS role.

For read operations such as dumping file or log headers, it doesn't matter which tool you use. But for operations which write to the database, you MUST use the matching tool for the application and version (for instance, it is not safe to run esentutl /r from Windows Server 2016 on a Windows Server 2008 DB). Further, throughout this article, if you are looking at an Exchange database instead, you should use eseutil.exe instead of esentutl.exe. For AD and AD LDS, always use ntdsutil or esentutl; they have different capabilities, so I use a mixture of both. And Brett says that if you think you can NOT keep the read operations straight from the write operations, play it safe and match the versions and application.

During this post, we will use an AD database as our victim example. We may use other ones, like ADLDS for variety in later posts.

Database logical format - Tables

Let’s start with the logical format. From a logical perspective, an ESE database is a set of tables which have rows and columns and indices.

Below is a visual of the list of tables from an AD database in Windows Server 2016. Different ESE databases will have different table names and use those tables in their own ways.

[Screenshot: the list of tables in a Windows Server 2016 AD database]

In this post, we won't go into the detail about the DNTs, PDNTs and how to analyze an AD database dump taken with LDP, because this is AD-specific and here we are going to look at the ESE-specific level. Also, there are other blogs and sources where this has already been explained, for example here on AskPFEPlat. However, if such a post is wanted, tell me and I will endeavor to write one!

It is also worth noting that all ESE databases have a table called MSysObjects and MSysObjectsShadow, which is a backup of MSysObjects. These are also known as "the catalog" of the database, and they store metadata about the client's schema of the database, i.e.:

  1. All the tables and their table names and where their associated B+ trees start in the database and other miscellaneous metadata.

  2. All the columns for each table and their names (of course), the type of data stored in them, and various schema constraints.

  3. All the indexes on the tables and their names, and where their associated B+ trees start in the database.

This is the boot-strap information for ESE to be able to service client requests for opening tables to eventually retrieve rows of data.

Database physical format

From a physical perspective, an ESE database is just a file on disk. It is a collection of fixed size pages arranged into B+ tree structures. Every database has its page size stamped in the header (and it can vary between different clients, AD uses 8 KB). At a high level it looks like this:

[Diagram: the database file as a sequence of fixed-size pages – header, shadow header, then numbered pages]

The first “page” is the Header (H).

The second “page” is a Shadow Header (SH) which is a copy of the header.

However, in ESE “page number” (also frequently abbreviated “pgno”) has a very specific meaning (and often shows up in ESE events) and the first NUMBERED page of the actual database is page number / pgno 1 but is actually the third “page” (if you are counting from the beginning :-).

From here on out though, we will not consider the header and shadow header proper pages, and page number 1 will be third page, at byte offset = <page size> * 2 = 8192 * 2 (for AD databases).
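
Put another way, for any numbered page the byte offset into the file works out to (pgno + 1) * page size, since the header and shadow header sit in front. A quick PowerShell sketch of that arithmetic:

# Sketch: where does a given pgno start in the file?
$pageSize = 8192                       # cbDbPage from the header dump; AD / AD LDS use 8 KB
$pgno     = 1
$offset   = ($pgno + 1) * $pageSize    # pgno 1 -> 16384, i.e. the third 8 KB "page" in the file
"pgno $pgno starts at byte offset $offset"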

If you don’t know the page size, you can dump the database header with esentutl /mh.

Here is a dump of the header for an NTDS.DIT file – the AD database:

[Screenshot: esentutl /mh output for an NTDS.DIT file, including the cbDbPage value]

The page size is the cbDbPage. AD and ADLDS uses a page size of 8k. Other databases use different page sizes.

A caveat is that to be able to do this, the database must not be in use. So, you’d have to stop the NTDS service on the DC or run esentutl on an offline copy of the database.

But the good news is that in WS2016 and above we can now dump a LIVE DB header with the /vss switch! The command you need would be "esentutl /mh ntds.dit /vss” (note: must be run as administrator).

All these numbered database pages logically are “owned” by various B+ trees where the actual data for the client is contained … and all these B+ trees have a “type of tree” and all of a tree’s pages have a “placement in the tree” flag (Root, or Leaf or implicitly Internal – if not root or leaf).

Ok, Brett, that was “proper” tree and page talk -  I think we need some pictures to show them...

Logically the ownership / containing relationship looks like this:

[Diagram: B+ trees and the database pages they own]

More about B+ Trees

The pages are in turn arranged into B+ trees, where the top page is known as the 'Root' page and the bottom pages are 'Leaf' pages, where all the data is kept. Something like this (note this particular example does not show 'Internal' B+ tree pages):

[Diagram: a simple B+ tree – a root page with partial keys pointing down to leaf pages 13 and 14]

  • The upper / parent page has partial keys indicating that all entries with 4245 + A* can be found in pgno 13, and all entries with 4245 + E* can be found in pgno 14, etc.

  • Note this is a highly simplified representation of what ESE does … it’s a bit more complicated.

  • This is not specific to ESE; many database engines have either B trees or B+ trees as a fundamental arrangement of data in their database files.

The Different trees

You should know that there are different types of B+ trees inside the ESE database that are needed for different purposes. These are:

  1. Data / Primary Trees – hold the table's primary records which are used to store data for regular (and small) column data.

  2. Long Value (LV) Trees – used to store long values. In other words, large chunks of data which don't fit into the primary record.

  3. Index Trees – these are B+ trees used to store indexes.

  4. Space Trees – these are used to track what pages are owned and free / available as new pages for a given B+ tree. Each of the previous three types of B+ tree (Data, LV, and Index) may (if the tree is large) have a set of two space trees associated with them.

Storing large records

Each row of a table is limited to 8 KB (or whatever the page size is) in Active Directory and AD LDS, i.e. each record has to fit into a single 8 KB database page. But you are probably aware that you can fit a LOT more than 8 KB into an AD object or an Exchange e-mail! So how do we store large records?

Well, we have different types of columns as illustrated below:

[Diagram: the different ESE column types]

Tagged columns can be split out into what we call the Long Value Tree. So in the tagged column we store a simple 4 byte number that’s called a LID (Long Value ID) which then points to an entry in the LV tree. So we take the large piece of data, break it up into small chunks and prefix those with the key for the LID and the offset.

So, if every part of the record was a LID / pointer to an LV, then essentially we can fit 1300 LV pointers onto the 8 KB page. BTW, this is what creates the 1300-attribute limit in AD. It's all down to the ESE page size.

Now you can also start to see that when you are looking at a whole AD object you may read pages from various trees to get all the information about your object. For example, for a user with many attributes and group memberships you may have to get data from a page in the ”datatable” Primary tree + “datatable” LV tree + sd_table Primary tree + link_table Primary tree.

Index Trees

An index is used for a couple of purposes. Firstly, to make a list of the records in an intelligent order, such as by surname in alphabetical order. And secondly, to cut down the number of records examined, which sometimes greatly helps speed up searches (especially when the 'selectivity is high' – meaning few entries match).

Below is a visual illustration (with the B+ trees turned on their side to make the diagram easier) of a primary index, which is the DNT index in the AD database – the Data Tree – and a secondary index on dNSHostName. You can see that the secondary index only contains the records which have a dNSHostName populated. It is smaller.

[Diagram: the primary (DNT) index data tree alongside the smaller secondary index on dNSHostName]

You can also see that in the secondary index, the primary key is the data portion (the name) and then the data is the actual Key that links us back to the REAL record itself.

Inside a Database page

Each database page has a fixed header. And the header has a checksum as well as other information like how much free space is on that page and which B-tree it belongs to.

Then we have these things called TAGS (or nodes), which store the data.

A node can be many things, such as a record in a database table or an entry in an index.

The TAGS are actually out of order on the page, but order is established by the tag array at the end.

  • TAG 0 = Page External Header

This contains variable sized special information on the page, depending upon the type of B-tree and type of page in B tree (space vs. regular tree, and root vs. leaf).

  • TAG 1,2,3, etc are all “nodes” or lines, and the order is tracked.

The key & data is specific to the B Tree type.

And TAG 1 is actually node 0!!! So here is a visual picture of what an ESE database page looks like:

[Diagram: layout of an ESE database page – page header, TAG 0 (external header), nodes, and the tag array at the end]

It is possible to calculate this key if you have an object's primary key. In AD this is a DNT.

The formula for that (if you are ever crazy enough to need it) would be:

  • Start with 0x7F, and if it is a signed INT append a 0x80000000 and then OR in the number

  • For example 4248 –> in hex 1098 –> as key 7F80001098 (note 5 bytes).

  • Note: Key buffer uses big endian, not little endian (like x86/amd64 arch).

  • If it was a 64-bit int, just insert zeros in the middle (9 byte key).

  • If it is an unsigned INT, start with 0x7F and just append the number.

  • Note: Long Value (LID) trees and ESE’s Space Trees (pgno) are special, no 0x7F (4 byte keys).

  • And finally other non-integers column types, such as String and Binary types, have a different more complicated formatting for keys.

Why is this useful? Because, for example you can take a DNT of an object and then calculate its key and then seek to its page using esentutl.exe dump page /m functionality and /k option.
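
If you ever do need it, here is a rough PowerShell rendering of the formula above for a signed 32-bit primary key such as an AD DNT (just a sketch of the arithmetic, not an official tool):

# Build the 5-byte ESE key for a signed 32-bit value
$dnt = 4248
$key = ([int64]0x80000000) -bor $dnt   # signed INT: start with the 0x80000000 bit, then OR in the number
'7F' + $key.ToString('X8')             # prefix byte 0x7F -> 7F80001098 (key bytes are big endian)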

The Nodes also look different (containing different data) depending on the ESE B+tree type. Below is an illustration of the different nodes in a Space tree, a Data Tree, a LV tree and an Index tree.

[Diagram: node layout (keys in green, data in dark blue) for space, data, LV and index trees]

The green are the keys. The dark blue is data.

What does a REAL page look like?

You can use esentutl to dump pages of the database if you are investigating some corruption for example.

Before we can dump a page, we want to find a page of interest (picking a random page could give you just a blank page), so first we need some info about the table schema. To start, you can dump all the tables and their associated root page numbers like this:

[Screenshot: esentutl /mm output filtered with findstr, listing tables with their objidFDP and pgnoFDP]

Note, we have findstr'd the output again to get a nice view of just the tables and their pgnoFDP and objidFDP. Findstr.exe is case sensitive, so use the exact format or use the /i switch.

objidFDP identifies this table in the catalog metadata. When looking at a database page we can use its objidFDP to tell which table this page belongs to.

pgnoFDP is the page number of the Father Data Page – the very top page of that B+ tree, also known as the root page.  If you run esentutl /mm <dbname> on its own you will see a huge list of every table and B-tree (except internal “space” trees) including all the indexes.

So, in this example page 31 is the root page of the datatable here.
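
If you want to reproduce that view yourself, a rough approximation of the commands behind the screenshot above (run from an elevated prompt against an offline copy of the database; the exact findstr pattern used in the screenshot may differ) is:

esentutl /mm ntds.dit
esentutl /mm ntds.dit | findstr /i "datatable"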

Dumping a page

You can dump a page with esentutl using /m and /p. Below is an example of dumping page 31 from the database - the root page of the “datatable” table as above.

[Screenshots: esentutl dump of page 31 – the page header and its TAGs]

The objidFDP is the number indicating which B-tree the page belongs to. And the cbFree tells us how much of this page is free. (cb = count of bytes). Each database page has a double header checksum – one ECC (Error Correcting Code) checksum for single bit data correction, and a higher fidelity XOR checksum to catch all other errors, including 3 or more bit errors that the ECC may not catch.  In addition, we compute a logged data checksum from the page data, but this is not stored in the header, and only utilized by the Exchange 2016 Database Divergence Detection feature.

You can see this is a root page and it has 3 nodes (4 TAGS – remember TAG1 is node 0 also known as line 0! 🙂 and it is nearly empty! (cbFree = 8092 bytes, so only 100 bytes used for these 3 nodes + page header + external header).

The objidFDP tells us which B-Tree this page belongs to.

And notice the PageFlushType, which is related to the JET Flush Map file we could talk about in another post later.

The nodes here point to pages lower down in the tree. And we could dump a next level page (pgno: 1438)....and we can see them getting deeper and more spread out with more nodes.

[Screenshots: esentutl dump of page 1438, the next level down in the tree]

So you can see this page has 294 nodes! Which again all point to other pages. It is also a ParentOfLeaf meaning these pgno / page numbers actually point to leaf pages (with the final data on them).

Are you bored yet? 😄

Or are you enjoying this like a geek? Either way, we are nearly done with the page internals and the tree climbing here.

If you navigate further down, eventually you will get a page with some data on it. For example, let's dump page 69, which TAG 6 is pointing to:

[Screenshots: esentutl dump of page 69, a leaf page containing data]

So this one has some data on it (as indicated by the “Leaf page” indicator under the fFlags). 

Finally, you can also dump the data - the contents of a node (ie TAG) with the /n switch like this:

[Screenshot: esentutl /n node dump output showing the columns and raw data of the record]

Remember: The /n specifier takes a pgno : line or node specifier … this means that the :3 here, dumped TAG 4 from the previous screen.  And note that trying to dump “/n69:4” would actually fail.

This /n will dump all the raw data on the page along with the information of columns and their contents and types. The output also needs some translation because it gives us the columnID (711 in the above example) and not the attribute name in AD (or whatever your database may be). The application developer would then be able to translate those column IDs to some meaningful information. For AD and ADLDS, we can translate those to attribute names using the source code.
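
Pulling the page-dump and node-dump syntax from this walkthrough together (a sketch using this example's page and node numbers; it is worth confirming the exact switch layout with esentutl /? on your build):

esentutl /m ntds.dit /p31      # dump page 31, the datatable root page in this example
esentutl /m ntds.dit /n69:3    # dump node 3 on page 69 (remember, node 3 is TAG 4)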

Finally, there really should be no need to do this in real life, other than in a situation where you are debugging a database problem. However, we hope this provided a good and ‘realistic’ demo to help understand and visualize the structure of an ESE database and how the data is stored inside it!

Stay tuned for more parts .... which Brett says will be significantly more useful to everyday administrators! 😉

The End!

Linda & Brett

Azure Stack is now in Russia!


On November 30, the key and very important event of the autumn took place: the Microsoft forum "Digital Business Platform"!

The event was truly large-scale: more than 700 in-person attendees, over 40,000 online participants, and more than 40 fundamental and unique sessions on a variety of topics: legal compliance during business transformation, the use of modern cloud technologies, discussions of business optimization, and even a special panel on the development of blockchain and its impact on the future of the IT business. Twenty-eight sessions with deep technical content were streamed exclusively to the online audience. And one of the key topics, of course, was the Microsoft cloud platform – Azure.

The undisputed sensation of the event was the main announcement: Azure Stack is now available in Russia.

Please welcome it!

In short, Azure Stack is an extension of Azure that brings the agility and efficiency of cloud computing to on-premises datacenters and lets you build modern applications in hybrid cloud environments with the required level of flexibility, control, and protection.

The cloud market in Russia is growing rapidly and is driving the digital transformation of business. Azure Stack is not only a hybrid cloud solution but also a platform for hyper-converged infrastructure, on which distributed solutions for the industrial Internet of Things and industrial blockchain can be built. The arrival of Azure Stack in Russia confirms the demand for these scenarios from business and should help drive their adoption.

Azure Stack is an extension of Azure; however, the customer retains full control over the use of both local capacity and the public Azure cloud. They can choose where to deploy a new virtual machine instance and where to store data: in their own or a partner's datacenter in Russia, or in one of the 42 Azure regions around the world.

With the arrival of Azure Stack, there are even more options for complying with the personal data law, and they are even easier to implement. Azure Stack is installed in the customer's own datacenter or at a partner's datacenter in Russia, and all operations on the data can take place right there.

Read about this and other solutions on our website.
All sessions are available on demand.

http://msplatform.ru/
