
Announcing VDI on Azure Webinars (Following the Windows 10 on Azure Announcement!)


Hello, this is Maejima from Microsoft.
In April this year I posted "Windows 10 on Azure の現状について as of 2017.04" (The State of Windows 10 on Azure, as of April 2017), which drew a big response. At last I can share an update.

【Windows virtualization use rights coming to CSP】

The information has not yet been fully published or organized, but in short: starting September 6, "Windows 10 on Azure" and "Windows 10 on qualified clouds" will be officially allowed!

As previously reported, the right to run Windows 10 on Azure is granted if you hold a Windows 10 Enterprise E3/E5 per User or VDA per User license, or the newly announced Microsoft 365 Enterprise license. For E3, however, there will apparently be two kinds of licenses: one that includes the right to bring the license to the cloud and one that does not. Details on this should follow.

As for "Windows 10 on qualified clouds" other than Azure, this applies to the Authorized QMTH Partners reachable from the link below, though the list of QMTH Partners has not been published yet.

【Qualified Multitenant Hoster Program】
https://www.microsoft.com/en-us/CloudandHosting/licensing_sca.aspx

 

Now, with this program in place, VDI on Azure offerings such as XenDesktop Essentials can finally be used in earnest.
To introduce VDI on Azure to as many people as possible, we are holding a series of six online seminars (webinars). We will welcome guest speakers from our alliance partner Citrix and from partners well versed in both VDI and Azure, with different content each session.

Since the webinars are online, you can easily join from your own desk, and of course participation is completely free, so please feel free to attend.

Below are the sessions (speaker honorifics omitted). Registration links marked "in preparation" will be updated as soon as they are ready.

Session 1: 2017/7/20, 17:00–18:00
Title: Realizing Workstyle Reform with Citrix on Azure
Overview: Client virtualization is drawing attention as a way to support workstyle reforms such as working from home. This seminar introduces Citrix Cloud and related services, which provide application and desktop delivery infrastructure as a service on Azure and enable secure access to your work environment from any place and any device.
Speakers: Citrix Systems Japan 橋本 / Microsoft Japan 前島 鷹賢
Registration: https://azure.microsoft.com/ja-jp/community/events/daas-azure-20-july-japan/

Session 2: 2017/8/2 (afternoon, time TBD)
Title: Key Points for Configuring, Deploying, and Operating "Citrix on Azure"
Overview: Combining Citrix Cloud and Azure lets you manage and operate all resources, including management servers and virtual machines, in the cloud. Using the XenApp and XenDesktop Service, one of the Citrix Cloud services, this seminar covers technical points such as the connection flow to virtual machines, deployment methods, and operations in a scenario where resources are deployed to Azure.
Speakers: Citrix Systems Japan 馬場 章弘 / Microsoft Japan 前島 鷹賢
Registration: in preparation

Session 3: 2017/8/10 (morning, time TBD)
Title: The Best Answers to Increasingly Diverse VDI Configurations – Configuration Patterns Using "Citrix on Azure"
Overview: Beyond traditional on-premises VDI, cloud-based services such as XenApp Essentials have appeared, and configuration patterns are diversifying. Choosing the right configuration pattern is critical to getting the most out of VDI. Based on knowledge built up across many VDI and Azure projects, this seminar explains how to select the optimal configuration pattern for your requirements.
Speakers: Japan Business Systems 早川 和輝 / Microsoft Japan 前島 鷹賢
Registration: in preparation

Session 4: 2017/9/6 (morning, time TBD)
Title: "Citrix on Azure" Sizing Best Practices
Overview: On-premises or in the cloud, the essentials of sizing XenDesktop and XenApp do not change. Drawing on know-how built up on-premises, this seminar covers the components required for Citrix on Azure (Citrix Cloud), such as StoreFront and NetScaler, and the right way to size your VDAs.
Speakers: Nissho Electronics 石川 大 / Microsoft Japan 前島 鷹賢
Registration: in preparation

Session 5: 2017/9/7 (afternoon, time TBD)
Title: "Citrix on Azure" – The Fastest Way to Build a VDI Environment with XenDesktop Essentials
Overview: An SE who has actually carried out a "Citrix XenDesktop Essentials" POC shares the system configuration, the build process, and the know-how and tips gained in the field, holding nothing back. We will also introduce endpoint devices well suited to "Citrix on Azure".
Speakers: Ascentech 馬場 泰一 / Microsoft Japan 前島 鷹賢
Registration: in preparation

Session 6: 2017/9/15 (morning, time TBD)
Title: TBD
Overview: TBD (a technical session is planned)
Speakers: Microsoft Japan 山本 美穂 / 小田 学
Registration: in preparation

 

Beyond the webinars, we are also planning other opportunities to share information about VDI on Azure, including a seminar with hands-on labs in Tokyo on Thursday, August 24 and a solution seminar in Shinagawa on Tuesday, September 19.

VDI on Azure has the potential to become a powerful solution for workstyle reform and more. We hope you will put it to work for your business.


Introducing Microsoft 365


(This article is a translation of Introducing Microsoft 365, posted to the Office Blogs on July 10, 2017. For the latest information, please refer to the original article.)

This post was written by Kirk Koenigsbauer, Corporate Vice President for the Office team.

At the recently held Inspire (English), Satya Nadella announced Microsoft 365 (English). The offering brings together Office 365, Windows 10, and Enterprise Mobility + Security as a complete, intelligent, secure solution to empower employees. Microsoft 365 represents a fundamental shift in how we design, build, and bring products to market to address our customers' needs for a modern workstyle.

Workstyles are changing dramatically: employee expectations are shifting, teams are becoming more diverse and global, and the threat landscape is growing ever more complex. Out of these changes, a new culture of work is emerging. Customers tell us they want to give their employees the latest technology so they can embrace this new culture.

With more than 100 million commercial monthly active users of Office 365 and more than 500 million Windows 10 devices in use, Microsoft is in a unique position to empower employees and to help businesses grow and innovate.

To address the needs of every business, large or small, we offer two options: Microsoft 365 Enterprise (English) and Microsoft 365 Business (English).

Microsoft 365 Enterprise is the plan for large organizations. It brings together Office 365 Enterprise, Windows 10 Enterprise, and Enterprise Mobility + Security so employees can work together creatively and securely.

Microsoft 365 Enterprise has the following strengths:

  • Unlocks creativity by letting people work naturally with ink, voice, and touch, assisted by tools powered by AI and machine learning.
  • Provides an exceptionally broad and deep set of apps and services with a common toolkit for teamwork, giving people flexibility and choice in how they connect, share, and communicate.
  • Simplifies IT by unifying the management of users, devices, apps, and services.
  • Helps safeguard customer data, company data, and intellectual property with built-in, intelligent security.

Microsoft 365 Enterprise comes in two plans, Microsoft 365 E3 and Microsoft 365 E5, and both go on sale on August 1, 2017.

Microsoft 365 Enterprise (English) builds on the foundation of Secure Productive Enterprise, which achieved great success with triple-digit seat growth over the past year. Going forward, Microsoft 365 Enterprise replaces Secure Productive Enterprise, and we promise new customers our continued commitment to helping employees work together creatively and securely.

Microsoft 365 Business is aimed at small and medium-sized businesses with up to 300 users. It brings together Office 365 Business Premium, Windows 10 with purpose-built security and management features, and Enterprise Mobility + Security, strengthening business security and simplifying IT management while empowering employees.

Microsoft 365 Business has the following strengths:

  • Helps companies achieve more than ever by strengthening connections among employees, customers, and suppliers and helping them work together.
  • Empowers employees to get work done from anywhere, on any device.
  • Protects company data across all devices with always-on security.
  • Simplifies the setup and management of employee devices and services from a single IT console.

Microsoft 365 Business will enter public preview on August 2, 2017, and will become generally available worldwide in the fall of 2017, priced at US$20 per user per month.

As part of our efforts for small and medium-sized customers, we also announced that three purpose-built applications (English) are coming to Office 365 Business Premium and Microsoft 365 Business:

  • Microsoft Connections — an easy-to-use email marketing service.
  • Microsoft Listings — an easy way to publish your business information on top sites.
  • Microsoft Invoicing — a new feature for creating professional invoices and getting paid fast.

It was also announced that MileIQ, a mileage-tracking app, is being added to Office 365 Business Premium.

Satya Nadella also spoke about the major opportunity Microsoft 365 creates for partners to grow their businesses. Microsoft 365 will help our more than 64,000 cloud partners differentiate their services, simplify their sales processes, and increase their revenue.

According to a Forrester Total Economic Impact™ study (commissioned and conducted by Forrester Consulting), partners can expect average profit growth over the next three years of 35% with Microsoft 365 Enterprise (English) and 20% with Microsoft 365 Business (English). Partners who want to learn more about training, selling, and deployment resources should visit the Microsoft 365 partner site (English).

We at Microsoft are deeply invested in Microsoft 365 and excited about how it can help our customers and partners grow and innovate. To learn more about Microsoft 365, visit Microsoft.com/Microsoft-365 (English).

—Kirk Koenigsbauer

* The contents of this post (including attachments and links) are current as of the date of writing and are subject to change without notice.

Understanding Shadow Redundancy


Shadow redundancy is a mechanism through which Exchange 2016 ensures that messages don't get lost in transit: Exchange maintains a redundant copy of each message until it has been passed to the next hop. This is done by keeping the message on another Mailbox server. There are a couple of points to consider when discussing this redundant storage:

1. Shadow redundancy works while the message is in transit, not once it has been delivered.

2. The boundary within which shadow redundancy works depends on whether you have a DAG or not. Obviously, two Mailbox servers are needed for shadow redundancy to work. If a DAG is in place, the Mailbox server will contact another Mailbox server in the DAG to hold the shadow copy. If the DAG spans multiple sites, a server in the remote site is preferred; this can be modified with PowerShell. If no DAG is in the picture, then the boundary is the Active Directory site.

3. The shadow copy is stored for as long as the mail remains within the boundary. Once the mail leaves the boundary, the shadow copy no longer needs to be maintained.

How it works:

When an incoming message arrives, as long as shadow redundancy is enabled, the Mailbox server that receives the message makes a redundant copy of the message on another Mailbox server within the transport high availability boundary before acknowledging receipt of the message back to the sending server. A very important cmdlet to understand here is Set-TransportConfig, which exposes many of the related connection settings.

During this entire process, the primary and shadow servers always maintain a heartbeat connection.

When the message is delivered to the mailbox, or when it leaves the boundary, the primary server updates the discard status, informing the shadow server that the message has been successfully delivered. Once this happens, the shadow server moves the message to Safety Net.

If an outage happens:

Set-TransportConfig has a setting called ShadowHeartbeatFrequency. It decides how long the shadow server waits before opening a connection to the primary server to check on its status. ShadowResubmitTimeSpan decides how long the shadow server keeps trying to contact the primary server before deciding that it has failed and taking over ownership of the shadow messages.
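
Both settings live on the organization's transport configuration and can be inspected and tuned from the Exchange Management Shell. A minimal sketch (the timespan values are illustrative only, not recommendations):

# Review the current shadow redundancy settings.
Get-TransportConfig | Format-List Shadow*

# Tighten the heartbeat interval and the failover window.
Set-TransportConfig -ShadowHeartbeatFrequency 00:01:00 -ShadowResubmitTimeSpan 01:00:00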

Migrating to Azure SQL Database


As well as hosting the databases used by new cloud applications, Microsoft’s Azure SQL Database service can also become a home for existing databases when organisations migrate their on-premises applications to the cloud.  The operational, financial and technical benefits of using cloud services mean that organisations are increasingly considering how they can modernise their existing IT platforms by moving them to the cloud.  Previously, organisations were happy to deploy new capabilities in the cloud while they retained their existing platforms on-premises in their data centres.  That trend is now changing and an increasing number of organisations I work with are asking how they can migrate what they already have to the cloud – and then make best use of the cloud services available to them.

This article considers how and why organisations are moving their databases to Microsoft’s cloud relational database service, Azure SQL Database.

What is Azure SQL Database? (The summer 2017 answer)

Azure SQL Database is Microsoft’s cloud relational database service, a fully managed (PaaS) variant of the SQL Server database engine.  It provides a Database-as-a-Service capability where Microsoft supplies a user database to hold an application’s tables and code while it manages the underlying database server platform.

Although some of that server platform is accessible by developers and system administrators, most of SQL Server's server-level functionality and management features aren't – such as the much-missed SQL Server Agent scheduling service.  While that might seem like a significant limitation, Microsoft's objective is to provide an API endpoint that receives T-SQL commands and then replies with data and messages, and it does that very well.

If there's one non-technical advantage of the service that organisations praise time after time, it's that the hourly cost of using the service includes the software licensing it uses.  Businesses using the service no longer need to worry about buying enough SQL Server licensing for the SQL Server workloads it hosts for them.

However, the service is about to evolve.  To complement its Database-as-a-Service capability, Microsoft recently announced a Managed Instance variant of the service.  While very little about it has been revealed so far, we know that it'll provide access to a fully managed instance of the SQL Server database engine, rather than to just a database.  Developers and administrators will be able to connect, develop, deploy, and manage in the same way as they would with a traditional installation of the SQL Server database engine.  The advantage for them, though, is that as a PaaS service, Microsoft will manage, among many other things, the instance's patching, backups, and high availability.

While that new variant of the service will undoubtedly accelerate the migration of databases to Azure, it's not yet available.  In fact, all that's publicly known about it so far, and its complementary Database Migration Service, is what's in a nine-minute MSDN Channel 9 video here.  Consequently, this article focuses on migrating to the Database-as-a-Service variant.

Deciding what to migrate

Deciding when and how to move applications from an on-premises data centre to the cloud is a broad subject of its own and beyond the scope of this article.  However, Microsoft provides very detailed technical guidance, "Moving Applications to the Cloud", in its Patterns and Practices library that’s available to download from here.

When considering which existing databases make good candidates to migrate to the Azure SQL Database service, rather than perhaps to a regular instance of SQL Server hosted in an IaaS virtual machine, there are some guiding principles which can help you decide.  While the service is functionally rich and high-performing, the way Microsoft delivers the Database-as-a-Service format introduces some limitations compared to how a traditional instance of SQL Server works.  The list below identifies some of the most significant considerations organisations should review when thinking of moving a database to the Azure SQL Database service:

  • Single vs. multiple database applications – When a query to one database then accesses or modifies data in another, it uses what's known as a cross-database reference.  While these are natively and transparently supported by traditional instances of SQL Server, the Azure SQL Database service doesn't support them.  It does have external table and elastic query features that can be used to create workarounds, but the service works best when an application has a single, albeit sometimes very large and very busy, database.
  • Application vendor support – The biggest non-technical consideration is often whether the vendor who supplies an application will still provide support for it when it’s using a PaaS database service.  This isn’t usually a problem for applications developed internally, especially those written specifically to use Azure SQL Database, but it can be a blocker to migrating others if the vendor won’t provide support.  Generally speaking, the best application databases to move to Azure SQL Database are those developed internally or for applications which can no longer get or need vendor support.
  • Server-level features – Some data-centric applications are written to use some of SQL Server’s powerful, but server-level, features such as change data capture or service broker, or some support teams need to use monitoring tools such as SQL Profiler or Performance Monitor.  While the Azure SQL Database service provides a strong set of management tools and functionality, there are some features that cannot be provided in a PaaS service.  The list of features the service does and doesn’t support is available here and where a required feature isn’t available then either application refactoring to use a supporting Azure service or the use of IaaS virtual machines is usually the alternative.
  • Workload sizing – The way that server resources, such as CPU and memory, are allocated to a database hosted by the Azure SQL Database service is very different to a traditional SQL Server database engine. Databases are allocated Database Transaction Units (DTUs), which determine the amount of server resource available to them, or they can share a number of DTUs with other databases in an elastic pool. Knowing the smallest number of DTUs a workload requires is important, as the cost of using the service is determined by how many DTUs are allocated.  A true 'only-pay-for-what-you-need' charging model does, though, introduce the possibility of reducing operating costs by "dialing down" the number of DTUs allocated at quieter times of the day (see the sketch after this list).
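
As an illustration, that dial-down can be scripted with the AzureRM PowerShell module and run on a schedule. This is a sketch only – the resource group, server, and database names are hypothetical:

# Scale the database down to a smaller service objective overnight.
Set-AzureRmSqlDatabase -ResourceGroupName "rg-example" `
    -ServerName "sqlserver-example" `
    -DatabaseName "OrdersDb" `
    -RequestedServiceObjectiveName "S1"

Scaling back up in the morning is the same call with a larger service objective name.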

Migration assistance

Having decided to migrate a database to the Azure SQL Database service, the best way to start that process is by using Microsoft’s Data Migration Assistant tool.  This reviews an existing SQL Server database for known incompatibilities with the Azure SQL Database service and reports about issues that will block a database’s migration to the service as well as code that uses only partially supported features – both valuable insights into whether any development work will be needed.  The tool is available to download from here.

Figure 1 – The Azure SQL Database specific options

When a database is ready for migration to the Azure SQL Database service, the most popular way to upload it is to create and deploy a BACPAC.  While there are command-line methods which can be scripted, Microsoft also provides a GUI-based method that uses SQL Server Management Studio and the Azure Portal.  Because of the constantly evolving nature of the Azure SQL Database service, and its isolation within the Azure platform, it's not possible to upload a regular SQL Server database backup file and restore it to the service.  Consequently, Microsoft provides comprehensive guidance about how to migrate a database here.
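
For example, the scriptable route can use the SqlPackage.exe command-line tool to export the source database to a BACPAC before it is imported in the portal. A sketch, with hypothetical server, database, and path names:

# Export the source database to a BACPAC file.
SqlPackage.exe /Action:Export `
    /SourceServerName:"onprem-sql01" `
    /SourceDatabaseName:"OrdersDb" `
    /TargetFile:"C:\Temp\OrdersDb.bacpac"

The resulting .bacpac file can then be uploaded to Azure storage and imported through the portal's database import option, shown in Figure 2.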

Figure 2 – The database import option in the Azure Portal

Migrating to Azure SQL Database

Today, moving an existing SQL Server database to the Azure SQL Database service isn't just a case of hosting it on the same server software but on someone else's compute infrastructure.  Instead, it's a migration to a Platform-as-a-Service variant of the SQL Server database engine – a service designed to support databases for both cloud applications and their on-premises and IaaS virtual machine relations.  Having access to cloud scalability, pay-for-what-you-use billing, and a fully managed database is a trade-off, though.  Databases used by simple single-database applications are usually the easiest to migrate, ageing third-party vendor-supplied systems the hardest.  When they do migrate, though, databases hosted by the service can be easier and cheaper to run, and get the newest SQL Server features first.

Gavin Payne is the Head of Digital Transformation for Coeo, a Microsoft Gold partner that provides consulting and managed services for Microsoft data management and analytics technologies.  He is a Microsoft Certified Architect and Microsoft Certified Master, and a regular speaker at community and industry events.

Updated Microsoft Cloud Networking for Enterprise Architects poster has links to the latest resources


The Microsoft Cloud Networking for Enterprise Architects poster has been reviewed and updated with links to the latest resources:

Download this multi-page poster in PDF or Visio format, or read an article version. Get this poster in eleven languages here.

The Microsoft Cloud for Enterprise Architects series

The Microsoft Cloud Networking for Enterprise Architects poster is just one in a series that provides detailed architectural advice from choosing the right Microsoft cloud offerings to designing IT elements such as security, networking, identity, mobility, and storage. This poster set shows the breadth and depth of the Microsoft cloud, the industry’s most complete cloud solution, and how it can be used to solve your IT and business problems.

Browse through the whole set of posters at http://aka.ms/cloudarchseries. For these posters in eleven languages, see this blog post.

 

To join the CAAB, become a member of the CAAB space in the Microsoft Tech Community and send a quick email to CAAB@microsoft.com to introduce yourself. Please feel free to include any information about your experience in creating cloud-based solutions with Microsoft products or areas of interest. Join now and add your voice to the cloud adoption discussion that is happening across Microsoft and the industry.

Mobility and Identity admins, get EMS up and running at Microsoft Ignite!


Microsoft Ignite is your chance for access to in-depth training, deep dives and demos of new tech, and to connect with your peers. Keynotes by Satya Nadella, Microsoft CEO, and Harry Shum, Executive Vice President of Microsoft AI, will showcase the Microsoft vision for the future. More than 700 sessions will give you insights and roadmaps from industry leaders so that you can bring back bold new ideas to your organization. Join us at Microsoft Ignite this year in Orlando, Florida from September 25-29, 2017.

In addition to the scheduled sessions available, we’re also offering a unique opportunity to get hands-on deployment guidance for Enterprise Mobility + Security at Microsoft Ignite pre-day on September 24. EMS pre-day sessions are designed for admins and include 1:1 collaboration with the engineers who built Microsoft Azure Active Directory and Microsoft Intune. You can tap into their knowledge to plan your deployment and skill up on the latest in identity and access management and mobile productivity.

This year, there are two EMS pre-day sessions offered for a limited number of Microsoft Ignite attendees:

  1. EMS pre-day option 1: Get mobile productivity up and running with Enterprise Mobility + Security (EMS) – Intended for IT admins who manage apps and devices for their organizations and want to understand how Microsoft Intune can help them manage mobile devices and applications. The pre-day will also address conditional access in detail and help you understand how Graph/Intune has been helping customers with automation and data extraction. This will be followed by 1:1 interaction time with engineering.
  2. EMS pre-day option 2: Get identity and access management up and running for Office 365 and thousands of other applications – Intended for Microsoft Azure Active Directory (Azure AD) administrators within Office 365 and EMS, this pre-day provides attendees a deep dive into end-to-end authentication and how it flows between Office 365 applications as well as browser apps and native applications. This session also explores troubleshooting authentication, along with real-world information on how to properly configure authentication and how it affects users. This will be followed by 1:1 interaction time with engineering and an opportunity to troubleshoot your own organization's deployment blockers.

Pre-day sessions can fill up fast. To reserve your spot, simply select the Full Conference Pass + pre-day session option when you register for Microsoft Ignite. If you're already registered for Microsoft Ignite, you can sign in to your registration record and add the EMS pre-day session of your choice.

We look forward to working with you at one of the Ignite pre-day sessions—Register now for Microsoft Ignite and EMS pre-day!

See you there!

Tip of the Day: The Best of Defrag Tools - Debugging the Network Stack


Today's tip...

C'mon, you know you've long dreamt of having the ninja skills to debug the network stack! I mean, who hasn't? Well, now you can, grasshopper. Simply watch the following very special Defrag Tools episode:

Defrag Tools #177 - Debugging the Network Stack

In this episode of Defrag Tools, Chad Beeder is joined by Jeffrey Tippet from the Windows Networking team to talk about how to debug networking problems in NDIS (Network Driver Interface Specification) using the !ndiskd debugger extension in WinDbg.

Resources:

The NDIS Blog

Timeline:

[00:00] Introductions

[01:10] What is NDIS (Network Driver Interface Specification)

[03:11] Common problems encountered by the Networking Team. (Bug Check 0x9F DRIVER_POWER_STATE_FAILURE, Bug Check 0x133 DPC_WATCHDOG_VIOLATION)

[06:27] Introducing the !ndiskd debugger extension. Start with !ndiskd.help

[10:27] !ndiskd.netreport gives you a Network Debug Report including graphical overview of the network configuration

[18:23] !ndiskd.netreport -verbose takes much longer to run, but gives a lot more detail including animations of how many packets are going over each adapter

[22:58] To enable the logging of recent network traffic, and get the animations in the netreport, enable NBL Logging by setting a registry key (documented here).

[25:20] Wi-Fi can act like an access point in some cases (e.g. Wi-Fi Direct), and how this shows up in the network report.

[27:30] The other tabs on the report: Useful if you need to send the report to someone else.

[31:34] DRIVER_POWER_STATE_FAILURE debugging tips: Use !ndiskd.oid to see which network OIDs (networking requests) are pending. One of these may be the power request which is holding up the network stack.

[34:40] DPC_WATCHDOG_VIOLATION debugging tips

[37:15] Comments/questions? Email us at defragtools@microsoft.com

Want to watch more Defrag Episodes? Check out the Defrag Tools series page.

Eternal Synergy Exploit Analysis


Introduction

Recently we announced a series of blog posts dissecting the exploits released by the ShadowBrokers in April 2017; specifically some of the less explored exploits. This week we are going to take a look at Eternal Synergy, an SMBv1 authenticated exploit. This one is particularly interesting because many of the exploitation steps are purely packet-based, as opposed to local shellcode execution. Like the other SMB vulnerabilities, this one was also addressed in MS17-010 as CVE-2017-0143. The exploit works up to Windows 8, but does not work as written against any newer platforms.

This post has four main parts. We will deep-dive into the vulnerability, followed by a discussion of how the vulnerability was weaponized to create Read/Write/eXecute primitives that are used as building blocks throughout the exploit. We will then walk through the execution of EternalSynergy and see how these primitives were used to deliver a full exploit. Finally, we will briefly discuss the effect of recent mitigations on the presented exploit techniques.

The Vulnerability: CVE-2017-0143

The root cause of this vulnerability stems from not taking the command type of an SMB message into account when determining if the message is part of a transaction. In other words, as long as the SMB header UID, PID, TID and OtherInfo fields match the corresponding transaction fields, the message would be considered to be part of that transaction.

Usually, the OtherInfo field stores a MID. In the case of SMB_COM_WRITE_ANDX messages, however, it stores a FID instead. This creates a potential message type confusion: Given an existing SMB_COM_WRITE_ANDX transaction, an incoming SMB message with MID equal to the transaction FID would be included in the transaction.

PTRANSACTION
SrvFindTransaction (
    IN PCONNECTION Connection,
    IN PSMB_HEADER Header,
    IN USHORT Fid OPTIONAL
    )
{
    ...

    //
    // If this is a multipiece transaction SMB, the MIDs of all the pieces
    // must match.  If it is a multipiece WriteAndX protocol, the pieces
    // are matched using the FID.
    //

    if (Header->Command == SMB_COM_WRITE_ANDX) {
        targetOtherInfo = Fid;
    } else {
        targetOtherInfo = SmbGetAlignedUshort( &Header->Mid );
    }

    ...

    //
    // Walk the transaction list, looking for one with the same
    // identity as the new transaction.
    //

    for ( listEntry = Connection->PagedConnection->TransactionList.Flink;
          listEntry != &Connection->PagedConnection->TransactionList;
          listEntry = listEntry->Flink ) {

        thisTransaction = CONTAINING_RECORD(
                            listEntry,
                            TRANSACTION,
                            ConnectionListEntry
                            );

        if ( ( thisTransaction->Tid == SmbGetAlignedUshort( &Header->Tid ) ) &&
             ( thisTransaction->Pid == SmbGetAlignedUshort( &Header->Pid ) ) &&
             ( thisTransaction->Uid == SmbGetAlignedUshort( &Header->Uid ) ) &&
             ( thisTransaction->OtherInfo == targetOtherInfo ) ) {

            ...

            // A transaction with the same identity has been found

            ...
        }
...
}
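
To make the type confusion concrete, below is a toy PowerShell model of that lookup – illustrative pseudocode only, not the driver's actual logic. It shows why a non-WriteAndX message whose MID equals a WriteAndX transaction's FID satisfies the same identity check:

# Toy model of SrvFindTransaction's identity matching (illustrative only).
function Find-Transaction($TransactionList, $Header, $Fid) {
    # WriteAndX transactions are matched on the FID; everything else on the MID.
    $targetOtherInfo = if ($Header.Command -eq 'SMB_COM_WRITE_ANDX') { $Fid } else { $Header.Mid }
    $TransactionList | Where-Object {
        $_.Tid -eq $Header.Tid -and $_.Pid -eq $Header.Pid -and
        $_.Uid -eq $Header.Uid -and $_.OtherInfo -eq $targetOtherInfo
    }
}
# An SMB_COM_TRANSACTION_SECONDARY header with Mid = 0x4000 therefore matches
# a WRITE_ANDX transaction whose OtherInfo (its FID) is 0x4000.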

Exploitation

When an SMB message arrives, the appropriate handler copies its contents into the corresponding transaction buffer, namely InData. The SMB_COM_TRANSACTION_SECONDARY handler assumes that the InData address points to the start of the buffer.

if ( dataCount != 0 ) {
    RtlMoveMemory(
        transaction->InData + dataDisplacement,
        (PCHAR)header + dataOffset,
        dataCount
        );
}

However, in the case of a SMB_COM_WRITE_ANDX transaction, each time a SMB is received for that transaction, the InData address is updated to point to the end of the existing data.

 RtlCopyMemory(transaction->InData, writeAddress, writeLength );

//
// Update the transaction data pointer to where the next
// WriteAndX data buffer will go.
//

transaction->InData += writeLength;

Leveraging the packet confusion, an attacker can insert an SMB_COM_TRANSACTION_SECONDARY message into an SMB_COM_WRITE_ANDX transaction. In that case, InData will point past the start of the buffer, so the SMB_COM_TRANSACTION_SECONDARY handler can overflow the buffer while copying the incoming message data.

Taking Over a Transaction

The building block for the RWX primitives used in the exploit takes over a transaction structure by exploiting the message confusion described in the previous section. First, a 'control' transaction is allocated via a SMB_COM_TRANSACTION client message.

Figure 1 - Memory layout before packet type confusion is triggered. Stripes represent attacker-controlled input.

 

kd> dt srv!TRANSACTION 0xfffff8a00167f010
   ...
   +0x080 InData           : 0xfffff8a0`0167f110
   +0x088 OutData          : (null)
   ...
   +0x0a4 DataCount        : 0x0
   +0x0a8 TotalDataCount   : 0x5100
   ...
   +0x0ba Tid              : 0x800
   +0x0bc Pid              : 0xab9e
   +0x0be Uid              : 0x800
   +0x0c0 OtherInfo        : 0x4000

Then, an SMB_COM_WRITE_ANDX message is sent, crafted to exploit the packet confusion. As a result, the InData pointer of the control transaction is corrupted to point past the start of the buffer – in this case, off by 0x200 bytes.

kd> dt srv!TRANSACTION 0xfffff8a00167f010
   ...
   +0x080 InData           : 0xfffff8a0`0167f310
   +0x088 OutData          : (null)
   ...
   +0x0a4 DataCount        : 0x200
   +0x0a8 TotalDataCount   : 0x5100
   ...
   +0x0ba Tid              : 0x800
   +0x0bc Pid              : 0xab9e
   +0x0be Uid              : 0x800
   +0x0c0 OtherInfo        : 0x4000

Next, an SMB_COM_TRANSACTION_SECONDARY message is sent to the same transaction, and by leveraging the corrupted InData pointer it modifies a neighboring, 'victim' transaction. We revisit below how the target write address is computed.

if ( dataCount != 0 ) {
    RtlMoveMemory(
        transaction->InData + dataDisplacement,
        (PCHAR)header + dataOffset,
        dataCount
        );
}

The incoming message dataDisplacement is large enough to reach the neighboring transaction.

kd> dv dataDisplacement
dataDisplacement = 0x5020

 

Figure 2 - Memory layout after packet type confusion results in overwriting victim transaction. Stripes represent attacker-controlled input.

 

Specifically, it will overwrite a transaction's OtherInfo field with an attacker-controlled value (in this case 0), so that all future messages sent using MID=0 will be directed to the victim transaction. Below we see the victim transaction just before the overwrite happens.

 kd> dt srv!TRANSACTION 0xfffff8a00167f310+0x5020-0xc0
   ...
   +0x080 InData           : 0xfffff8a0`0168436c
   +0x088 OutData          : 0xfffff8a0`01684ffc
   ...
   +0x0ba Tid              : 0x800
   +0x0bc Pid              : 0xab9f
   +0x0be Uid              : 0x800
   +0x0c0 OtherInfo        : 0x8ccb

After taking over a victim transaction, the exploit can predictably continue corrupting fields within the same or other transactions, and reliably trigger them by sending a message to the transaction. Note that for this technique to work, the attacker must be able to predictably allocate a pair of neighboring transactions.

Remote Arbitrary Write Primitive

The Arbitrary Write Primitive allows the attacker to modify memory contents on the victim system, and serves as the foundation for the rest of the techniques used in this exploit. To corrupt memory, it leverages the technique described in the previous section. Specifically, the Write Primitive is constructed in two steps:

 

Figure 3 - The exploit constructs an Arbitrary Write primitive by corrupting the victim transaction. Stripes represent attacker-controlled input.

 

In Step #1, the victim transaction InData buffer address is overwritten such that it points to the target write address.

Figure 4 – Example SMB message used to perform Step #1 for target address 0xfffff802846f4000.

 

Next, in Step #2, the attacker can overwrite arbitrary kernel memory by sending a message to the victim transaction. Upon receiving the message, its contents will be copied to the InData buffer; however, in our case the buffer address was corrupted, so the contents are copied to the attacker-controlled address instead. Below is an example packet, where the shellcode contained in the 'Extra byte parameters' will be copied over to the victim machine.

Figure 5 – Example SMB message used to copy the payload to the victim machine.

Remote Arbitrary Read Primitive

The Arbitrary Read Primitive allows the attacker to remotely read the contents of any memory address from the target system. To use this primitive, the attacker must have successfully:

  • Taken over a connection, and established the write primitive, as explained above.
  • Leaked a valid TRANSACTION structure

As seen in Figure 6, we have a control transaction adjacent to a Victim#1 transaction for the write primitive, and a Victim#2 transaction. Message #1 uses the write primitive to corrupt the Victim#1 InData buffer address so that it points to the Victim#2 base address. That means that any message directed to the Victim#1 transaction will result in corrupting the Victim#2 transaction at the offset specified by the message's 'Data Displacement' field. Victim#2 is the leaked TRANSACTION structure, and its base address can be inferred from its contents.

Figure 6 - To construct the Arbitrary Read primitive the exploit overwrites the Victim#2 transaction OutData pointer. Stripes represent attacker-controlled input.

 

The rest of the messages contain the Transaction Secondary command (0x26) and use the same TID, PID and UID. Messages #2-5 target the Victim#1 transaction (MID=0) and overwrite specific fields of the Victim#2 transaction. The table below summarizes the modifications made by each message:

Msg# Offset Transaction Field Overwrite Value
5 0x001 BlockHeader.State 0x2
3 0x054 Timeout 0x40000023
2 0x088 OutData Attacker-specified address (e.g. 0xfffff88004b78150)
2 0x090 SetupCount 0x4
2 0x094 MaxSetupCount 0x0
2 0x098 ParameterCount 0x0
2 0x09c TotalParameterCount 0x0
2 0x0a0 MaxParameterCount 0x10
2 0x0a4 DataCount 0x0
2 0x0a8 TotalDataCount 0x0
2 0x0ac MaxDataCount 0x20000
4 0x0e3 Executing 0x0

Figure 7 - List of the Victim#2 transaction fields that are corrupted during an Arbitrary Read operation

 

As an example, below is the message sent to execute a remote read operation. The payload is specified in the 'Extra byte parameters', the target address in the 'Data Displacement' and the size in the 'Data Count' field.

Figure 8 - Example SMB message that overwrites the Victim#2 transaction.

 

The final message is a dummy packet with a non-zero MID targeting the Victim#2 transaction, sent to trigger the server response. In that response message, the contents of the memory address in the corrupted OutData pointer are copied out and sent back to the SMB client. An example of this type of message is seen below:

Figure 9 - Example SMB message demonstrating the final step of triggering the Arbitrary Read operation.

Code Execution

The techniques discussed in the previous sections operate on memory; the exploit still needs a way to alter control flow and invoke execution in a repeatable way. The exploit achieves this by corrupting pointers to SMB message handlers.

First, it uses the write primitive to overwrite entry 0xe of srv!SrvTransaction2DispatchTable with the address of the execution target. This is a dispatch table that contains pointers to SMB message handlers. This particular entry, for the TRANS2_SESSION_SETUP subcommand handler, is convenient since it is not implemented and thus not expected to be used by "normal" SMB traffic. Details on how this global pointer is discovered and leaked back to the attacker are presented in the next section.

Next, a message of type SMB_COM_TRANSACTION2 and subcommand set to TRANS2_SESSION_SETUP is sent to the victim, triggering the execution of the corrupted function pointer. The target transaction of this message is not important. An example packet is seen below.

Figure 10 - Example SMB message sent to trigger payload execution on the victim machine.

Putting It All Together

In this section, we walk through the exploit and see how the above building blocks combine to achieve remote kernel code execution.

Figure 11 - EternalSynergy attempting to leak a TRANSACTION structure

 

Figure 12 - Network traffic during the TRANSACTION leak phase.

 

In this phase, a TRANSACTION structure is leaked from the victim machine. This structure is used in two ways. First, it contains pointers (e.g. EndpointSpinLock) that serve as the base for discovering other useful addresses. Second, it is used as a Victim#2 transaction, since in order to build a Read primitive the attacker needs the details of a valid TRANSACTION structure. The method used to leak the pointer is similar to the one described in the Eternal Champion exploit.

Below are the contents of the SMB_COM_TRANSACTION message containing leaked pool memory. The leaked TRANSACTION structure starts at offset 0xb0. We can see that it contains, among other things, the transaction TID, PID, UID and OtherInfo. Also, pointers such as InData (offset 0x130) allow the attacker to determine the base memory address of the transaction.

0000   ff 53 4d 42 a0 00 00 00 00 98 03 c0 00 00 00 00  .SMB............
0010   00 00 00 00 00 00 00 00 00 08 37 ca 00 08 56 15  ..........7...V.
0020   12 00 00 00 04 00 00 00 c0 10 00 00 00 00 00 00  ................
0030   48 00 00 00 04 00 00 00 58 01 00 00 48 00 00 00  H.......X...H...
0040   b8 10 00 00 00 59 01 00 fc 84 36 3a 10 77 98 5a  .....Y....6:.w.Z
0050   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0060   00 01 02 03 46 72 61 67 00 00 00 00 00 00 00 00  ....Frag........
0070   20 51 00 00 00 00 00 00 00 00 00 00 00 00 00 00   Q..............
0080   02 01 01 00 46 72 65 65 00 00 00 00 00 00 00 00  ....Free........
0090   01 01 eb 03 4c 53 74 72 30 a1 07 00 83 fa ff ff  ....LStr0.......
00a0   8c 0e 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00b0   0c 02 8c 0e 00 00 00 00 d0 2d e6 00 83 fa ff ff  .........-......
00c0   90 bb e5 00 83 fa ff ff d0 2d 46 02 a0 f8 ff ff  .........-F.....
00d0   d0 8d 41 02 a0 f8 ff ff 48 00 56 02 a0 f8 ff ff  ..A.....H.V.....
00e0   48 c0 55 02 a0 f8 ff ff 00 00 00 00 00 00 00 00  H.U.............
00f0   00 00 02 00 00 00 00 00 68 b2 57 02 a0 f8 ff ff  ........h.W.....
0100   6d 39 00 00 ff ff ff ff 00 00 00 00 00 00 00 00  m9..............
0110   6c b2 57 02 a0 f8 ff ff 00 00 00 00 00 00 00 00  l.W.............
0120   6c b2 57 02 a0 f8 ff ff fc b2 57 02 a0 f8 ff ff  l.W.......W.....
0130   6c b2 57 02 a0 f8 ff ff fc bf 57 02 a0 f8 ff ff  l.W.......W.....
0140   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0150   00 0d 00 00 00 00 00 00 90 00 00 00 00 00 00 00  ................
0160   00 00 00 00 01 01 00 00 00 00 00 0837 ca 00 08  ............7...
0170   5a 15 00 00 00 00 00 00 00 00 00 00 00 00 00 00  Z...............
0180   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0190   01 01 01 00 00 00 00 00 00 00 00 00 00 00 00 00  ................

 

Figure 13 - The exploit tries to allocate neighboring control-victim transactions.

 

Figure 14 - Network packets during control-victim transaction allocation.

 

A series of SMB_COM_TRANSACTION messages are sent in order to allocate a pair of neighboring control-victim transactions. Specifically, "groom" packets contain SMB messages crafted to create the packet confusion – in other words, eligible to become the control transaction. "Bride" packets create transactions that are candidates for corruption, that is, victim transactions.

Figure 15 - The exploit triggers the message type confusion.

 

Figure 16 - Network packets showing the SMB_COM_WRITE_ANDX message that exploits the message type confusion.

 

The exploit takes control of a neighboring victim transaction to be used for the R/W primitives.

Figure 17 - The exploit locates and leaks the srv!SrvTransaction2DispatchTable memory address.

 

The read primitive is exercised multiple times in order to discover the location of the srv!SrvTransaction2DispatchTable global pointer, which is used to trigger shellcode execution.

 

Figure 18 - The exploit locates and leaks the scratch page.

 

Figure 19 - Network traffic during 3 remote read operations.

 

The read primitive is again exercised multiple times to discover the base of ntoskrnl.exe. The RWX memory found above is used as a scratch page, where shellcode is written and executed and return values are stored. This page exists due to an RWX section in ntoskrnl.exe. It is worth noting that ntoskrnl.exe on Windows 8.1 and above does not have an RWX section.

kd> ?? 0xfffff802846f4000-0xfffff80284483000
unsigned int64 0x271000

kd> !dh nt -s
SECTION HEADER #3
  RWEXEC name
    1000 virtual size
  271000 virtual address
       0 size of raw data
       0 file pointer to raw data
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
E80000A0 flags
         Code
         Uninitialized Data
         Not Paged
         (no align specified)
         Execute Read Write

 

Figure 20 - Final stage of the exploit where payload is copied and executed.

 

Figure 21 - SMB packets during this final stage where both Write and Read primitives are exercised.

 

This is when shellcode is copied and executed on the victim machine. First, using the write primitive, the exploit shellcode is copied to the scratch page. This shellcode is only a stub function that allocates memory in the pool, using nt!ExAllocatePoolWithTag. Then, a SMB_COM_TRANSACTION2 message is sent to execute the shellcode. The return value is saved at a fixed offset on the scratch page and leaked back to the attacker using the Read Primitive. We can see the stub function below:

;
; Retrieve the _KPRCB structure
;
fffff802`846f4000 65488b042520000000 mov   rax,qword ptr gs:[20h]

;
; Access the PPNxPagedLookasideList.AllocateEx member
;
fffff802`846f4009 4805b0080000       add     rax,8B0h
fffff802`846f400f 31c9               xor     ecx,ecx

;
; Set NumberOfBytes (0xe4b) for size argument
;
fffff802`846f4011 8b151e000000       mov     edx,dword ptr [fffff802`846f4035]
fffff802`846f4017 4883ec20           sub     rsp,20h

;
; Call nt!ExAllocatePoolWithTag
;
fffff802`846f401b ff10               call    qword ptr [rax]
fffff802`846f401d 4883c420           add     rsp,20h

;
; Check for errors
;
fffff802`846f4021 85c0               test    eax,eax
fffff802`846f4023 7407               je      fffff802`846f402c

;
; Save the allocated memory address
;
fffff802`846f4025 48890501000000     mov     qword ptr [fffff802`846f402d],rax
fffff802`846f402c c3                 ret

Lastly, the scratch page is cleared, the attacker-provided shellcode is written to the pool-allocated page and a message is sent to trigger execution.

Impact of Mitigations on Exploit

The techniques used in this exploit as-written are not directly applicable to newer platforms due to several kernel security improvements. In particular:

  1. Hypervisor-enforced Code Integrity (HVCI) prevents unsigned kernel pages from being executed, and prevents attackers from copying and executing code, even in the presence of RWX memory.
  2. Control Flow Guard (CFG) prevents invalid indirect function calls, such as calling a corrupted function pointer, a technique used in this exploit to trigger code execution.

Final Words

I’d like to thank the following people for their work during the initial investigation of CVE-2017-0143: Swamy Shivaganga Nagaraju, Nicolas Joly (MSRC Vulnerabilities & Mitigations Team) and Viktor Brange (Windows Offensive Security Research Team).

 

Georgios Baltas
MSRC Vulnerabilities & Mitigations Team


The NLB Deployment Reference – All you need to know to implement and deploy Microsoft Network Load Balancing


Hello everyone! Dante again, this time with some good stuff related to NLB. Based on our experience, the most common issue with NLB is that people are not well enough informed about the technology, so deployments and implementations often lack some mandatory settings, or don't take into consideration the most important factor in every network: bandwidth consumption.

In this blog note I will try to summarize every important aspect to consider at deployment time, and I will also share some third-party documentation that will help you troubleshoot the most common NLB-related problem scenarios we see in our support life.

If you would like to know how NLB works, I recommend the following blog note, which covers almost all the basics. I will not touch on those topics (well, maybe some of them), so please make sure you have them fresh in your head to better understand what I'll talk about 😊

What is the best configuration method for NLB?

As you all may know, NLB has three modes: Unicast, Multicast and Multicast with IGMP. The real answer to that question is: it depends. All three modes work very well if they are properly configured, but they can cause you a real headache if they are not.

From an NLB standpoint, the configuration is straightforward: install the role, open the console, create a new cluster, select the nodes, the NLB method, the ports and affinity, and that's all.

You also have some tools to prepare yourself for each method, based mainly on the MAC addressing each of them uses. The most important tool for this is NLB IP2MAC. This tool is available on any machine with NLB installed and is very easy to use: just type the command with the IP address you want to use as the virtual IP address, and you will get the list of MAC addresses each method would use. The command is as follows:

NLB IP2MAC <VIP of NLB>

As shown in the screenshot, this command easily gives us the MAC address for each mode. You can also do the math yourself (see the sketch after this list), considering the following:

1. The Unicast MAC always starts with 02-BF, identifying the NLB Unicast MAC, followed by the IP address converted, octet by octet, to hexadecimal.

2. The Multicast MAC always starts with 03-BF, the NLB Multicast MAC prefix, again followed by the IP address converted, octet by octet, to hexadecimal.

3. The IGMP Multicast MAC is different from the other two. It always uses 01-00-5E-7F as the MAC identifier for NLB IGMP Multicast, and its last two parts are the last two octets of the IP address.
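
A minimal PowerShell sketch of that math, if you want to derive the addresses without running NLB IP2MAC (illustrative only, following the three rules above):

# Derive the NLB MAC addresses for a given virtual IP.
$vipBytes = [System.Net.IPAddress]::Parse('10.0.0.1').GetAddressBytes()
$hex      = $vipBytes | ForEach-Object { $_.ToString('X2') }

'Unicast MAC:        02-BF-{0}-{1}-{2}-{3}' -f $hex             # 02-BF + IP in hex
'Multicast MAC:      03-BF-{0}-{1}-{2}-{3}' -f $hex             # 03-BF + IP in hex
'IGMP Multicast MAC: 01-00-5E-7F-{0}-{1}' -f $hex[2], $hex[3]   # fixed prefix + last two octets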

Now the problem comes when you choose an NLB method but your network infrastructure is not prepared for it. Let's go through each method.

Unicast

Unicast is the easiest way to configure NLB. Why? Because you don't need to do anything else in your network infrastructure... in theory. Since Unicast mode replaces the original hardware MAC address with an NLB Unicast MAC on each node of the cluster, physical switches will go crazy and start sending the NLB traffic out all of their ports to ensure the traffic reaches the correct nodes.

In some cases, you may want to have two NICs on your server. If you are running Windows 2008 or higher, you should consider this other blog note to ensure the traffic gets routed properly with this configuration.

Each NLB heartbeat packet contains about 1,500 bytes of data (only the heartbeats!). By default, each node sends a heartbeat packet every second and waits for 5 of those packets to be received before it considers a node converged. And that's per node, so multiply that traffic by the number of nodes you have in place. Pretty much data, huh? The sketch below shows how quickly it adds up.
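
A quick back-of-the-envelope calculation of that steady-state load, using the figures above (the node and port counts are illustrative):

$nodes          = 8        # cluster size
$heartbeatBytes = 1500     # approximate size of one heartbeat frame
# Every node emits one heartbeat per second, so the steady-state load is:
$bytesPerSecond = $nodes * $heartbeatBytes
'{0} nodes -> about {1:N1} KB/s of heartbeat traffic' -f $nodes, ($bytesPerSecond / 1KB)
# When a switch floods, it repeats that load out of every port:
$ports = 48
'Flooded across {0} ports -> about {1:N1} KB/s in total' -f $ports, ($ports * $bytesPerSecond / 1KB)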

Now imagine what would happen if you have a 24- or 48-port switch with only 2 NLB nodes connected. Since Unicast mode replaces the MAC on both nodes, to the switch both servers have the same MAC, and it will not be able to update its MAC address table properly, causing what is called a unicast flood. So you now have the client traffic coming to the NLB AND the NLB heartbeat traffic being sent to all the switch ports. What a mess, right?

To avoid this, you have two options. The first is to get a hub. Yes, a hub, I'm not kidding. I know it may be hard to get one nowadays, but this is the best way to isolate the traffic. With the hub connected to the switch, the switch will learn the NLB Unicast MAC (which is 02-BF-XX-XX-XX-XX) only on the port the hub is connected to, and the hub will replicate the traffic to the nodes (because that's what a hub is for, right?). So the flooding is done by the hub, and the other servers on the switch stay NLB-traffic-free. The second option is to create a separate VLAN for the NLB servers, while ensuring this VLAN is reachable from other subnets. This way you keep the traffic isolated to the switch ports assigned to that VLAN, without bothering the rest of the servers and so reducing congestion.

OK, but what happens if we have a virtual environment? The virtual switches in virtual environments usually prevent unicast flooding by default (which makes total sense), so you will need some extra settings in your virtual environment to make it compliant with Unicast.

If you're using the best virtual environment ever (Hyper-V, of course), you have it easy: go to the Hyper-V console, open the machine settings, select the NIC settings, and enable the "Enable spoofing of MAC addresses" checkbox. Click OK and you're done (or script it, as shown below).
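
The same setting can also be applied from PowerShell on the Hyper-V host (the VM name is hypothetical):

# Allow the guest to send frames from the NLB Unicast MAC.
Set-VMNetworkAdapter -VMName 'NLB-Node1' -MacAddressSpoofing On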

For VMware, it's a little more complex, but luckily we have the following VMware note, which explains how to resolve it on their side. Remember to contact them if you have questions about their documentation.

If you have another virtual environment (XenServer, VirtualBox, or any other) and are experiencing similar issues, go ahead and contact the vendor for guidance; they should be able to help you. Of course, you can also contact us, and we will help you figure out where to go next.

Multicast

This mode is my preferred one. It's kind of tricky, but once you understand it you'll realize it's the best approach. Be warned that your networking team will hate you, because you will make them work, and they will try to convince you not to configure this mode. Stay strong, it's worth it.

Multicast mode is different from Unicast but similar at the same time. The main difference is that the switch may drop the packets, or flood them, if it doesn't know where to send them. But with Multicast we have more efficient ways to avoid this. Let's see how.

When you configure Multicast mode, NLB does not change the hardware MAC address on the servers, so you can reach them through the same NIC at all times; instead, it assigns the NLB VIP an NLB Multicast MAC address with the format 03-BF-XX-XX-XX-XX. If you run IPCONFIG /ALL, you will not find this MAC on any network adapter. Since the MAC is not directly attached to the NICs, the switch cannot learn it, yet all traffic going to the NLB virtual IP address (VIP) carries this MAC as its destination. Because the switch cannot learn the NLB Multicast MAC directly, it will drop the packets (or flood all ports, just as in Unicast), the clients will have trouble reaching the NLB nodes, and you will also experience convergence issues. I've seen a mix of both happen several times.

This behavior usually causes confusion, because right after configuration it almost always starts working "without issues", but after some time you will notice the nodes go down in the console, or applications start having issues when going through the NLB VIP. All of these are symptoms of packet drops caused by the network not being prepared for NLB Multicast.

To avoid this, Microsoft has a mandatory requirement (yes, I used the word mandatory) for this mode: adding a static ARP entry and a static MAC address table entry to your network infrastructure. And that's where your network team will complain.

Again, from the NLB side it's very straightforward: install the role, open the NLB console, create a new cluster, select the nodes, the NLB method, the ports and affinity, and that's all. Nothing else is required from the Windows side for each NLB cluster. But in the case of Multicast and Multicast with IGMP, you need to manually "teach" the switches (and routers) where the NLB nodes are.

The following notes explain clearly what you need to do to get this properly configured, depending on your network infrastructure vendor. Bear in mind that we don't maintain these notes, so they may contain inaccurate information; please contact each vendor in case of any doubts. Our intention is to give you the most complete list of notes in a single place, but we are not accountable for what's included in them.

VMWare:

Sample Configuration - Network Load Balancing (NLB) Multicast mode over routed subnet - Cisco Switch Static ARP Configuration (1006525)

Cisco:

Catalyst Switches for Microsoft Network Load Balancing Configuration Example

Microsoft Network Load Balancing on Nexus 7000 Configuration Example

Juniper:

EX Series Switches and Microsoft Network Load Balancing (NLB) in multicast mode

[EX/QFX] Example - Workaround for using Microsoft network load balancing on EX4300 and QFX5100 switches

HPE:

HP Switch 5500/5500G - How to implement Microsoft Network Load Balancing using multicasts on the Switch 5500 and 5500G

Dell:

Dell Configuration Guide for the S4048–ON System 9.9(0.0)

Dell Networking Force10 switches and Microsoft Network Load Balancing

Huawei:

Example for Connecting a Device to an NLB Cluster (Using Multi-Interface ARP)

D-Link:

D-Link Layer 3 Switch Microsoft NLB in Multicast Mode Configuration Example Howto

Avaya:

Technical Configuration Guide for Microsoft Network Load Balancing

H3C:

05-Layer 3 - IP Services Configuration Guide

If your vendor is not listed here, please let us know in a comment so we can try to add it. We are not going to worry much about different models, since the same vendor usually shares the same configuration logic, but we want you to have at least an idea of how it should look. If you need assistance with a specific model, please contact your manufacturer.

Again, the disclaimer: we don't own any of those notes. If you have any doubts about their content, please contact the respective manufacturer for support or assistance.

Finally, to cover the virtualization part, you may be wondering how to configure the static MAC address table entries for a virtual machine. First we need to understand how the virtual infrastructure is composed. In enterprises, you usually have a cluster of servers acting as hypervisors (Hyper-V, VMware, etc.) which share the same VMs and have high availability features to move the VMs between hosts and avoid service disruption. How can we ensure the NLB VMs receive the correct traffic? Here comes another complaint from the networking team: to ensure the VMs receive the traffic no matter which virtual host they are on, all the switch ports connected to all the virtual hosts must be included in the static MAC address table entry for the NLB Multicast MAC. So, if you have 8 hosts with 2 NICs each connected to the network, you should have 16 interfaces assigned to the NLB Multicast MAC address in the switches' MAC address tables. This way you can do Live Migration (or vMotion in the case of VMware) without worrying about which host the virtual machine lands on.

Multicast with IGMP

Finally, we have the last NLB mode: IGMP Multicast (or Multicast with IGMP). This mode is completely dependent on your network infrastructure, since you need IGMP-capable switches for it to work. It is basically the same as Multicast, but with an automatic method for NLB traffic detection based on IGMP Multicast traffic.

When you enable this method, the NLB nodes start sending IGMP Join messages to the multicast address 239.255.XX.XX, where the Xs correspond to the last two octets of the NLB virtual IP. In our screenshot example, for the IP 10.0.0.1, the multicast address used by the cluster nodes for IGMP traffic would be 239.255.0.1 (see the one-liner below).
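
The mapping is easy to reproduce yourself; a short PowerShell sketch using the same example VIP:

$octets = '10.0.0.1'.Split('.')
'IGMP join group: 239.255.{0}.{1}' -f $octets[2], $octets[3]   # -> 239.255.0.1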

Did I say you need IGMP-capable hardware on your network for this method to work? Yes, because your switches must be able to snoop the traffic, find the ports sending the IGMP traffic, automatically configure their MAC address tables accurately, and send the traffic to the corresponding ports.

Some of the notes listed above in the Multicast section have the correct configuration parameters for their devices, but in case you need confirmation your equipment is capable of this mode, please contact your Hardware vendor for confirmation.

In summary:

  •           Unicast mode is the easiest way to configure NLB in simple environments. You don't need to do much to maintain the cluster, but be aware that this configuration may cause network performance issues on other systems due to traffic flooding.
  •           Multicast is the best method (in my opinion) because you specifically direct the traffic to the nodes that need it and don't impact nearby servers. Bandwidth is used efficiently and there is a low risk of performance issues as collateral damage to other systems. On the other hand, it needs a lot more network equipment knowledge and involvement, but good things are not usually free, right?
  •          Multicast with IGMP is also a good choice if you have capable networking equipment. It has the benefits of Multicast with the added improvement that the switches configure themselves if IGMP Snooping is enabled. The disadvantage is, again, a potential limitation of the network equipment: your infrastructure may not support IGMP Snooping.

And with that we have reached the end of today's blog. I hope you enjoyed the explanation and, of course, I hope this helps you get your NLB back on track!

I would like to say a big THANKS to Daniel Mauser, who helped me get this piece of knowledge here for you. We'll miss you in the team, Daniel!!

See you next entry!

New Windows Server preview release available to Windows Insiders!


On June 15th we announced some very exciting news: Windows Server will now have more frequent releases, providing customers who are innovating quickly an opportunity to take advantage of new operating system capabilities at a faster pace, both in applications (particularly those built on containers and microservices) and in the software-defined datacenter.

Today we’re extremely happy to announce that the very first preview release of this new cadence is available for Windows Insiders! To access the new Windows Server preview release, you can register for the Windows Insiders for Business* program. If you don’t have an Office 365 or an Azure AD account, you can also register for the Windows Insiders program with your Microsoft ID. New preview builds will be available on a regular basis, and the latest Windows Server build is always available for download. Matching Windows Server container images will be available via the Docker Hub. Get more information about Windows Server containers and Insider builds.

Watch the Windows Server blog over the coming weeks as we’ll be blogging in detail about the new features and capabilities coming in these previews and ultimately available for production use in the Semi-annual Channel release this fall.

It’s all about your feedback!

The most important part of a frequent release cycle is to hear what’s working and what needs to be improved, so your feedback is extremely valued. Don’t forget to register a Windows 10 device with the Windows Insider Program. That way you’ll be able to provide feedback via the Feedback Hub App. In the app, choose the Server category and then the appropriate subcategory for your feedback. Please indicate what edition and build number you are providing feedback on.

You can also send your feedback via the Windows Server Insiders space on the Tech Communities.

We’re very excited about this release and really looking forward to hear what amazing things you’re doing with the Semi-annual Channel releases of Windows Server!

Follow us on Twitter @MSHybridCloud!

* If you signed up for Windows Insiders for Business using an AAD account, there is a temporary issue with access to the Windows Server Download page using AAD accounts. If you registered using an MSA account at the Windows Insider program, your MSA account may be used to access the page and to download builds until this is resolved.

How Microsoft EMS can support you in your journey to EU GDPR compliance – Part 5


Protecting data at the device and app level with Microsoft Intune

Over the past month, the Enterprise Mobility + Security (EMS) team has been blogging about Microsoft’s broad commitment to making sure our products and services comply with the GDPR and making sure that you – our customers – understand how our technologies can assist you with your GDPR compliance efforts. We’ve outlined the four key steps that we recommend you take to get started:

  1. Discover: Identify what personal data you have and where it resides.
  2. Manage: Govern how personal data is used and accessed.
  3. Protect: Establish security controls to prevent, detect, and respond to vulnerabilities and data breaches.
  4. Report: Execute on data requests, report data breaches, and keep required documentation.

Microsoft Enterprise Mobility + Security delivers multiple capabilities that provide you with crucial advantages in each step. This is the fifth blog in a series about those capabilities. With this blog, we will focus on the capabilities delivered by Microsoft Intune to help you manage the use and access of data and to help in the protection of that data – both key in fulfilling GDPR requirements.

Manage and protect your data with Intune

Organizations that use Intune have access to sophisticated mobile device management, mobile application management, and PC management capabilities from the cloud. These capabilities allow you to provide your users with access to company applications, data, and resources from virtually anywhere on almost any device in a way that helps you to keep company data (including data that may contain personal and sensitive information) secure.

These capabilities are critical if you consider how many companies deal with personal and sensitive data as a standard part of doing business. Take, for example, an automaker who maintains a record of every customer who has purchased a car in recent years. The automaker likely does so in files that include customer names, emails, identifier numbers, addresses, credit scores, etc. Employees of the automaker may regularly share personal data like this among themselves as they model future sales figures or try to determine how to build better cars based on customer feedback – and they may be accessing this data on their mobile devices. An organization using Intune can create a secure container for this file with policies that protect company data at the device and app level. That container can be wiped at any moment if necessary. Intune also has tools you can use to inform your end-users about terms and conditions and about which data is collected and visible on managed devices.

This unique functionality can help you meet the GDPR expectation that personal data is adequately and appropriately protected, given the circumstances and risks. The ability to control this data is enhanced when you include Azure Information Protection to encrypt the data and Cloud App Security to ensure that it’s stored appropriately in a cloud app. With all this, EMS is well suited to enable the data protection demands of GDPR.

 

End-user transparency

Before we go into the specifics of how Intune helps you protect company data, it’s worth stating how strongly we believe in end-user empowerment. This is exemplified by the productivity experience we deliver to end users, and includes making sure that end users have full visibility into what data the IT team can access and affect in managed-device scenarios.

With Intune, you can provide users with access to your company’s privacy statement, as well as present your own custom terms and conditions to inform them of your data processing activities and data collection. Once these elements of your IT practices are defined, you can embed these notifications into the enrollment process, to inform end users about the implications of their enrollment.

Controlling access and protecting data at the device level

Intune’s mobile device management capabilities and device compliance policies ensure that devices attempting to access your organization’s data or apps (which may contain personal and sensitive information) first meet your team’s security requirements and standards. Administrators can set a number of device compliance policies, such as enforcing device enrollment, requiring domain join, requiring strong passwords, and automatic encryption. These policies may also be set to require that the device operating system (as well as key apps) be current and have the latest updates installed before access is granted.

You can use the compliance policy settings in Microsoft Intune to evaluate the compliance of employee devices against a set of rules you create. In cases where devices don't meet the conditions you set up in the policies, Intune can guide the end user though enrolling the device (if it’s not already enrolled) and fixing the compliance issue.

To understand how robust these compliance policies are, consider these four ways Intune enforces advanced security polices for mobile devices, apps, and PCs:

  1. Intune delivers comprehensive settings management for mobile devices and PCs – including iOS, Android, Windows, and macOS.
  2. It provides the ability to deny specific applications or URL addresses from being accessed on mobile devices and PCs.
  3. It enables the execution of remote actions, like passcode reset, device lock, and remote wipe.
  4. It enables the enforcement of strict “lock down” policies for Supervised iOS devices, Android devices using Kiosk Mode, and Windows 10 devices using Assigned Access.

App protection policies give you granular control of what happens after data is accessed

Once mobile apps are granted access to company data, it's critical to control what happens after the data is accessed. This is where Intune's mobile application management capabilities and app protection policies have an impact. These policies protect data at the app level, including app-level authentication, copy/paste control, and save-as control. Intune's application policies give you fine-grained control of what your users can do with the data they access in apps – and this gives you extraordinary power to secure your data.

Also, because Intune leverages the user’s identity in its approach, it can enable multi-identity usage of apps – e.g., where app policies are intelligent enough to only apply to data that’s applicable to corporate accounts.

It’s also important to note that Intune’s application management capabilities enable granular control of the data within Microsoft Office mobile apps on iOS and Android devices, and it helps enforce conditional access policies to Exchange Online, Exchange on-premises, SharePoint Online, and Skype for Business.

Summary

Six key ways Intune supports your GDPR compliance:

  1. You can enable your employees to securely access company information using mobile apps, as well as ensure that your data remains protected after it’s been accessed via restrictions on actions like copy/cut/paste/save-as.
  2. You can apply app protection policies to protect data with or without device enrollment.  This allows you to protect company information even on unmanaged devices.
  3. Intune applies mobile application management policies to your existing line-of-business (LOB) applications using the Intune App Wrapping Tool without making code changes.
  4. It enables users to securely view content on devices within your managed app ecosystem using the Managed Browser and Azure Information Protection Viewer.
  5. You can encrypt company data within apps using the highest level of device encryption provided by iOS and Android.
  6. It allows you to protect your company data by enforcing PIN or credential policies.

With Intune, you can also selectively remove company data (apps, email, data, management policies, networking profiles, and more) from user devices and apps while leaving personal data intact.

Intune’s Mobile Device Management and Mobile App Management capabilities help you protect access to data that may be considered as personal or sensitive as defined by the GDPR, and it ensures that your data remains protected even after it’s been accessed by users.

GDPR is great news for people demanding more digital privacy, and Intune as part of EMS is a great tool for the organizations adjusting the way they gather, use, and protect data.

Manage Office 365 Distribution Groups via Excel spreadsheet or CSV


A consultant friend of mine posed an interesting question to me this week--one of his customers wanted to be able to let his users administer a cloud-managed Office 365 distribution group by uploading a CSV or Excel spreadsheet.  From an administration perspective, I have done an incredible amount of directory management tasks using CSVs, so this didn't seem like that difficult of a task.

Handing control of it over to a user, however--that seemed daunting.  Thus began my first real foray into PowerShell forms apps.  Yes, I'm way behind, mainly because I haven't had a need to do this.  I picked up a copy of PowerShell Studio and got to work learning how to build forms apps.

Of course, as with most projects, as soon as I'm halfway through, I come up with more ideas, so I'm going to keep tinkering with this until I have something that I think is really cool.  In the meantime, I'd love to hear what you think about it and ideas for features or changes.

The basic idea of the tool is this:

  1. Launch the tool, and go to File | Connect to Office 365.
  2. Enter credentials when prompted.  After you have successfully entered a credential, it populates the Username: area with the identity of the logged-in credential.
  3. Click the Refresh Group List button to retrieve a list of groups.  It uses the "ManagedBy" property on distribution groups to determine what groups you have the ability to manage (since it's designed for use by end-users who don't have administrative privileges).
  4. Click File | Open File... and browse to either a CSV or XLS(x) file that has at least one column with the header EmailAddress.
  5. The Filename field has been populated and the Members area shows the number of lines in the file.
  6. Click a group name in the Groups that you can manage list, and then click Refresh Members button.
    The Current Group Members list is populated by running Get-DistributionGroupMember on the group selected in the Groups that you can manage column.  The Users to Remove and Users to Add are populated via hash tables--by converting both the input list file and the results of Get-DistributionGroupMember into hash tables, I can quickly perform a -notin in both directions and export those lists to new arrays:
    # Build the lists of users
    # $ExistingGroupMembers will contain all of the members of the currently selected group, represented by $SelectedItem
    # $NewGroupMembers will contain all of the users imported from the CSV/XLS file
    
    $ExistingGroupMembers = Get-DistributionGroupMember $SelectedItem
    $ExistingGroupMembersHash = @{ }
    $NewGroupMembers = Import-Csv $FileName
    $NewGroupMembersHash = @{ }
    
    # Build the ExistingGroupMembersHash table
    ForEach ($obj in $ExistingGroupMembers)
        {
            $ExistingGroupMembersHash[$obj.PrimarySmtpAddress] = $obj.PrimarySmtpAddress
        }
    
    # Build the NewGroupMembersHash table    
    ForEach ($obj in $NewGroupMembers)
        {
            $NewGroupMembersHash[$obj.EmailAddress] = $obj.EmailAddress
        }
    # Users to Remove
    [array]$UsersToRemove = $ExistingGroupMembersHash.Values | ? { $_ -notin $NewGroupMembersHash.Values }
        
    # Users to Add
    [array]$UsersToAdd = $NewGroupMembersHash.Values | ? { $_ -notin $ExistingGroupMembersHash.Values }
    
    
  7. The result is that $UsersToRemove has the list of users that were in Get-DistributionGroupMember but not in the import file $FileName, and $UsersToAdd has the users in $FileName that were not in the results of Get-DistributionGroupMember.
  8. Click the Update Group Membership button to run the Remove-DistributionGroupMember and Add-DistributionGroupMember operations on the group, adding or removing the appropriate names (see the sketch after this list).
  9. Click File | Exit to log out of the PowerShell session and exit the application.
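For reference, the update step boils down to two standard Exchange Online cmdlets. This is a minimal sketch reusing the variables from the snippet above, not the tool's actual internals:

# Apply the computed membership changes to the selected group.
foreach ($member in $UsersToRemove)
{
    Remove-DistributionGroupMember -Identity $SelectedItem -Member $member -Confirm:$false
}

foreach ($member in $UsersToAdd)
{
    Add-DistributionGroupMember -Identity $SelectedItem -Member $member
}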

Ideas that I'm kicking around:

  • Exporting group membership (in case you need to go back to one)
  • Managing Office 365 Groups (Unified Groups)
  • Adding / removing users in the list boxes
  • Allowing for header-less files

I look forward to hearing your comments and ideas. 🙂

You can download the tool at https://gallery.technet.microsoft.com/Office-365-Distribution-756ebab7.

Check out the new and expanded top product questions page for presales and deployment assistance


You have questions, and we have answers! Explore the top trending questions that partners like you have recently been asking during the technical presales and deployment phases of their customer projects. Additionally, we’ve expanded this self-help resource page to include new questions across all products areas, so be sure to check out the latest wave of updates.

Top 5 trending questions for July:

  1. Where can I get more information on how Azure helps my business?
  2. How do I migrate from IMAP/Staged/Cutover/Hybrid, on-premises system or third-party solution and data to Exchange Online?
  3. How do I best sell Power BI and integrate it into my own or my customers' solutions and applications?
  4. How can I deploy Azure?
  5. Where can I get more information on Azure backup and disaster recovery?

Find answers to the trending questions and bookmark for future reference during the presales and deployment phases of your customer projects: http://aka.ms/TopProductQuestions.

Don’t forget to leverage the full suite of technical presales and deployments services, available as part of your MPN technical benefits. Discover the technical trainings, one-on-one consultations and chat options available by visiting http://aka.ms/TechnicalServices.

 

Core Network Stack Features in the Creators Update for Windows 10


By: Praveen Balasubramanian and Daniel Havey

This blog is the sequel to our first Windows Core Networking features announcements post.  It describes the second wave of core networking features in the Windows Redstone series.  The first wave of features is described here: Announcing: New Transport Advancements in the Anniversary Update for Windows 10 and Windows Server 2016.  We encourage the Windows networking enthusiast community to experiment and provide feedback.  If you are interested in Windows Transport please follow our Facebook feedback and discussion page: @Windows.10.Data.Transport.

 

TCP Improvements:

TCP Fast Open (TFO) updates and server side support

In the modern age of popular web services and e-commerce, latency is a killer when it comes to page responsiveness. We're adding support in TCP for TCP Fast Open (TFO) to cut down on round trips that can severely impact how long it takes for a page to load.  Here's how it works: TFO establishes a secure TFO cookie in the first connection using a standard 3-way handshake.  Subsequent connections to the same server use the TFO cookie to connect without the 3-way handshake (zero RTT).  This means TCP can carry data in the SYN and SYN-ACK.

What we found together with others in the industry is that middleboxes are interfering with such traffic and dropping connections. Together with our large population of Windows enthusiasts (that's you!), we conducted experiments over the past few months, and tuned our algorithms to avoid usage of this option on networks where improper middlebox behavior is observed.  Specifically, we enabled TFO in Edge using a checkbox in about:flags.

To harden against such challenges, Windows automatically detects and disables TFO on connections that traverse through these problematic middleboxes.  For our Windows Insider Program community, we enabled TFO in Edge (About:flags) by default for all insider flights in order to get a better understanding of middlebox interference issues as well as find more problems with anti-virus and firewall software.  The data helped us improve our fallback algorithm which detects typical middlebox issues.  We intend to continue our partnership with our Windows Insider Program (WIP) professionals to improve our fallback algorithm and identify unwanted anti-virus, firewall and middlebox behavior.  Retail and non WIP releases will not participate in the experiments.  If you operate infrastructure or software components such as middleboxes or packet processing engines that make use of a TCP state machine, please incorporate support for TFO.  In the future, the combination of TLS 1.3 and TFO is expected to be more widespread.

The Creators Update also includes a fully functional server side implementation of TFO. The server side implementation also supports a pre-shared key for cases where a server farm is behind a load balancer. The shared key can be set by the following knob (requires elevation):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpFastopenKey /t REG_BINARY /f /d 0123456789abcdef0123456789abcdef
netsh int tcp reload

We encourage the community to test both client and server side functionality for interop with other operating system network stacks. The subsequent releases of Windows Server will include TFO functionality allowing deployment of IIS and other web servers which can take advantage of reduced connection setup times.
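If you want to check the client-side state while testing interop, the global TCP parameters can be displayed with netsh; on builds where TFO is present it appears in the output (run from an elevated prompt):

netsh int tcp show global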

 

Experimental Support for the High Speed CUBIC Congestion Control Algorithm

CUBIC is a TCP Congestion Control (CC) algorithm featuring a cubic congestion window (Cwnd) growth function.  CUBIC is a high-speed TCP variant that uses the amount of time since the last congestion event, instead of ACK clocking, to advance the Cwnd.  In large BDP networks the CUBIC algorithm ramps up throughput much faster than ACK-clocked CC algorithms such as New Reno TCP.  There have been reports that CUBIC can cause bufferbloat in networks with unmanaged queues (LTE and ADSL).  In the Creators Update, we are introducing a Windows native implementation of CUBIC.  We encourage the community to experiment with CUBIC and send us feedback.

The following commands can be used to enable CUBIC globally and to return to the default Compound TCP (requires elevation):

netsh int tcp set supplemental template=internet congestionprovider=cubic
netsh int tcp set supplemental template=internet congestionprovider=compound
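To verify which provider is currently active per template, you can also query the in-box NetTCPIP PowerShell module; a quick check (the property name is taken from the Windows 10 module and shown here as an assumption):

Get-NetTCPSetting | Select-Object SettingName, CongestionProvider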

*** The Windows implementation of Cubic does not have the "Quiescence bug" that was recently uncovered in the Linux implementation.

 

Improved Receive Window Autotuning

TCP autotuning logic computes the "receive window" parameter of a TCP connection as described in TCP autotuning logic.  High-speed and/or long-delay connections need this algorithm to achieve good performance characteristics.  The takeaway from all this is that using the SO_RCVBUF socket option to specify a static value for the receive buffer is almost universally a bad idea.  For those of you who choose to do so anyway, please remember that calculating the correct size for TCP send/receive buffers is complex and requires information that applications do not have access to.  It is far better to allow the Windows autotuning algorithm to size the buffer for you.  We are working to identify such suboptimal usage of the SO_RCVBUF/SO_SENDBUF socket options and to convince developers to move away from fixed window values.  If you are an app developer and you are using either of these socket options, please contact us.

In parallel to our developer education effort we are improving the autotuning algorithm.  Before the Creators Update the TCP receive Window autotuning algorithm depended on correct estimates of the connection's bandwidth and RTT.  There are two problems with this method.  First, the TCP RTT estimate is only measured on the sending side as described in RFC 793.  However, there are many examples of receive heavy workloads such as OS updates etc.  The RTT estimate taken at the receive heavy side could be inaccurate.  Second, there could be a feedback loop between altering the receive window (which can change the estimated bandwidth) and then measuring the bandwidth to determine how to alter the receive window. 

These two problems caused the receive window to constantly vary over time.  We eliminated the unwanted behavior by modifying the algorithm to use a step function to converge on the maximum receive window value for a given connection.  The step function algorithm results in a larger receive buffer size; however, the advertised receive window size is not backed by a non-paged pool memory allocation, and system resources are not used unless data is received and queued, so the larger size is fine.  Based on experimental results, the new algorithm adapts to the BDP much more quickly than the old algorithm.  We encourage users and system administrators to also take note of our earlier post: An Update on Windows TCP AutoTuningLevel.  This should clear up misconceptions that autotuning and receive window scaling are bad for performance.

TCP stats API

The Estats API requires elevation and enumerates statistics for all connections, which can be inefficient, especially on busy servers with lots of connections.  In the Creators Update we are introducing a new API called SIO_TCP_INFO.  SIO_TCP_INFO allows developers to query rich information on individual TCP connections using a socket option.  The SIO_TCP_INFO API is versioned, and we plan to add more statistics over time.  In addition, we plan to add SIO_TCP_INFO to .NET NCL and HTTP APIs in subsequent releases.

The MSDN documentation for this API will be up soon and we will add a link here as soon as it is available.

IPv6 improvements

The Windows networking stack is dual stack and supports both IPv4 and IPv6 by default since Windows Vista.  Over the Windows 10 releases, we are actively working on improving the support for IPv6.  The following are some of the advancements in Creators Update.

RFC 6106 support

The Creators Update includes support for RFC 6106 which allows for DNS configuration through router advertisements (RAs).  RDNSS and DNSSL ND options contained in router advertisements are validated and processed as described in the RFC.  The implementation supports a max of 3 RDNSS and DNSSL entries each per interface.  If there are more than 3 entries available from one or more routers on an interface, then entries with greater lifetime are preferred.  In the presence of both DHCPv6 and RA DNS information, Windows gives precedence to DHCPv6 DNS information, in accordance with the RFC.

In Windows, the lifetime processing of RA DNS entries deviates slightly from the RFC.  In order to avoid implementing timers to expire DNS entries when their lifetime ends, we rely on the periodic Windows DNS service query interval (15 minutes) to remove expired entries, unless a new RA DNS message is received, in which case the entry is updated immediately.  This enhancement eliminates the complexity and overhead of kernel timers while keeping the DNS entries fresh.

The following command can be used to control this feature (requires elevation):
netsh int ipv6 set interface <ifindex> rabaseddnsconfig=<enabled | disabled>

Flow Labels

Before the Creators Update, the FlowLabel field in the IPv6 header was set to 0.  Beginning with the Creators Update, outbound TCP and UDP packets over IPv6 have this field set to a hash of the 5-tuple (Src IP, Dst IP, Src Port, Dst Port, and protocol).  Middleboxes can use the FlowLabel field to perform ECMP for unencapsulated native IPv6 traffic without having to parse the transport headers.  This makes load balancing and flow classification in IPv6-only datacenters more efficient.

The following knob can be used to control this feature (requires elevation; enabled by default):
netsh int ipv6 set global flowlabel=<enabled | disabled>

ISATAP and 6to4 disabled by default

IPv6 continues to see uptake and IPv6 only networks are no longer a rarity. ISATAP and 6to4 are IPv6 transition technologies that have been enabled by default in Windows since Vista/Server 2008. As a step towards future deprecation, the Creators Update will have these technologies disabled by default. There are administrator and group policy knobs to re-enable them for specific enterprise deployments. An upgrade to the Creators Update will honor any administrator or group policy configured settings. By disabling these technologies, we aim to increase native IPv6 traffic on the Internet. Teredo is the last transition technology that is expected to be in active use because of its ability to perform NAT traversal to enable peer-to-peer communication.

Improved 464XLAT support

464XLAT was originally designed for mobile scenarios since mobile operators are some of the first ISPs with IPv6 only networks.  However, some apps are not IP-agnostic and still require IPv4 support.  Since a major use case for mobile is tethering, 464XLAT should provide IPv4 connectivity to tethered clients as well as to apps running on the mobile device itself. Creators Update adds support for 464XLAT on desktops and tablets too. We also enabled support for TCP Large Send Offload (LSO) over 464XLAT improving throughput and reducing CPU usage.

Multi-homing improvements

Devices with multiple network interfaces are becoming ubiquitous.  The trend is especially prevalent on mobile devices, but 3G and LTE connectivity is becoming common on laptops, hybrids and many other form factors.  For the Creators Update we collaborated with the Windows Connection Manager (WCM) team to make the WiFi-to-cellular handover faster and to improve performance when a mobile device is docked with wired Ethernet connectivity and then undocked, causing a failover to WiFi.

Dead Gateway Detection (DGD)

Windows has always had a DGD algorithm that automatically transitions connections over to another gateway when the current gateway is unreachable, but that algorithm was designed for server scenarios.  For the Creators Update we improved the DGD algorithm to respond to client scenarios, such as switching back and forth between WiFi and 3G or LTE connectivity.  DGD signals WCM whenever transport timeouts suggest that the gateway has gone dead.  WCM uses this data to decide when to migrate connections over to the cellular interface.  DGD also periodically re-probes the network so that WCM can migrate connections back to WiFi.  This behavior only occurs if the user has opted in to automatic failover to cellular.

Fast connection teardown

In Windows, TCP connections are preserved for about 20 seconds to allow for fast reconnection in the case of a temporary loss of wired or wireless connectivity.  However, in the case of a true disconnection such as docking and undocking this is an unacceptably long delay.  Using the Fast Connection Teardown feature WCM can signal the Windows transport layer to instantly tear down TCP connections for a fast transition.

Improved diagnostics using Test-NetConnection

Test-NetConnection (alias tnc) is a built-in PowerShell cmdlet that performs a variety of network diagnostics.  In the Creators Update we have enhanced this cmdlet to provide detailed information about both route selection and source address selection.

The following command, when run elevated, describes the steps used to select a particular route per RFC 6724.  This can be particularly useful on multi-homed systems or when there are multiple IP addresses on the system.

Test-NetConnection -ComputerName "www.contoso.com" -ConstrainInterface 5 -DiagnoseRouting -InformationLevel "Detailed"
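For everyday checks the cmdlet is also useful without the routing switches, for example to test TCP reachability of a specific port:

Test-NetConnection -ComputerName "www.contoso.com" -Port 443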

Turn Your Great Idea Into the Next Big Thing [Updated 7/14]


(This article is a translation of Turn Your Great Idea Into the Next Big Thing, posted on the Microsoft Partner Network blog on July 12, 2017. Please refer to the original page for the latest information.)

 

At Microsoft Inspire this week, many sessions have focused on the "what" behind business success. In his session, Satya Nadella (in English) explained what Microsoft is building so that every person and every organization on the planet can achieve more. Judson Althoff (in English) introduced what partners and customers are doing together to drive digital transformation across every industry worldwide.

 

What I want to focus on here is the "how": how do you make it happen, and how do you turn a great idea into business growth?

Coming up with a great idea is only the beginning. Once you have one, you need to define a sustainable, profitable business model to realize it, and build a go-to-market plan to reach customers. To keep a great idea from ending as just an idea, the thinking and action you put in beyond that are what matter.

There is no single effective approach; it differs for every partner. Some partners focus on building a services business, others on developing a channel, and others take the traditional ISV approach. But even though the right business model differs from partner to partner, I believe some principles are broadly shared. Here are the ones I keep in mind.

 

Understand your target customers and market

Technical knowledge alone only gets you so far. What matters more is a deep understanding of your customers' situation. To be seen as a trusted advisor, you need to know what customers care about, the problems they want to solve, and the innovations about to arrive in their industry. Hitachi Consulting, a Microsoft partner, brought in industry experts who can advise against those real needs. With experts who know the field, you can make a very compelling proposal to target customers. Watch the video below.

 

Some partners also make good use of customer advisory boards and local industry groups to learn customers' real needs. One partner I met a few months ago served on the boards of a local zoo and museum to connect with the local business community and build relationships. Partners like these understand their customers' markets beyond IT and can build close relationships.

 

Sharpen your technical skills

A vision drawn from an idea is still not enough. What matters is realizing it. To realize the vision and solve the challenges that stand in the way of transformational projects, you need to find the right people with the technical skills and capabilities. You cannot build such a talent pool all at once. Making continuous training possible needs to be a business priority, because innovation moves far faster than it used to and you need to keep up.

One way to gain expertise is through partnerships. For example, you can use resources such as the Microsoft Partner Community, IAMCP, and Dynasource to find partners that fit a specific project or need. By going to market together with them, you can also grow your own business in new markets and industries. Today there are many kinds of customers, each with different needs, and almost no company can cover them all alone. But by working with partners who each bring expertise in their own field, you can build your own partner ecosystem.

 

Build repeatable, scalable solutions

Make your delivery and operating models scalable. One benefit of cloud technology is lower delivery cost, but if you build a custom solution that only one customer can use, you lose that cost advantage. If your solution is repeatable, every project you deliver reduces your cost of sale and effort, and increases profitability.

Build repeatability in deliberately. Publishing a packaged app to a marketplace is one example, but so is developing a repeatable service-offering practice, building a center of excellence for managed services, or building your own channel to scale the solution.

 

Keep pushing innovation

In Monday's session (in English), Satya Nadella described today's technology paradigm shifts. Times are changing fast. Three years ago I was explaining to partners what cloud technology was; today the cloud is almost taken for granted as part of a business plan. Last year Satya named AI, bots, and IoT (the Internet of Things) as the next trends. The companies that adopted those technologies early are already working on the next big project.

Next year, quantum computing and DNA storage could easily be the trends.

In other words, keeping room in your business for innovation and learning will only become more important.

I believe partners can achieve great success in the year ahead. Along the way you will face all kinds of challenges. Partners and Microsoft need to learn from each other and think with a broad perspective. Today's paradigm shift embeds technology throughout the digital products companies sell, which gives us the chance to seize opportunities beyond the IT field.

To seize this opportunity, we need to work together to create big ideas.

One partner putting this into practice is Dodo Pizza. Watch the video below.

Please share your great ideas, and make full use of the Microsoft Partner Community to connect with other partners.

Update for Surface Pro (13 July 2017)


Today we've released a Surface System Aggregator firmware update for the Surface Pro. This firmware update revises system power reporting.

For Surface Pro, the updates are available in MSI and ZIP format from the Surface Pro Drivers and Firmware page in the Microsoft Download Center. Click Download to download the following file:

•    SurfacePro_Win10_15063_1706007_0.msi

For your reference, here is the driver version that is updated in this release and the improvement it provides:

Surface Pro

•    Surface System Aggregator (v233.1763.257.0) revises system power reporting

How to Collect Boot Performance Data


Introduction

When PFE investigates boot performance, it is a prerequisite that the PC manufacturer supports the installed OS.

Recently we have seen more and more cases where Windows 10 is installed on a machine that is not a supported model (including per-version support) and performance is poor, or where downgrade rights exist but the downgraded OS is not supported by the manufacturer and performance is poor. So first confirm that the OS and its version are supported by the PC manufacturer and, if they are, that the drivers have been updated correctly.

Example: a PC released in 2013 is not supported for Windows 10 version 1703 or later and no drivers are provided → it is out of scope for investigation.

Windows Vista and later use the boot mechanism shown here, and the times given in the analysis report start from [Kernel Init], which is where the Windows Performance Toolkit can begin capturing data (the name XPERF is rarely used these days).

Also, on HDDs there is a ReadyBoot (including Prefetch) boot optimization process. This requires that at least 6 reboots have taken place; if they have not, boot optimization may not be complete. (On SSDs it is disabled because they are already fast enough.)

 

How to Think About Performance

A baseline is essential for a performance investigation. On what basis do you judge that boot performance is bad? Yardsticks such as "compared with the previous OS", "compared with another PC", "somehow", or "it feels slow" do not allow an accurate judgment.

Therefore, measure the change in performance on the same PC with the same OS version.
As an example, suppose you have bought a PC preinstalled with Windows 10.
In that case, collect data as follows:
1. Power on the PC, go through OOBE (without joining a domain at this point), and reach the desktop.
→ This is normally the baseline; collect data 3 times* here.
2. Then join the domain, install applications, and bring the PC to the state it will be used in at your company.
→ Assuming this is the point where it feels slow, collect data 3 times* here.

If 1 and 2 differ by only a few seconds, performance has not changed; if the data shows a clear difference between 1 and 2, that will become apparent.

If the machine already feels slow at stage 1, it becomes an inquiry to the PC manufacturer. In some cases the selected model simply does not meet the customer's expectations in the first place.

* We collect the data 3 times to allow for variance. Depending on the environment, the 3 runs may not produce similar data and can differ widely, and at stage 2 performance can drop so much that the capture itself fails, so we collect 3 times just in case.

Data Collection Procedure

Installing the Windows Performance Toolkit

On the PC that will be used for data analysis, download the Windows ADK from the following location and install the Windows Performance Toolkit included in it.

  • As of 2017/07/14, version 1703 is the latest.
    https://developer.microsoft.com/ja-jp/windows/hardware/windows-assessment-deployment-kit
  • The installer lets you choose which components to install; select only "Windows Performance Toolkit".
  • When installation completes, redistributable packages are created under C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\Redistributables.
  • On the PC where data will be collected, run the redistributable package that matches its architecture.
    (The redistributable package installs silently.)

 

(Optional) Forced boot optimization (not required if the OS boot drive is an SSD)

  1. Log on as a user who is a member of Administrators and open an elevated command prompt. (If you log on as a user who is only in Users and then launch the elevated command prompt as a different Administrators user, the process fails after the post-reboot logon.)
  2. In the directory where the Windows Performance Toolkit is installed, run the following command:
    Xbootmgr.exe -trace rebootCycle -prepSystem

Several reboots are performed, and defragmentation runs after the second reboot, so this takes about one to two hours.
(After each reboot, log on as the same Administrators user who started the process.)
If this has not been done beforehand on an HDD system, optimization progresses while you capture data, so boot times tend to get shorter even though nothing has been changed.

Collecting the data

  1. Log on as a user who is a member of Administrators.
    (If you log on as a user who is only in Users and then start WPRUI as a different Administrators user, it fails after the post-reboot logon and the data cannot be collected correctly.)
  2. From the [Start] menu, launch [Windows Performance Recorder], or run WPRUI.EXE.
  3. Set the performance scenario to "Boot", check any additional analysis items you need (usually everything up to Registry I/O activity is sufficient), and click [Start].
    The number of iterations defaults to 3, which reboots and collects automatically 3 times; for a test run, set it to 1 as shown in the figure below.
  4. The machine reboots and log collection starts.
    (After the reboot, log on as the same Administrators user who started the process.)

Checking the data

When you double-click the captured .etl file on the PC used for data analysis, Windows Performance Analyzer opens and you can begin the analysis. If a dialog like the following is displayed at this point, some information has been lost.

You can continue the analysis in this state, but because information is missing, some areas cannot be analyzed. Typical causes vary, such as the HDD coming under heavy load or the influence of third-party drivers.
If this happens, reduce the amount of data collected and capture again as follows.

Reducing the data volume

Stack trace information will be lost, so detailed analysis is no longer possible, but you can still identify trends.
Change [Detail level] to Light.
If the message still appears, reduce the collected data further by checking only [First level triage] under [Select additional profiles for performance recording].

Comparing the data

By comparing the data against the baseline as shown below, you can determine where the time is being spent.

 

Detailed data analysis

If detailed data analysis is required, we offer it as a paid service, so please send the collected data (always 3 runs each of the baseline and of the case you consider slow) as a ZIP archive to your Microsoft TAM.

If you would like to learn the analysis techniques yourself, video explanations are available separately in the Workshop Library On Demand (WLOD); if you already have a WLOD contract, you can view them from the link.

Microsoft Ends Support for Exchange Server 2007: Moving Enterprise Email Servers to the Cloud Is an Inevitable Trend


Microsoft recently announced that all support for its enterprise email server, Exchange Server 2007, ended on April 11, and urged enterprise customers to upgrade to Exchange Server 2016 as soon as possible, or move directly to the cloud with Exchange Online or a more fully featured Office 365 subscription.

Ten years have passed since Microsoft released Exchange Server 2007. Over those years Exchange Server has continually evolved and improved; Exchange Server 2016 in particular runs more efficiently, processes mail faster, and is considerably more secure. Microsoft has also brought the Exchange service to the cloud with Exchange Online and the more fully integrated Office 365 subscription, making employee data management easier and much more secure. Customers who are not ready, or have other reasons not to move their servers to the cloud, can choose a hybrid model: keep on-premises servers while enjoying the benefits of the cloud, reducing build-out costs and increasing the flexibility of the management architecture.

But for business owners: why upgrade to the latest version at all? What advantages does it have, and how does it differ from older versions? Is Exchange Online right for every company? Why go to the cloud instead of running an on-premises server?

How do the old and new versions of Exchange differ?

First is processing speed, which is what employees feel most directly. Compared to Exchange Server 2007, the Exchange Server 2016 architecture is much simpler: the 2007 version had five server roles, making the architecture complex and operations difficult and time-consuming, while the 2016 version has only two roles, greatly simplifying the architecture and naturally speeding up the system. High availability has also improved: Exchange Server 2016 consumes about 40% less bandwidth than the 2007 version, thanks to the simplified architecture, and connections are faster as well.

Replication has also changed substantially: Exchange Server 2016 uses Database Availability Groups (the DAG architecture), and the replication used by the restore mechanism improves on the 2007 version by about 30%.

Beyond the on-premises version, the cloud version is even more capable

Think of Exchange Server as a bakery's kitchen and Outlook as the storefront: only the bakers go into the kitchen, and the shop staff handle sales. By the same logic, only your company's administrators touch Exchange Server; ordinary employees use Outlook to send and receive mail without knowing the logic running behind it. So for employees, whether Outlook is easy, convenient, and intuitive to use matters a great deal.

Beyond on-premises Exchange Server, Microsoft also offers Exchange Online and the Office 365 subscription. Since its launch, Office 365 has integrated all of Microsoft's services: the familiar Office applications (Word, PowerPoint, Excel, and so on), SharePoint Online, Skype for Business, Microsoft Teams, and the Exchange service discussed here. Subscribing to Office 365 gives users the full set of collaboration features described above; compared with buying the Exchange capability alone, the cost difference is small, but the full Office 365 suite brings business users far more convenience.

▲ Office 365's greatest strength is bundling all of Microsoft's applications together: a subscription grants access to all of them, and they run on many types of devices.

For example, Skype for Business is integrated into Outlook, so employees can see colleagues' presence at any time: a green light when they are free, a slide icon when they may be presenting in a meeting, and other status signals such as away from desk, making coordination faster and easier. Remarkably, Outlook and Skype for Business integrate so well that you can turn an email into a chat topic and start a new group conversation with colleagues, with the chat named after the email subject, so even if a colleague pings you unexpectedly, you know at a glance what it is about.

▲ If a colleague shows as online, employees can immediately ping them and use Skype for Business to sort out work items quickly.

The cloud version's benefits go beyond Office 365's strong integration. Because the servers run in Microsoft's cloud, businesses do not need to host their own; that alone saves considerable staffing and other costs, and later expansion or downsizing is as easy as adding or removing seats. Moreover, with an on-premises server the business must handle security protection itself, while in the cloud version Microsoft has prepared all the protections, both basic and advanced, which we introduce next.

How secure is Exchange?

If you are wondering why Exchange Server 2007 users must upgrade to the latest version, or even move to the cloud, and what the benefit is, the core reason is security. As with any system architecture, as time passes and new versions appear, old versions have very little resistance to emerging viruses and ransomware. Since Microsoft will no longer provide any support or updates for Exchange Server 2007, none of the security protections in the old version will be updated either, leaving the door wide open to attack at any time.

Exchange Online Protection provides the baseline protection

Today's Exchange Online includes the Exchange Online Protection (EOP) mechanism, which identifies whether inbound mail is suspicious and filters it for users, a service exclusive to Exchange Online subscribers.

▲ EOP helps businesses filter suspicious mail and spam and provides virus protection. Combined with network and client firewalls, it can block most mail attacks and suspicious messages.

Security improves further once Exchange moves to the cloud. Because many businesses use Microsoft's Exchange Online, the sheer volume of mail processed every day yields more virus samples; with more samples, the signature database is more complete and better protects users' mail. If an attachment or link in a message matches anything in the signature database, it is blocked outright so the user is never infected.

To prevent leaks of sensitive company data, both Exchange Server 2016 and the Online version include a Data Loss Prevention (DLP) mechanism, with more built-in template types such as Taiwan national ID and passport numbers. If an employee is about to send ID-card data, the message displays a mail tip warning that sensitive data is being sent.
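For administrators curious what this looks like in practice, DLP policies can be created from the built-in templates in the Exchange Management Shell or Exchange Online PowerShell. A minimal sketch; the exact template display name varies and is an assumption here:

# List the built-in DLP templates, then create a policy from one of them.
Get-DlpPolicyTemplate | Select-Object Name
New-DlpPolicy -Name "Taiwan PII" -Template "Taiwan - Personally Identifiable Information (PII) Data"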

Advanced Threat Protection leaves viruses almost no gap to exploit

▲ Basic EOP protection screens and filters suspicious mail at the first funnel in the diagram. With ATP added, mail also enters the sandbox on the left, where virtual machines test what happens when the message is opened; unsafe messages are deleted outright or handled as the administrator chooses, while safe messages are delivered normally. (Image: scholarbuys)

 

In addition, Exchange offers Advanced Threat Protection (ATP), available standalone or included in Office 365 Enterprise E5, which adds machine learning on top of the existing EOP, plus a sandbox mechanism. All mail must pass through the sandbox, where three virtual machines open each message and simulate various post-open usage scenarios; if an attachment or link in the message proves malicious, the message is blocked, and even if a user clicks an unsafe link, they are redirected to an Office 365 safety site that tells them the link is unsafe. The entire ATP process runs in the cloud and consumes no internal company resources. It is more secure than the existing EOP, and so efficient that users barely notice any difference in speed.

Is upgrading Exchange a hassle?

At this point some readers may already be ready to give up on Exchange Server 2007, but how painful is the upgrade to the latest version? Could mail or data be lost along the way? Admittedly, moving from ten-year-old Exchange Server 2007 to the latest 2016 version or straight to the cloud inevitably involves some elaborate steps, but as the saying goes, better a short pain than a long one: go through the trouble once and you can enjoy a far better experience afterwards, and future upgrades will never be this painful again.

Also, today's Exchange Server 2016 can connect quickly to cloud-based Exchange Online via the Office 365 Hybrid Configuration Wizard, forming a structure where on-premises and cloud data coexist, commonly called hybrid mode. This process is far more convenient than traditional migration and much faster to deploy. In such a hybrid environment, users retain the flexibility to mix on-premises and Office 365 mailboxes. With Exchange Server 2007, by contrast, integrating with Office 365 is all but impossible: even the hybrid cloud option is out of reach, let alone a full move to the cloud.

What happens if you don't upgrade Exchange Server 2007?

Some businesses may still decide to hold off for various reasons, but be warned: as noted above, Microsoft has ended support for Exchange Server 2007, so security will suffer greatly. Beyond that, as hardware and systems evolve, ten-year-old software will increasingly face compatibility problems, which in turn brings even greater inconvenience to the business and its system administrators.

Who should use Exchange Online and Office 365?

Everyone can imagine how much staffing, equipment, and money an on-premises server consumes, and how hard it is to maintain. Going to the cloud can save a business enormous staffing and financial costs. Exchange Online and Office 365 are therefore especially suited to small and midsize businesses, which have limited funds and cannot dedicate a department to running a mail server. And if an SMB later expands, the cloud version can add seats at any time without storage-capacity worries, unconstrained by location or budget, making it easy to add or remove staff whenever needed.

▲ Microsoft offers several Exchange Online plans. Plan 2 can be seen as an upgrade of Plan 1, adding the DLP protection mechanism and other features, with no storage limit on archiving. See the Microsoft website for a full plan comparison.

▲ With the Office 365 Business family of plans, all but the entry-level plan include the fully installed Office applications, which run not only on PC and Mac but also as mobile versions. In addition, Office 365 Enterprise E5 adds ATP (Advanced Threat Protection), which counters phishing mail, APT social-engineering attacks, zero-day attacks, and other emerging threats, a great help for protecting company data. See the Microsoft website for a full plan comparison.

 

Overall, Exchange Server 2016 and Exchange Online save businesses a great deal of effort and solve many headaches. First, faster performance improves internal and external communication and reduces wasted staff time. Second, the seamless integration with Office 365 makes Outlook easier, more convenient, and more efficient to use, and unlike the earlier Exchange Server 2007, Office 365 updates automatically as each new version ships, giving business users an ever-better experience. Finally, on security, Exchange Online truly impresses: beyond the built-in protections, the E5 plan offers the Advanced Threat Protection mechanism, greatly reducing the chance of mail-borne virus or hacker intrusion, and businesses can confidently let the cloud handle the whole process with no impact on local bandwidth.

In an era when all operations are gradually moving to the cloud, we believe cloud-based company operations, including the heavily used email system, are an unavoidable trend. If a company wants easier management of employee email, less staffing overhead, and lower costs, moving Exchange to the cloud is the best answer. Subscribing to Office 365 in particular adds more Office applications, saves the cost of buying related software separately, and always includes the latest support. For small companies and large enterprises alike, going straight to the cloud is the trend of the future.

 

Microsoft Inspire UK Regional Session: Partner Growth


The fourth and final day of Microsoft Inspire featured our UK regional session. Hosted by Glenn Woolaghan, Partner Development Lead, and Laura Bouchard, Partner Sales Director, the session focussed on partner growth, specifically around the market opportunity, Microsoft and partner investments, and partner success. The theme of the session was the Partner Growth game, featuring a live illustrator capturing the key points and takeaways for you to take back digitally to your team.

Glenn and Laura began by thanking you, our partners, for the passion and energy you have demonstrated throughout the event, and indeed in driving the partner relationship day after day. Reflecting on the last financial year, Laura and Glenn pulled out the cloud workshops and the UK Partner Summit as two highlights for them both. With over 100 partners running more than 300 workshops, targeting 2400 unique customers (46% of which have transitioned into a sales lead or opportunity for our partners), it is clear to see why the workshops were a particular highlight of us working together.

 

"Your success is our success", Glenn Woolaghan

 

Market Opportunity

 

Clare Barclay, Chief Operating Officer for Microsoft UK, was next on stage, looking at the market opportunity. Despite all the changes we have heard about this week - both internally, at an industry level, and indeed globally, our partners are continuing to innovate. The UK is definitely a market full of opportunity and optimism, as shown in the results we have delivered collectively.

 

In the commercial sector our UK business grew 21% in the last year, with 73% powered by cloud growth. We have seen massive growth in O365 and triple digit growth in Azure. These results would not be possible without the support of our partners, driving usage, consumption and adoption within our customers, and providing such an exciting prospect for the opportunities ahead.

 

In line with the internal Microsoft changes, Clare then introduced Joe Macri as the new Partner Lead for Microsoft UK.

Looking at the year ahead, Joe commented that the innovation around the intelligent cloud, the intelligent edge, around all our products that Satya laid out on Monday, provides an incredible opportunity to build more capacity and innovation. We talk a lot about go-to-market, but with you, our partners, we are going to make the market within the UK and we are really investing in some key growth areas to enable this:

Partner Investments

 

Next on the agenda was partner investments, with Mark Smith, Azure Lead UK, and Andy Pratt, President, The Marsden Group, joining us on stage. The Marsden Group is a great example of a partner that has really invested in Azure and in building out their IP, as well as in their culture, to fully embrace the market opportunity. Andy discussed the importance of building a team culture that is agile and able to deliver a rapid proof of concept, then filling in any gaps by partnering with SIs and support companies.

 

The session then moved on to a partner panel, led by Laura and featuring the following partner representatives:

 

 

Keith, John and Guy all echoed the significance of aligning by industry and, as a partner, the importance of building out a managed service offering. Keith's tips for other partners were to invest in solutions that sit on top of your own services, to align sales and marketing teams, and to invest in building out expertise and skills in your organisation.

 

Microsoft Investments

 

As we moved onto Microsoft investments, the session turned to discuss building out digital talent. Hugh Millward, Director, Corporate External & Legal Affairs, and Nicola Young UK Skills Lead, took to the stage to run through the existing skills shortage and how we can address it to reach our cloud potential.

 

With Azure having a new release every 36 hours, the question was asked: how can we continually learn, as and when we need it?

 

The answer is to move to a model of Learning as a Service, and Microsoft is investing in this area to enable the community to do this. With the launch of our skills initiative and our Massive Open Online Courses, we cover a vast array of free and accessible Azure training. By offering our professional courses, we have taken this one step further to really help partners take this content and attach it to their cloud offering to add real value to their customers. We then heard from Microsoft partner Fast Lane, who have developed their own LaaS solution to deliver the skills that the market needs.

 

Our Apprentice Programme also plays a huge part in building out new skills and talent, and the recently launched Partner Pledge has had over 50 partners signing up during this week alone, pledging over 500 apprentice places.

 

Mark Johnston and Scott Allen concluded this section, outlining how we will create demand for you, and with you. They ran through the messaging priorities for the year ahead, as we continue to focus on Digital Transformation and security, running both campaigns across the four workloads we have talked about this week.

 

How can partners stay up to date with all of this?

 

We are creating a far more simplified structure within our marketing department, and aim to keep our marketing:

 

1) Outcome focused
2) Relevant to all partners
3) Scalable

 

As a partner, you can stay updated through our social channels, and leverage the following resources to build out your strategy:

 

1) Concierge
2) Cloud workshops
3) Co-marketing.

 

Glenn then rounded up the session with the following resources for you to leverage:

 

 

 

 

Partner Success

 

The session ended by congratulating our finalists and winners of the Partner of the Year awards, with CGI, the Country Partner of the Year, onstage with Clare. Catch up on the video to find out more about their well-deserved win.

As mentioned above, this session along with the three daily keynotes, were captured by a live illustrator, and we have created a digital resource showing the final illustrations and  outlining all of the resources and tools mentioned in today's session. Visit aka.ms/mpnpriorities to take a look. 

 

One-Liner: PowerShell Pi Mnemonic


I love this. I came across it when looking into how to calculate Pi with PowerShell.

 


"$('How I wish I could calculate Pi better'.split(' ') | % {$_.length})"

 

 

Each word's length supplies one digit of Pi, so the string above prints 3 1 4 1 5 9 2 6. I've tweaked it ever so slightly to add the decimal point (note that $a accumulates across runs in the same session, so clear it before re-running).

 


$('How I wish I could calculate Pi better' -Split " " | % {[string]$a += $_.length}; $a.Insert(1,"."))

 

 

 
