
Our journey to peer content distribution


The lead up

Here within Microsoft we pride ourselves on being the “first and best” consumer of ConfigMgr features.  With that comes early learning, and a little bit of pain, as we try features early in the development cycle, sometimes with code that is not fully tested or not yet implemented as fully as envisioned.  With the advent of the ConfigMgr (a.k.a. SCCM) peer cache functionality, we saw an opportunity to get in early on the rollout and help some of our other projects along (such as a move to Azure and a distribution point reduction for cost savings).  Sarat Chandra was the man tasked with bringing peer content sharing from the whiteboard to the production environment, and while he had to deal with a few bumps in the road, he got us to a nice solid implementation.  That early start also led to some fun tests as we worked with the development team to use and improve the feature into what is available to you today.  One memorable lesson from those early days came when a workstation (the workstation of our boss) served out over 3 TB of data to nearly 200,000 other machines, and in doing so became useless for an entire day due to CPU and NIC consumption.  We were also reminded that laptops move: a laptop that starts the day in one office, serving content to people in that office, and later moves to a Starbucks down the road does not make for happy users or happy network admins.

Results today

              Much refinement has gone into both the product and our implementation of the feature to bring us to a much better place today.  We have it rolled out across the company, and improved reporting now shows that only 10-20% of our content requests are served from our distribution points.  We achieved that through a combination of the ConfigMgr peer cache functionality and the built-in OS BranchCache functionality.  We have also significantly reduced our distribution point footprint in branch offices, with a goal of removing them all completely in the next year.

              Getting this success required finding the right targeting for creating peer cache source servers.  Today we have some general targeting that works well for most locations, with a few rule exceptions to handle remote offices where the general rules did not yield a good set of peer source servers.  The combination that gave us the best success for defining a peer source was built as a collection, which we then target with the appropriate client agent settings (a sketch of a matching collection query follows the list).  That combination is:

  • Not a high-level executive (we don’t want to risk any impact to our CEO, for example)

  • Desktops only (they don’t move much and provide reliability)

  • Hardwired connection (more dependable throughput than wireless)

  • Not a VM
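
As a rough sketch of how such a collection might be defined, here is one way to express it with the ConfigMgr PowerShell cmdlets. The collection name, the limiting collection, and the exact WQL (inventory classes, chassis-type values, and the IsVirtualMachine attribute) are illustrative assumptions, not our production rules, and the sketch assumes the ConfigurationManager module is loaded and connected to the site drive:

# Sketch only: a "Peer Cache Sources" collection limited to physical desktops
# Chassis type values 3,4,5,6,7,15 are the common desktop/tower values; adjust for your inventory
New-CMDeviceCollection -Name "Peer Cache Sources" -LimitingCollectionName "All Desktop and Server Clients"
Add-CMDeviceCollectionQueryMembershipRule -CollectionName "Peer Cache Sources" -RuleName "Physical desktops" -QueryExpression @"
select SMS_R_System.ResourceId from SMS_R_System
inner join SMS_G_System_SYSTEM_ENCLOSURE on SMS_G_System_SYSTEM_ENCLOSURE.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_SYSTEM_ENCLOSURE.ChassisTypes in ("3","4","5","6","7","15")
and SMS_R_System.IsVirtualMachine = 0
"@

The executive exclusion and the wired-only requirement would be layered on as additional include/exclude rules, and the peer cache client settings are then deployed to the resulting collection.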

In our corporate headquarters we had many more machines to choose from for peer sourcing, so we further improved content deliverability, and reduced our selection set of peer source servers, by also targeting only machines with 20 GB of free disk space.  This was an optimization to make sure we didn’t have a lot of cache churn on peer source servers.  Other locations had fewer potential targets, so we didn’t apply this limitation there.  Here is what the last 24 hours looked like for us:

 

 

We also built custom reports to measure the efficiency of each peer content source, using the rejection data that sources send when they hit the throttling limits set in client agent settings, and we raise an alert from that data. This helps us track and remove peer sources that are not efficient and helps minimize content delivery delays. Below is a sample report:

Our Future:

              All this said, the ConfigMgr and OS products continue to improve, and we are looking to integrate those improvements as well.  We are about to start a pilot on the new partial download functionality that is available in the 1806 release.  We also have been really excited by what we have seen and heard about LEDbat implementation possibilities.  This new peer content process is a small change for ConfigMgr infrastructures but can be a big change in design and resiliency for your content distribution.


Ignite 2018 – Windows Server 2019 Security and Identity Session Recordings


Continuing with another post of Windows Server 2019 sessions from Ignite, this one includes some of the security and identity sessions. There will be coverage of additional hybrid Active Directory sessions in an upcoming post, but I've included the AD FS ones here alongside the security recordings.

Elevating your security posture with Windows Server 2019


Windows Server 2016 started the journey of helping customers elevate their security posture starting with the operating system. Windows Server 2019 brings many enhancements to these features as well as a whole new set of capabilities. This session provides a rich overview of many of the security capabilities that are built into Windows Server, with a specific focus on what’s new in Windows Server 2019.  The National Institute of Standards and Technology (NIST) will also join us to discuss Microsoft’s participation in their trusted cloud project, which outlines security best practices for hybrid clouds--a solution built on Hyper-V Shielded VMs. Additionally, the SQL Server team will demonstrate how they leverage Virtualization-based Security and health attestation in Windows Server 2019 to protect data at rest and at runtime in their imminent release of SQL Always Encrypted with Enclaves.

What's new in Active Directory Federation Services (AD FS) in Windows Server 2019

Active Directory Federation Services (AD FS) continues to be the #1 federation provider for logging in to Office 365 and has grown to power logins for over 77M users globally! AD FS is also actively used to build modern applications to power the next generation of line-of-business applications that cater to the digital transformation of modern workplaces. Learn about the exciting new and upcoming capabilities in Windows Server 2019 to securely and seamlessly sign in users from anywhere on a variety of devices. We primarily focus on securing extranet access and enabling logins without passwords, and discuss additional security features to protect password-based logins for extranet access. We focus on new capabilities introduced to support modern applications built using OpenID Connect and OAuth. We also discuss advances made to enable smooth sign-in experiences for end users.

There are no compensating controls for an insecure Active Directory


Today everything needs to be secure, but you need to start with Active Directory. Because if AD isn’t secure – nothing else in your organization is regardless of operating system, products, or procedures. That’s a strong statement, but one that Randy Franklin Smith can back up with facts. In this fast-paced session Randy spotlights the multitudinous ways that virtually any component or information on your network can be compromised if the attacker first gains unauthorized access to AD. The good news is AD was designed well and has stood the test of time. AD security is a matter of design, comprehensive management, and monitoring and is the basis for the list of fundamentals for securing AD shared in this session.

Secure access to Office 365/Azure Active Directory with new features in AD FS in Windows Server 2019

Active Directory Federation Services (AD FS) continues to be the #1 federation provider to login to Office 365 and has grown to power logins for over 77M users globally! In this session, learn about the exciting new and upcoming capabilities in Windows Server 2019 to securely and seamlessly sign in users from anywhere on a variety of devices. We primarily focus on securing extranet access and enabling logins without passwords, and discuss additional security features to protect password-based logins for extranet access. We focus on the new Azure AD Password Protection feature to ensure strong passwords for all users. We also discuss advances made to enable smooth sign-in experiences for end users.

 

 

ITIL and the DoD RMF – Part 2 of 3 – Security Controls


In Part 1, a basic overview of the United States Department of Defense (DoD) Risk Management Framework (RMF) may be found.  Now we turn to the “so what” - this entry examines how process consultants may apply their knowledge and skills to assist organizations’ efforts to realize the desired outcome of the RMF.  It is important to note that the RMF and the underlying security controls from the applicable NIST publications may be used by anyone, not just DoD!  Indeed, NIST SP 800-37r1 states, in section 1.2, “State, local, and tribal governments, as well as private sector organizations are encouraged to consider using these guidelines, as appropriate.”

The heart of the interface between ITIL and the RMF is the security controls.  Many of them are process-oriented!  Recall from Part 1 that these controls must be selected, implemented, and assessed.  Process consultants may be of great use in the implementation and assessment of controls.

NIST SP 800-53r1, Appendix F, contains details about the various security controls that may be applied to systems and, most importantly to us, it provides requirements and recommendations for these controls.  There are many controls that may be influenced by IT processes.  Account management and privileged access are two; contingency planning is another, and there are more.  The one that is most obvious to me is Configuration Management, which has an entire family of controls and requirements under it, so let’s look at that.

If you were to open the NIST publication just referenced and find your way to Appendix F, you’d be able to find, somewhere in the neighborhood of page F-64, an overview of the Configuration Management family of controls.  All the controls may be implemented through a Configuration Management Plan (CMP), which is a common product of process consultants.  The task is to ensure that the CMP meets the requirements of this control family.

When assessments find artifacts and evidence that a control is implemented correctly, and that the result is an enhanced security posture, a higher degree of assurance is attained.  Enter the process consultant – we have all seen plans and process documentation wherein it is stated that things will be done a certain way, only to find that there is little to no evidence (in the form of process artifacts) that this is the case in practice.  A process consultant is particularly well-suited to review a CMP and ensure that it meets the requirements for this family of controls and, if they are diligent, to provide assurance that this is so.

CM-3 is of particular note – Change Management.  A process consultant may evaluate the organization’s change management plan to ensure that it

a) Categorizes changes and clearly defines the categories

b) Provides for an information security specialist to be on the change review board, and that this role possesses either change decision recommendation rights or voting rights

c) Requires change decisions to be documented

d) Requires that change records not be closed until verification of change implementation is complete

e) Provides for the archival of change decisions and implementation verifications

f) Provides for audit and review of change-related activities

g) Defines change review board policies and procedures

All of this is merely to say that the plan meets the basic requirements of the control.  The text goes on to provide even more detail as to enhancements that may be made to the control.  After a review of the CMP, the act of providing assurance requires that proof be found – plans are just plans.  Therefore, the consultant should

a) Review change records to ensure that changes are being categorized and that the system in question is distinguishable in change records from other systems (“I’d like to see all the change records in the last year for [system].”)

b) Attend several change review meetings and verify that an information security specialist is in attendance and that they either make recommendations or vote.

c) Review change management artifacts to ensure that change decisions have been captured.

d) Review change records to see if there is evidence that they are closed only upon verification that the change is implemented.

e) Ensure that the artifacts of change decisions, and change records in general, are archived in accordance with policy.

f) Locate evidence of change implementation and ensure that it records who made the change and is tied to the change record.  Also, incident records should be attributable to a change when applicable.

g) Attend several change review meetings and ensure that they are conducted in accordance with policy/plan.

There are scores of controls defined in the NIST publication, many of which are either directly or indirectly influenced by “traditional” IT processes such as asset management and change/configuration management, as well as the more obviously security-related ITIL processes of Security Management, IT Service Continuity Management, and Access Management.

We have reviewed in this entry an example of how a process consultant may assist an organization in the implementation of the security controls required by the RMF.  In Part 3 I will provide an example from my work in which a customer complained of a control not being met and how I approached the matter.

What’s new and what’s coming w/ SharePoint & OneDrive Security, Compliance, & Administration – October 2018


What’s new and what’s coming with SharePoint & OneDrive Security, Compliance, and Administration – October 2018 Edition

In today’s complex and regulated environment, businesses need to focus on building more secure solutions that deliver value to their customers, partners, and shareholders—both in the cloud and on-premises.

Microsoft has been building enterprise software for decades and running some of the largest online services in the world. We draw from this experience to keep making SharePoint and OneDrive more secure for users, by implementing and continuously improving security-aware software development, operational management, and threat-mitigation practices that are essential to the strong protection of your services and data.

SharePoint and OneDrive are uniquely positioned to help you address these evolving security challenges. To begin with, Microsoft has continued to evolve with new standards and regulations. This has been a guiding principle as we think about security for SharePoint and OneDrive. Right alongside that principle is this one: There is no security without usability. If security gets in the way of productivity, users will find a different, less secure way to do their work.

At Microsoft Ignite 2018 we announced many of the new capabilities that are available now and coming soon to Office 365.

NOTE This is the first of regular monthly updates for what’s new and what’s coming with security, compliance, and administration in SharePoint and OneDrive.

Unified Labels

Unified labels in Microsoft 365 provide a more integrated and consistent approach to creating labels and to configuring and applying policies to protect and govern information across devices, applications, cloud services, and on-premises locations. Unified labels provide a single location to create and configure data sensitivity labels for both Azure Information Protection and Office 365, so you can set up protection and retention labels and policies in the same place.

Unified labels in Microsoft 365 are available now.

SharePoint site classification labels

Across your organization, you probably have different types of content, each subject to different security requirements to comply with industry regulations and internal policies.

Using Microsoft Information Protection labels, you can now apply consistent security and access policies to SharePoint sites based on the sensitivity of the site. You can create sensitivity labels and associate them with policies in the new Microsoft 365 Security and Compliance Center. You can then apply these labels to files, emails, groups, sites, and Teams to automatically enforce consistent policies across your content.

SharePoint site classification labels will begin rolling out to Targeted Release in December 2018.

Automatic application of retention labels

Data is your company’s most important asset. With the automatic application of retention labels, you can ensure your most important assets stay compliant with your corporate or regulatory requirements.  These retention labels can be created by importing the content types that you already use in SharePoint, helping streamline the application of retention policies across all your content in SharePoint.

Content type to label support will begin rolling out in November 2018.

Label analytics

Information is growing at exponential rates and we’re making it easier for you to stay informed on how retention and sensitivity labels are being used to classify, retain, and protect your organization’s content in the cloud.

Using label analytics, you can now get insights into how content is being labeled, including which labels are used most and which emails and files they’re being applied to. You can also explore user activity to identify who’s been applying labels, investigate unusual trends, and more.

Label analytics will begin rolling out in Q4 2018.

File plans

Office 365 already provides data governance labels to establish rules for records management and retention.  Later this year we’ll be augmenting those with hierarchical file plans, allowing you to manage a range of retention labels with identifiers, departments, categories, statutory references and more.  File plans can be exported from Office 365 for easy editing in Excel, and then reimported to update label rules.

File plans will begin to be available in Q4 2018.

Files Restore for SharePoint and Microsoft Teams

Data loss is non-negotiable, so today we announced Files Restore for SharePoint and Microsoft Teams.

Files Restore is now available for SharePoint document libraries, protecting your shared files in SharePoint, Teams, Outlook groups, and Yammer groups connected to Office 365 groups, and it uses the same recovery capabilities that protect your personal files in OneDrive for Business.

Files Restore is a complete self-service recovery solution that allows site administrators to restore document libraries from any point in time during the last 30 days and to rewind changes, using activity data to find the exact moment to revert to.

Files Restore for SharePoint and Microsoft Teams will begin rolling out to Targeted Release in December 2018.

Multi-geo capabilities for SharePoint

Multi-geo capabilities in SharePoint support your global data residency needs by storing SharePoint data in more than one selected Office 365 datacenter region or country. Microsoft commits to provide in-geo data residency, business continuity, and disaster recovery for your core customer data at rest.

With multi-geo capabilities for SharePoint you can have a single Office 365 tenant that spans multiple geos and enables a unified communication and collaboration experience across your global organization. You can migrate various on-premises satellite data silos into a single Office 365 tenant and at the same time meet your data residency needs. Your users are now connected to the people and content that matter most, regardless of where they work.

For IT, you can use powerful Office 365 admin tools to easily create and manage satellite sites and, if needed, move user data between geos to meet your data residency business needs. Get reports on where each user’s data is stored and an audit trail of the activities of all users in your global enterprise. Tailor sharing, security, and compliance policies separately for each geo—all from a familiar admin experience.

To learn more about Multi-Geo Capabilities in Office 365 see https://products.office.com/en-us/business/multi-geo-capabilities.

Multi-Geo capabilities with SharePoint Online are available now.

External sharing integration with Azure AD B2B

Last year at Ignite we introduced a new external sharing experience where recipients could access the shared content in a secure way by entering a one-time passcode sent to their email address, without the need to create or remember passwords. This year, we're taking it a step further by integrating the one-time passcode sign-in experience with the Azure AD B2B platform. This enables external users to exist in your Azure AD directory as guests, which can be managed in the way you are already familiar with. This integration also brings the one-time passcode experience to sharing SharePoint sites and lists with external users.

SharePoint admin center updates

At Microsoft Ignite, in addition to our security and compliance news, we announced several exciting new features coming to the new SharePoint admin center.

Make the new admin center your default admin center…

The new SharePoint admin experience provides a completely revamped SharePoint admin center that draws heavily on our modern principles… an administrative console designed to help IT achieve more, so their users can achieve more. If you’ve enjoyed using the new SharePoint admin center up until today, you now have the option to make the new SharePoint admin center your default experience while still being able to go back to the classic admin center if you need to.

Improved management experience for group-connected sites

Office 365 Groups is a service that works with the Office 365 tools you already use, so you can collaborate with your teammates when writing documents, creating spreadsheets, working on project plans, scheduling meetings, or sending email. Now we’re making it easier to manage group-connected sites by allowing SharePoint administrators to manage ownership, change sharing settings, and delete and restore sites.

Simplified hub site creation and association

Sites and data grow as your organization grows. With SharePoint hub sites, you can bring flexible, dynamic building blocks to your organization’s intranet – connecting collaboration and communication.  Now in the SharePoint admin center, you can manage existing hub sites in addition to creating hub sites and associating existing sites with a hub site.  These capabilities also extend to multi-geo scenarios.

Quickly customize and control the site creation experience

Creating sites is one of the most common tasks an administrator performs in many SharePoint environments, and we’ve made it easier to customize and control how sites are created.

New site creation options allow you to create sites on behalf of users and configure common settings such as language, time zone, and storage limit; for classic and communication sites you can now also specify their managed path.

In addition to these site creation controls, you can now specify global settings that apply to all sites when they're created, such as the time zone and site creation path. And for organizations that want to control the site creation experience, you can enable or disable self-service site creation.

Improved site management experience

In response to your feedback, we’ve added more management controls across site management and storage, including a simplified view of your tenant-level storage usage and limit and the ability to switch to manual site storage management.

Additionally, in many cases you may want or need more than one or two administrators for a site collection.  In response to your feedback, we’ve now enabled the use of security groups as a site collection administrator in SharePoint Online.

Finally, we’re making it simpler to execute site actions by moving many of the common actions to the command bar rather than the site information panel.

Keep your information secure with improved access control and policies options

The freedom to work fluidly, independent of location, has become an expectation, as has the freedom to access email and documents from anywhere on any device—and that experience is expected to be seamless.  However, data loss is non-negotiable, and overexposure to information can have lasting legal and compliance implications.  IT needs to make sure that corporate data is secure while enabling users to stay productive in today’s mobile-first world, where the threat landscape is increasingly complex and sophisticated.

New updates to the SharePoint admin center include a consolidated view of access control policies to help safeguard your information.   On the new access control page, you can configure policies for unmanaged or non-compliant devices, configure the idle-session sign-out experience for users, as well as configure location policies to restrict or allow access to SharePoint Online from known IP ranges.

SharePoint admin center improvements will begin rolling out to Targeted Release in October 2018.

Learn more about how we secure your data with SharePoint and OneDrive in Office 365 and how customers are achieving success at https://aka.ms/SharePoint-Security.

 

 

About the warning that appears when taking a backup with Windows Server Backup


Hello.

This is Sasaki from the Windows platform support team.
This post explains an issue where a warning appears when you use Windows Server Backup to back up a volume that has quotas configured.

■ What is a quota setting?

On a computer shared by multiple users, a quota sets an upper limit on the amount of hard disk space each user is allowed to consume.
It is provided as an OS feature and is part of File Server Resource Manager (FSRM).

For details, see the following link:
File Server Resource Manager (FSRM) overview

■ Symptom

When you run a backup with Windows Server Backup against a volume that has quotas configured, and you either select specific folders or configure exclusions, the backup finishes with the warning "The backup completed but with warnings."

<Error recorded in the event log>
An error occurred during the write operation while backing up E:\System Volume Information\SRM\quota.md: Error [0x80070020] The process cannot access the file because it is being used by another process.

■ Environment

Windows Server 2012 and later

■ Conditions

This issue occurs only in the following two patterns.

<Pattern 1>
Using Windows Server Backup, you back up specific files or folders selected within a volume that has quotas configured.

<Pattern 2>
Using Windows Server Backup, you back up a volume that has quotas configured while excluding some files or folders within that volume from the backup.

* Even with quotas configured, this issue does not occur when you back up the entire server, or when you back up the whole volume without excluding any files within it.
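
For reference, pattern 2 corresponds to a backup like the following sketch using the Windows Server Backup PowerShell cmdlets (the volume and folder names are the same example values used in the verification section below, and D: is an assumed backup target):

# Sketch: one-time backup of volume E: with E:\test_folder2 excluded (pattern 2)
$policy = New-WBPolicy
Add-WBVolume -Policy $policy -Volume (Get-WBVolume -VolumePath "E:")
Add-WBFileSpec -Policy $policy -FileSpec (New-WBFileSpec -FileSpec "E:\test_folder2" -Exclude)
Add-WBBackupTarget -Policy $policy -Target (New-WBBackupTarget -VolumePath "D:")
Start-WBBackup -Policy $policy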

■ Conclusion

Although the warning is displayed at backup time, there is no problem with the backup itself.
The backup data is consistent, and it can be restored.
The quota configuration is not affected either, so please rest assured.

To prevent the warning from appearing, please avoid the backup methods listed above as the conditions under which it occurs.

■ Reference: verification

<Steps to reproduce the warning>

(1) Start a Windows Server Backup of a volume that has quotas configured, selecting specific folders or configuring exclusions.

* In this example, quotas are configured on volume E.
* The backup is taken with the test_folder2 folder on volume E excluded.
  The files and folders you select or exclude can be anything.

(2) When the backup completes, the Status column shows the warning "The backup completed but with warnings."

<Restore steps>

(1) Ignore the warning and restore the backup data.
When the restore completes, the following is displayed.

(2) Looking at the restored volume, you can confirm that the backup data has been recovered correctly.

We hope this blog post has been helpful.

About a roaming user profile issue on Windows 10 RS5 (1809)


Hello, this is Yazawa from the Windows support team.
An issue with roaming user profiles on Windows 10 RS5 (1809) has been reported, and this blog post describes it.


1. Symptom
On RS5 devices, if the roaming user profile path is configured using environment variables such as %USERNAME%, the roaming user profile cannot be used.

 

2. Cause
Environment variables such as %USERNAME% are not expanded to the actual user name (for example, testuser) and are instead treated as the literal string %USERNAME%.

 

3. Workaround
If the roaming user profile is configured through Group Policy, there is no workaround.
If it is configured directly in the properties of the user object in Active Directory, you can work around the issue by entering the actual user name and server name instead of environment variables.
If you have many users, you can set the property in bulk with a sample script such as the following.

 

for /f "delims=" %i in ('dsquery user "<DN of the OU containing the roaming-profile users>"') do dsmod user %i -profile <roaming profile path>

Example: changing the roaming profile path in bulk for users under the test OU in the test.local domain
for /f "delims=" %i in ('dsquery user "OU=test,DC=test,DC=local"') do dsmod user %i -profile \\FILESVR\profile\$username$
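
If you prefer the ActiveDirectory PowerShell module, an equivalent sketch (using the same illustrative OU and server/share names as above) would be:

# Sketch: write an explicit (non-variable) profile path for every user in the target OU
Import-Module ActiveDirectory
Get-ADUser -Filter * -SearchBase "OU=test,DC=test,DC=local" | ForEach-Object {
    Set-ADUser -Identity $_ -ProfilePath "\\FILESVR\profile\$($_.SamAccountName)"
}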

 

4. Fix
Microsoft recognizes this as a serious issue and is working on a fix with raised priority.
Once the release timing of the fix is determined, we will provide an update in this blog.

Notes on what happens when the self-signed certificate expires on an Exchange 2013 Mailbox server


This post describes a problem that occurs when the self-signed certificate expires on an Exchange 2013 Mailbox server (including servers where the Client Access and Mailbox roles coexist).
This is a recently discovered bug; however, because Exchange 2013 is already in the extended support phase, unfortunately no fix for it will be released.
Note that Exchange 2016 is not affected, because part of the internal behavior has been changed.

 

Symptom

The two probes that run on an Exchange 2013 server with the Mailbox role installed (OutlookRpcSelfTestProbe and OutlookRpcDeepTestProbe) begin failing with errors after the self-signed certificate expires.
Specifically, once the self-signed certificate bound to the IIS web site (Exchange Back End) on an Exchange 2013 Mailbox server expires, the two probes start to fail, and as a result the Managed Availability feature triggers the following on that server:

  • The application pool MSExchangeRpcProxyAppPool is recycled
  • The Microsoft Exchange RPC Client Access service is restarted

In addition, on early builds such as Exchange 2013 CU2-v2, a database failover may also occur.
On Exchange 2013 CU3 and later, however, a database failover is no longer triggered even if the OutlookRpcSelfTestProbe or OutlookRpcDeepTestProbe probes fail.

- Additional information
When this bug occurs, the ProbeResult event log entries contain the following exceptions.

<< OutlookRpcSelfTestProbe error >>

The ExecutionContext item on the [Details] tab records the following exception:
  • Exception = System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.

<< OutlookRpcDeepTestProbe error >>

The ExecutionContext item on the [Details] tab records the following exception:
  • Exception = Microsoft.Exchange.Rpc.ServerUnavailableException: Error 0x6ba (The RPC server is unavailable.) from ClientAsyncCallState.CheckCompletion: RpcAsyncCompleteCall

 

Cause

The self-signed certificate installed on an Exchange 2013 Mailbox server is designed to be trusted automatically by the Exchange servers within the organization.
However, the two probes that run on Exchange 2013 Mailbox servers (OutlookRpcSelfTestProbe and OutlookRpcDeepTestProbe) have a bug that prevents them from automatically trusting the self-signed certificate. Because of this bug, those probes start failing the moment the self-signed certificate expires.

Note that while the self-signed certificate created when Exchange 2013 is installed is registered in [Trusted Root Certification Authorities] under the [Computer account], simply renewing the self-signed certificate does not register the new certificate there automatically.
Therefore, even if you renew the self-signed certificate on an Exchange 2013 Mailbox server and bind it to the IIS web site (Exchange Back End), the probes will still fail if the certificate has not been registered in [Trusted Root Certification Authorities] under the [Computer account].

 

Resolution

In general, certificate renewal is only required on Client Access servers, which accept client requests.
However, to avoid this bug, we ask customers running Exchange 2013 to renew the self-signed certificate on their Mailbox servers using the steps below.
Note that this work must also be performed explicitly on servers where the Client Access and Mailbox roles coexist.

- Overview of the steps
[0] Check the self-signed certificate bound to the IIS web site (Exchange Back End)
[1] Issue a new self-signed certificate
[2] Register the newly issued self-signed certificate in [Trusted Root Certification Authorities] under the [Computer account]
[3] Bind the newly issued self-signed certificate to the IIS web site (Exchange Back End)
[4] (If needed) Change the self-signed certificate assigned to other services

 

Check the self-signed certificate bound to the IIS web site (Exchange Back End)
Perform the following steps on the Mailbox server whose certificate you are renewing.

1. Start Internet Information Services (IIS) Manager.
2. Right-click [<server name>] - [Sites] - [Exchange Back End] and click [Edit Bindings].
3. Select the entry whose Type is https and whose Port is 444, then click [Edit].
4. Check the certificate configured under [SSL certificate]. (If needed, click [View] to check the certificate details.)

 

Issue a new self-signed certificate
Run the following command from the Exchange Management Shell to issue a new self-signed certificate.

Get-ExchangeCertificate -Server "<MBX server name>" -Thumbprint "<thumbprint of the self-signed certificate being renewed>" | New-ExchangeCertificate -Service None

You can also find the thumbprint of the self-signed certificate being renewed by running the following command.

Get-ExchangeCertificate -Server "<MBX server name>" | fl

 

Register the newly issued self-signed certificate in [Trusted Root Certification Authorities] under the [Computer account]
Perform the following steps on the Mailbox server whose certificate you are renewing.

1. Start the Microsoft Management Console. (From [Run], type "mmc" and click [OK].)
2. From the [File] menu, select [Add/Remove Snap-in].
3. Add [Certificates] (Computer account) and click [OK].
4. In the left pane, select [Certificates] - [Personal] - [Certificates].
5. Right-click the newly issued self-signed certificate and select [Copy].
6. In the left pane, select [Certificates] - [Trusted Root Certification Authorities] - [Certificates].
7. Right-click an empty area in the center pane and select [Paste] to copy the self-signed certificate there.

* After the certificate has been registered in [Trusted Root Certification Authorities] under the [Computer account], Get-ExchangeCertificate shows the RootCAType parameter of that self-signed certificate as "Registry".
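
If you would rather script this registration step, a sketch using the .NET certificate store classes looks like the following (replace the thumbprint placeholder with the thumbprint of the newly issued certificate):

# Sketch: copy the new self-signed certificate from the Personal store to Trusted Root Certification Authorities
$thumbprint = "<thumbprint of the newly issued self-signed certificate>"
$cert = Get-Item "Cert:\LocalMachine\My\$thumbprint"
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store("Root", "LocalMachine")
$store.Open("ReadWrite")
$store.Add($cert)
$store.Close()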

 

Bind the newly issued self-signed certificate to the IIS web site (Exchange Back End)
Perform the following steps on the Mailbox server whose certificate you are renewing.

1. Start Internet Information Services (IIS) Manager.
2. Right-click [<server name>] - [Sites] - [Exchange Back End] and click [Edit Bindings].
3. Select the entry whose Type is https and whose Port is 444, then click [Edit].
4. Set [SSL certificate] to the newly issued self-signed certificate and click [OK].

 

(If needed) Change the self-signed certificate assigned to other services
On servers with only the Mailbox role installed, the following may also still be using the self-signed certificate:

  • The certificate bound to the IIS web site (Default Web Site)
  • The internal transport certificate of the Transport service

There is no particular problem if the self-signed certificates assigned to the above expire, but if you also want to switch them to the newly issued self-signed certificate, run the following command from the Exchange Management Shell.

Enable-ExchangeCertificate -Server "<MBX server name>" -Thumbprint "<thumbprint of the newly issued self-signed certificate>" -Services IIS,SMTP

The New Intelligence


Logbook entry 181017:

2,500 partners are expected at Microsoft's German Partner Conference (DPK) in Leipzig next week. This shows how strong the support for our ecosystem has become. Microsoft's partner network is now larger than those of AWS, Salesforce, and Google combined. In Germany alone there are 31,500 partner companies whose services and solutions make Microsoft's success possible in the first place.

And conversely, Microsoft's cloud solutions are what make the success of partners and customers possible. Growth on the Azure platform in particular is advancing rapidly. This is driven not only by the high availability of the cloud infrastructure, but also by the fact that more and more solutions are reachable for customers through the cloud – from IoT solutions to artificial intelligence. That is also why the DPK in Leipzig is held under the motto "The New Intelligence": the biggest growth is currently happening around intelligent solutions.

This is especially visible in Germany, where migration to the cloud is under way on a broad front. It has long since stopped being a question of "whether" and become one of "how" and "when". That is reflected in the business figures: many partners now generate more than half of their revenue from managed services. For every euro Microsoft earns, managed service providers earn twelve times as much. And 90 percent of Microsoft's total revenue is generated through partners. That alone shows how lucrative the ecosystem is for partners and customers.

One example of how interesting cloud-based solutions are for the business of customers and partners alike is the cloud database Cosmos DB, which is now also optimized for distributed data management in the cloud. For globally operating companies in particular, data replication and data updates are crucial. How else could orders placed simultaneously in Asia and in Europe be reserved, booked, and processed? While Microsoft Azure provides the infrastructure for such a challenge, Cosmos DB is predestined for distributed, massive data volumes. An ideal field of activity for partners who need to offer their large customers a globe-spanning service.

Distributed infrastructures in manufacturing can also be built with Azure IoT Edge, where, for performance reasons, data is stored not directly in the cloud but at its edge. Here too, highly complex managed services are in demand, and Microsoft and its partners provide the solutions for them.

In the field of artificial intelligence as well, the offerings for machine learning and conversational computing have now matured to the point that partners around the world are building their own solutions with them. And it is not only the available functionality that is growing – the know-how of partners and customers is growing massively too. On LinkedIn, we found that the number of professionals who have added AI expertise to their skills has grown by 190 percent since 2015. This shows how rapidly the Microsoft ecosystem is developing.

No German Microsoft partner should therefore miss the DPK in Leipzig and the chance to get the latest updates around "The New Intelligence". To do justice to the large number of announcements, we have extended the event to three days and, at the request of our partners, have also invited customers. After the global events Inspire and Ignite last quarter, the DPK is the ideal place to lay the groundwork for future growth with Microsoft. See you there!

 


Costing Error Detection report as AL extension


I have converted 'Report 60010 Costing Error Detection New [Version List=NAVDIAG17.10.06]' to a D365 Business Central extension (cloud). I used a fresh Business Central (Fall 2018) cloud tenant.
The Costing Error Detection report is an old tool that can be downloaded from the PartnerSource site. This report can help you find common costing data problems.

As a first step, I imported this report into Dynamics 365 Business Central on-premises and then converted the object to an AL object using the Txt2Al conversion tool.
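
For reference, the conversion can be run from a command prompt along these lines (the paths are examples, and the exact switches depend on your Txt2Al build):

txt2al --source=C:\CAL --target=C:\AL --rename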

I used a new ID (50130) for this experimental object.
Remember: for AL extensions you have the free range 50,000-99,999; see Object Ranges in Dynamics 365 Business Central.

You can download the .app file from the GitHub project: https://github.com/finn777/ALF_Costing_Error_Detection/blob/master/AL/ALF_Costing_Error_Detection/Alexey%20Finogenov_ALF_Costing_Error_Detection_1.0.0.0.app

Some screenshots of the install:
// Deploying a Tenant Customization

And if you see an empty report, then everything is OK.
To emulate errors, I launched my extension on a development Azure VM (container sandbox) and introduced some problems via the Object Designer.
// More about Sandbox Environment

As a result, the report catches my direct data manipulation:

 

(SCCM) Tip of the Day: System Center Updates Publisher adds support for new OSes


Today's tip...

SCUP (System Center Updates Publisher) enables independent software vendors or line-of-business application developers to manage custom updates.

Using SCUP, you can:

  • Import updates from external catalogs (non-Microsoft update catalogs).
  • Modify update definitions including applicability, and deployment metadata.
  • Export updates to external catalogs.
  • Publish updates to an update server.

An updated version of System Center Updates Publisher (SCUP) is now available and can be downloaded HERE.  This release of SCUP adds support for Windows 10 and Windows Server 2016.  For detailed information about supported operating systems and prerequisites, see the Install Updates Publisher topic.

The SCCM team also created a video tutorial for SCUP.  The video is part of a series focusing on software updates in Configuration Manager current branch; this session focuses specifically on System Center Updates Publisher (SCUP).  Steven Rachui covers understanding and configuring SCUP, working with it, and integrating and using it in a Configuration Manager environment.

To see additional SCCM videos in this series, you can view the Software Updates Playlist.

References:

[Customer story] Simply digitizing a service is not digitalization: the essence of digitalization as seen in Benesse's "public cloud first" approach, which accelerated the pace of its business [Updated 10/17]


The IT that supports corporate activity is now shifting to the "third platform": a transition from traditional client/server operations to a more composite information infrastructure that includes mobile devices users can carry with them and cloud services they can consume over the network.

Benesse Corporation (Benesse) is one company pursuing digital transformation built on this third platform. Starting in fiscal 2016, the company has been executing a plan to migrate 70% of the roughly 2,500-server IT infrastructure it operates in its data centers to Microsoft Azure, which it selected as its recommended public cloud. In parallel, it standardized PaaS-based architecture design across all business units and service platforms, greatly accelerating the pace of its business.

 

Read the full story here

Disabling Basic authentication in Exchange Online – Public Preview Now Available


Several months ago we added a feature to the Microsoft 365 Roadmap which generated a lot of interest. The feature was named Disable Basic Authentication in Exchange Online using Authentication Policies, and as the roadmap item stated, it provides the capability for an admin to define which protocols should allow Basic authentication.

Why was that so interesting? Well, as you probably know, Basic authentication in Exchange Online accepts a username and a password for client access requests, and blocking Basic authentication can help protect your Exchange Online organization from brute force or password spray attacks. Lately there has been an increase in the occurrence of these types of attacks, and so we are accelerating our release of this feature as it helps prevent them.

If your organization has no legacy email clients or doesn’t want to allow legacy email clients, you can use these new authentication policies in Exchange Online to disable Basic authentication requests. This forces all client access requests to use modern authentication, which will stop these attacks from impacting your organization.

We are still working on some aspects of this feature, and we’ll highlight those for you here, but in response to the increase of attacks we are seeing, we want to make authentication policies available to you now, and are therefore rolling this out worldwide immediately.

There is already an excellent article describing how this feature works, and we strongly suggest you read, understand, and follow it before enabling this feature.
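
As a minimal sketch of the kind of configuration involved (the policy name and user are examples; see the linked article for the full procedure), the Exchange Online PowerShell steps look roughly like this:

# Sketch: create a policy that blocks Basic authentication, make it the tenant default,
# and "tickle" a single user so the change applies without the up-to-24-hour wait
New-AuthenticationPolicy -Name "Block Basic Auth"
Set-OrganizationConfig -DefaultAuthenticationPolicy "Block Basic Auth"
Set-User -Identity katie@contoso.com -STSRefreshTokensValidFrom ([System.DateTime]::UtcNow)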

There are three important caveats to this feature:

  • There is a lack of telemetry for tenant admins allowing them to report on which users are using Basic Auth (and with which protocol) and once a block is enabled, whether such traffic was blocked. In other words, we can’t really tell you how well the block is working.
  • A policy change can take up to 24 hours to take effect, unless the admin calls a cmdlet (such as Set-User) to ‘tickle’ each user. (Note that ‘tickling’ is a technical term, first used here). So the block might not kick in right away, and you might have to take some action if you want it to happen faster.
  • If a user’s identity has not been replicated to Azure AD/Exchange Online, they will not be blocked and so any request received by Exchange Online will be routed to the authoritative Security Token Service (STS) where it is likely to fail. This same behavior also means that any authentication requests for unknown users in a tenant (such as might happen during a password spray attack) will also be forwarded to the authoritative STS for the domain.

We had been holding back on moving from private to public preview primarily due to the first two of these - a tenant admin could misconfigure something and not realize it until it’s too late, given the lack of reporting and the delayed effect of a policy change.

However, given the increasing frequency of these types of attacks we would rather give you access to the capability, knowing you will all carefully read the documentation before configuring. We’ll continue to work on improving the feature set, but you don’t need to wait for us.

We acknowledge that for large customers, tickling every user using Exchange Online PowerShell (which can be unreliable for long running scripts) is challenging, but again we feel the benefit outweighs the negatives at this stage.

It’s in all our interests to prevent these types of attacks from compromising our data and users, and we hope you find these tools useful and helpful. Use them wisely!

The Exchange Team

Cloud Platform Release Announcements for October 17, 2018


Azure App Service | Price reductions for App Service on Linux basic and premium tiers

We’ve recently reduced prices for Azure App Service on Linux for the basic and premium tiers.

Azure App Service allows you to quickly build, deploy, and scale enterprise-grade web, mobile, and API apps running on any platform. Meet rigorous performance, scalability, security, and compliance requirements while using a fully managed platform to perform infrastructure maintenance and build enterprise-grade applications.

See full details.

Azure Digital Twins | Now in preview

Azure Digital Twins is now available in preview. Digital Twins is a cloud, AI, and IoT platform that uniquely enables customers and partners to create a comprehensive digital model of the physical environment that includes people, places, and things, as well as the relationships and processes that bind them.

Create comprehensive digital models and spatially aware solutions that can be applied to any physical environment. Build secure and contextually aware solutions that optimize energy efficiency and space utilization, improve employee and occupant satisfaction, and better serve people’s needs. The platform significantly accelerates and simplifies the creation of digital twin solutions attuned to specific industry needs and is equipped with multi-tenancy and nested-tenancy capabilities that enable you to securely repeat your solutions to scale your business.

The release of Digital Twins in preview is another powerful example of how Microsoft continues to deliver on its commitment to simplify IoT so any customer, no matter where they’re starting from, can create trusted, connected solutions for digital transformation. By removing layers of complexity and accelerating the creation of innovative spatial intelligence solutions, Azure Digital Twins provides organizations with the foundation they need to create the next wave of innovation in IoT.

To learn more, read the full blog post and visit the Azure Digital Twins webpage.

Azure Cognitive Services available in new regions

Pricing | Azure Cognitive Services webpage

New regions are now available for Azure Cognitive Services. The Content Moderator, Computer Vision, Face, Text Analytics, Translator Text, and Language Understanding (LUIS) Services are now generally available in US Government regions.

See the pricing page for additional details.

Azure SQL Database | Dev/test pricing now available as part of the Azure Enterprise dev/test offer

Now generally available for vCore-based resource types in Azure SQL Database (excluding Managed Instance), dev/test pricing provides a cost-effective way to run your development and testing workloads on Azure SQL Database. With dev/test pricing for Azure SQL Database, you can save up to 55 percent versus list prices.

Get started today with a Visual Studio subscription and visit the Azure Dev/Test pricing page for more information.

Determine your savings with our pricing calculator.

Azure Database for MySQL | Read replica in preview

Azure Database for MySQL now supports continuous asynchronous replication of data from one Azure Database for MySQL server (master) to up to five Azure Database for MySQL servers (replicas) in the same region. This allows read-heavy workloads to scale beyond the capacity constraints of one Azure Database for MySQL server and be balanced across replica servers according to the users' preference. Replica servers are read-only except for writes replicated from data changes on the master. Stopping replication to a replica server causes it to become a standalone server that accepts reads and writes.

Learn more about Read replicas in Azure Database for MySQL.

Azure Media Services and Video Indexer | Azure Media Services v3 API now available

Azure Media Services v3 API is now generally available. This API allows developers and media companies to build media applications that include encoding, content protection with DRM, video indexing, live streaming, dynamic packaging, and content delivery at scale. The API now features a simplified development and management model, and a new live streaming entity. There are also new updates to Azure Media Player.

For more information about the capabilities, read the announcement.

Machine Reading at Scale – Transfer Learning for Large Text Corpuses


This post is authored by Anusua Trivedi, Senior Data Scientist at Microsoft.

This post builds on the MRC Blog where we discussed how machine reading comprehension (MRC) can help us “transfer learn” any text. In this post, we introduce the notion of and the need for machine reading at scale, and for transfer learning on large text corpuses.

Introduction

Machine reading for question answering has become an important testbed for evaluating how well computer systems understand human language. It is also proving to be a crucial technology for applications such as search engines and dialog systems. The research community has recently created a multitude of large-scale datasets over text sources including:

  • Wikipedia (WikiReading, SQuAD, WikiHop).
  • News and newsworthy articles (CNN/Daily Mail, NewsQA, RACE).
  • Fictional stories (MCTest, CBT, NarrativeQA).
  • General web sources (MS MARCO, TriviaQA, SearchQA).

These new datasets have, in turn, inspired an even wider array of new question answering systems.

In the MRC blog post, we trained and tested different MRC algorithms on these large datasets. We were able to successfully transfer learn smaller text excerpts using these pretrained MRC algorithms. However, when we tried creating a QA system for the Gutenberg book corpus (English only) using these pretrained MRC models, the algorithms failed. MRC usually works on text excerpts or documents but fails for larger text corpuses. This leads us to a newer concept – machine reading at scale (MRS). Building machines that can perform machine reading comprehension at scale would be of great interest to enterprises.

Machine Reading at Scale (MRS)

Instead of focusing on only smaller text excerpts, Danqi Chen et al. came up with a solution to a much bigger problem which is machine reading at scale. To accomplish the task of reading Wikipedia to answer open-domain questions, they combined a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs.

MRC is about answering a query about a given context paragraph. MRC algorithms typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an open-domain QA system.

In sharp contrast, methods that use information retrieval over documents must employ search as an integral part of the solution.

MRS strikes a balance between the two approaches. It is focused on simultaneously maintaining the challenge of machine comprehension, which requires the deep understanding of text, while keeping the realistic constraint of searching over a large open resource.

Why is MRS Important for Enterprises?

The adoption of enterprise chatbots has been rapidly increasing in recent times. To further advance these scenarios, research and industry have turned toward conversational AI approaches, especially in use cases such as banking, insurance, and telecommunications, where there are large corpuses of text logs involved.

One of the major challenges for conversational AI is to understand complex sentences of human speech in the same way humans do. The challenge becomes more complex when we need to do this over large volumes of text. MRS can address both these concerns where it can answer objective questions from a large corpus with high accuracy. Such approaches can be used in real-world applications like customer service.

In this post, we want to evaluate the MRS approach to solve automatic QA capability across different large corpuses.

Training MRS – DrQA Model

DrQA is a system for reading comprehension applied to open-domain question answering. DrQA is specifically targeted at the task of machine reading at scale. In this setting, we are searching for an answer to a question in a potentially very large corpus of unstructured documents (which may not be redundant). Thus, the system must combine the challenges of document retrieval (i.e. finding relevant documents) with that of machine comprehension of text (identifying the answers from those documents).

We use the Deep Learning Virtual Machine (DLVM) as the compute environment, with two NVIDIA Tesla P100 GPUs and the CUDA and cuDNN libraries. The DLVM is a specially configured variant of the Data Science Virtual Machine (DSVM) that makes it more straightforward to use GPU-based VM instances for training deep learning models. It is supported on Windows 2016 and the Ubuntu Data Science Virtual Machine. It shares the same core VM images – and hence the same rich toolset – as the DSVM, but is configured to make deep learning easier. All the experiments were run on a Linux DLVM with two NVIDIA Tesla P100 GPUs. We use the PyTorch backend to build the models. We pip installed all the dependencies in the DLVM environment.

We forked the Facebook Research GitHub repository for our blog work and trained the DrQA model on the SQuAD dataset. We then use the pretrained MRS model for evaluating our large Gutenberg corpuses using transfer learning techniques.

Children's Gutenberg Corpus

We created a Gutenberg corpus consisting of about 36,000 English books. We then created a subset of Gutenberg corpus consisting of 528 children’s books.

Pre-processing the children’s Gutenberg dataset:

  • Download books with filter (e.g. children, fairy tales etc.).
  • Clean the downloaded books.
  • Extract text data from book content.

How to Create a Custom Corpus That Works with DrQA

We follow the instructions available here to create a compatible document retriever for the Gutenberg Children’s books.

To execute the DrQA model:

  • Insert a query in the UI and click the search button.
  • This calls the demo server (flask server running in the backend).
  • The demo code initiates the DrQA pipeline.
  • DrQA pipeline components are explained here.
  • The question is tokenized.
  • Based on the tokenized question, the document retriever uses bigram hashing + TF-IDF matching to find the most relevant documents.
  • We retrieve the top 3 matching documents.
  • The Document Reader (a multilayer RNN) is then initiated to retrieve the answers from the document.
  • We use a pretrained model on the SQUAD Dataset.
  • We do transfer learning on the Children's Gutenberg dataset. You can download the pre-processed Gutenberg Children’s Book corpus for the DrQA model here.
  • The model embedding layer is initiated by pretrained Stanford CoreNLP embedding vector.
  • The model returns the most probable answer span from each of the top 3 documents.
  • We can speed up the model performance significantly through data-parallel inference, using this model on multiple GPUs.

The pipeline returns the most probable answer list from the top three most matched documents.

We then run the interactive pipeline using this trained DrQA model to test the Gutenberg Children’s Book Corpus.

For environment setup, please follow ReadMe.md in GitHub to download the code and install dependencies. For all code and related details, please refer to our GitHub link here.

MRS Using DLVM

Please follow similar steps listed in this notebook to test the DrQA model on DLVM.

Learnings from Our Evaluation Work

In this post, we investigated the performance of the MRS model on our own custom dataset. We tested the performance of the transfer learning approach for creating a QA system for around 528 children’s books from the Project Gutenberg Corpus using the pretrained DrQA model. Our evaluation results are captured in the exhibits below and in the explanation that follows. Note that these results are particular to our evaluation scenario – results will vary for other documents or scenarios.

In the above examples, we tried questions beginning with What, How, Who, Where and Why – and there’s an important aspect about MRC that is worth noting, namely:

  • MRC is best suited for “factoid” questions. Factoid questions are about providing concise facts. E.g. “Who is the headmaster of Hogwarts?” or “What is the population of Mars?” Thus, for the What, Who, and Where types of questions above, MRC works well.
  • For non-factoid questions (e.g. Why), MRC does not do a very good job.

The green box represents the correct answer for each question. As we see here, for factoid questions, the answers chosen by the MRC model are in line with the correct answer. In the case of the non-factoid “Why” question, however, the correct answer is the third one, and it’s the only one that makes any sense.

Overall, our evaluation scenario shows that for generic large document corpuses, the DrQA model does a good job of answering factoid questions.
Anusua
@anurive  |  Email Anusua at antriv@microsoft.com for questions pertaining to this post.

Azure Log Analytics for Windows Telemetry data


 

 

I blogged about this last year here

 

 

As a best practice, note that the Upgrade Analytics script checks far more than just injecting the workspace key and telemetry value.

This could also be managed as an SCCM compliance setting.

 

Don't forget to assess if you want IE data collection!

 

 

Simple method to update machines to send Windows telemetry data:

 

 

PowerShell script

From PowerShell as Administrator

$registryPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies"
$Name = "DataCollection"
$Name2 = "AllowTelemetry"
$CommercialID = "00000000-0000-0000-0000-000000000000"  # Replace with your workspace Commercial ID
$value = "2"  # Telemetry level: values from 0-3 accepted
$vIEDataOptInPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\DataCollection"
$IEOptInLevel = "2"  # IE data collection level: values from 0-3 accepted

If ( Test-Path "$registryPath\$Name" )
{
    Write-Host -f Green "Registry keys already exist"
}
Else
{
    # Create the DataCollection key, then the CommercialId, AllowTelemetry and IEDataOptIn values
    New-Item -Path $registryPath -Name $Name -Force | Out-Null
    New-ItemProperty -Path "$registryPath\$Name" -Name "CommercialId" -Value $CommercialID -PropertyType String -Force | Out-Null
    New-ItemProperty -Path "$registryPath\$Name" -Name $Name2 -Value $value -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $vIEDataOptInPath -Name "IEDataOptIn" -PropertyType DWord -Value $IEOptInLevel -Force | Out-Null
    Write-Host -f Green "Registry keys added for Telemetry"
}
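
To confirm the values were written as expected, you could read them back; a quick check along these lines:

# Read back the telemetry values created by the script above
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\DataCollection" |
    Select-Object CommercialId, AllowTelemetry, IEDataOptIn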

 

 

 

 

References

Configure telemetry

Get Started link

Win 7,8 Opt in link


Leap Seconds for the IT Pro: What you need to know


Hi Everybody – Program Manager Dan Cuomo here to tell you, the IT Pro, everything you need to know about Leap Seconds on Windows. If you saw our recent blog series on the Top 10 Networking Features, you may have already noticed an announcement about Leap Second support included in Windows Server 2019 and Windows 10 October 2018 Update.

Note: If you’re an Application Developer, stay tuned for our future post Leap Seconds for the Application Developer: What you need to know

For most IT Professionals, leap seconds may not be a concern. However, if you have time-sensitive applications or are in a regulated industry requiring high accuracy time, a measly little second could hurl you into an auditing and compliance frenzy. Whether you call it a v-team or a tiger team, nobody wants to write those status reports, After Action Reports, or Root Cause Analyses (or whatever your organization calls them) to explain just what went wrong. A leap second comes and goes quickly, but the effects could last some time.

So in this article, we’ll attempt to explain everything the IT Pro needs to know so you can explain, test, and deploy Windows Server 2019 and Windows 10 October 2018 Update with confidence for your time-sensitive scenarios.

Note: Leap second support is included only in Windows Server 2019, Windows 10 October 2018 Update, and later releases, so this content does not apply to earlier operating systems.

What are Leap Seconds

Let's first understand what a leap second is. A leap second is an occasional one-second adjustment to UTC. As the earth's rotation slows (due to tidal forces, earthquakes, hurricanes, etc.), UTC diverges from mean solar time, also called astronomical time.  Leap seconds are added to keep the difference between UTC and astronomical time to less than 0.9 seconds. Don't worry, we don't need to start colonizing new planets (yet 😉).  But still, wish we found out how that jump across galaxies worked out for the Stargate Universe crew…

An organization called the International Earth Rotation and Reference Systems Service (IERS) oversees the announcement of Leap Seconds. They release several bulletins; Bulletin C is released every 6 months to confirm whether there will be a leap second or not.

Note: At the time leap seconds were introduced in 1972, a necessary correction of ten seconds was made to UTC. There have since been 27 leap seconds added to UTC for a total of 37 one-second corrections. Leap seconds are added, on average, every 1.5 yrs (NIST FAQ).

Leap Seconds on Windows Overview

Now let’s talk about some of the high-level principles needed to understand Leap Seconds on Windows.

UTC-Compliant Leap Seconds

If you are in a regulated industry, you must not only implement leap seconds, but you must do so in a UTC-compliant manner. This means that the leap second must be added to the last minute of the UTC day. During this minute, the clock counts from 0 to 60 seconds (for a total of 61 seconds); for example, a UTC-compliant clock displays 23:59:59, then 23:59:60, and then rolls over to 00:00:00.

Windows Server 2019 and Windows 10 October 2018 Update implement the leap second in a UTC-compliant manner, enabling customers to meet the requirements of regulated industries.

Industry experts have gone on record to denounce leap second “smearing” – an alternative approach that carves the leap second into smaller units and inserts them throughout the day. Leap second smearing is not UTC-compliant and as such, Windows does NOT implement leap second smearing.

Built for compatibility

The majority of Windows users will not need leap second information; either their workloads do not require that level of accuracy, or they are not under industry regulations. If this description sounds like you, feel free to tweet a link to this blog, might I recommend...

...And feel free to stop reading. While the system (kernel) is tracking leap seconds, they will not affect your everyday life, because applications are never notified that a leap second is occurring unless an application has specifically “opted in.”  Applications are, by default, none the wiser unless action is taken.

This is important both so that customers with heterogeneous operating system environments can interoperate seamlessly, as they always have prior to this release, and for application compatibility. Many applications expect seconds to be between 0 and 59. If the application isn't expecting a 60, apps could fail; cats and dogs living together, mass hysteria!

Previous Leap Seconds

For these same reasons, we do not track prior leap seconds. Our goal is to enable customers who need high accuracy time moving forward. Regulations requiring high accuracy, UTC-compliant time did not come into effect until relatively recently, so prior leap seconds do not need to be tracked. For reference, the last leap second prior to the release of leap-second aware Windows occurred on December 31st, 2016; at the time of writing, there has not been a leap second since that date. Leap seconds after this date will be tracked by Windows Server 2019 and Windows 10 October 2018 Update.

What happened to previous leap seconds

There’s a logical question of how previous operating systems treated leap seconds. If previous operating systems didn’t track leap seconds, are they 37 seconds off from UTC?

No, although previous operating systems did not track leap seconds, when they synchronized their time at the next interval, they recognized that they were one-second behind and time was moved forward to match the current UTC time.

A Tale of Two Timelines

"It was the best of times, it was the worst of times…It was the epoch of belief, it was the epoch of incredulity." Since leap seconds are new in Windows 10 October 2018 Update and Windows Server 2019, prior operating systems will not know about this augmented time scale. As a result, the timelines under the hood of Windows will begin to diverge between these two operating systems as leap seconds occur.

So when the next leap second rolls in, we’ll begin an alternate timeline for Windows 😊

Unless your application is leap second aware, it is unlikely that you will notice this delta. However, if you view an event log from a leap-second aware system on a machine that is not aware of the leap seconds, the time displayed for the event will be off by the number of leap seconds known by the system (mmc.exe is opted in by default).

Revert to Prior OS Behavior

As a reminder, applications must opt in to receiving leap second notifications, so leap seconds will not affect any applications by default, and it is likely unnecessary to modify the default behavior.

However, if you have a heterogeneous, time-sensitive environment, you can revert to the prior operating system behavior and disable leap seconds across the board by adding the following registry value:

HKLM:\SYSTEM\CurrentControlSet\Control\LeapSecondInformation

Type: REG_DWORD

Name: Enabled

Value: 0 - Disables the system-wide setting

Value: 1 - Enables the system-wide setting

Next, restart your system.
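
If you would rather script this change than edit the registry by hand, a minimal sketch (run from an elevated PowerShell prompt, using the registry value described above) might look like this:

# Disable system-wide leap second handling (set -Value 1 to re-enable); a restart is required afterwards
$path = "HKLM:\SYSTEM\CurrentControlSet\Control\LeapSecondInformation"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name "Enabled" -PropertyType DWord -Value 0 -Force | Out-Null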

How Leap Seconds Propagate

Every four years we have a leap year; this is known and predictable. Leap seconds, however, are different in that they are not on a regular cadence. Instead, leap seconds are announced by IERS only 6 months in advance. From there, GPS distributes the leap second notification to time servers and ultimately to Windows systems. So let's talk about some of the mechanisms in place to make sure that you get the leap second notification.

Time Server Distribution

The Windows Time service includes a server provider that allows a Windows system to operate as a time server. For example, when you add a domain controller to your forest this domain controller can serve time to other clients on the network through this mechanism. This is not the only method of installing a time server; you can check to see if your system is operating as a time server by using the command (Enabled: 1):

w32tm /query /configuration

The Windows Time server distributes the leap second notification to time clients. As GPS distributes time (and the leap second notification) to the Windows Time server, it will pass that notification onto clients; to be clear, your system doesn’t need to be a domain controller to do this.
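
If you want to check this from PowerShell without reading through the whole configuration dump, one option (a small convenience sketch, not an official tool) is to filter the output of the same command for the NtpServer provider section:

# Show the NtpServer provider section of the Windows Time configuration; "Enabled: 1" means this machine serves time
w32tm /query /configuration | Select-String -Pattern "NtpServer" -Context 0,5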

Windows Update

But what if your system is offline when the notification comes? Or, more likely, what if you re-image your system? You'll want to make sure that new systems know about the upcoming leap second, and if the new system is created after a leap second, you'll want to make sure that this system is synchronized with the other machines on the network.

To make sure this is possible, we’ll distribute leap second notifications through Windows Update as well. This provides a simple mechanism for reporting (nodes that have the latest updates have the leap second information as well).

Best Practice: The simplest and most effective manner for distributing and verifying leap second information across your environment is through Windows Update.  If you're on the latest updates, you'll have the notifications!

Hyper-V VMIC

If you have Hyper-V virtual machines, the Hyper-V virtual machine integration components will also provide leap second notifications to those virtual machines.  If the virtual machine is not running one of the leap-second aware operating systems (or later), this will have no effect.

Verify that your system got the leap second

In addition to verifying updates across your system, you can also use the following command to view the leap seconds known by a specific system. In the screenshot below, a positive (+) leap second will be inserted after 23:59:59 on 6/30/2019

w32tm /leapseconds /getstatus /verbose
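
If you need to check several machines at once, one approach (a sketch assuming PowerShell remoting is enabled; the computer names are placeholders) is to wrap the same command in Invoke-Command:

# Query leap second status on multiple machines; replace the names with your own servers
$servers = "server01", "server02"
Invoke-Command -ComputerName $servers -ScriptBlock {
    w32tm /leapseconds /getstatus /verbose
}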

 

Testing Applications

Applications must be written to consume and process leap seconds. As you've read a number of times already, we assume that applications are not leap-second aware. You can search every application's documentation to find out if it's leap second aware, or, if you're an IT Pro in one of these regulated industries, we anticipate that you will want to test and verify your applications or system images against leap seconds.

If you want to manually test and opt-in an application, identify the process name, for example:

Next open the registry editor and navigate to

HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

Add a key with the same name as the process you want to opt in to leap seconds. In this example, we've opted in the winword.exe process by creating a registry key (folder icon).

Next create a REG_DWORD named GlobalFlag2 with a value of 1.

Now restart the process and insert leap seconds as before then test critical application functionality.
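
The same opt-in can be scripted. A minimal sketch, using the winword.exe example above (substitute your own process name):

# Opt a single process (winword.exe) in to leap second notifications, then restart the process
$ifeo = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"
New-Item -Path $ifeo -Name "winword.exe" -Force | Out-Null
New-ItemProperty -Path "$ifeo\winword.exe" -Name "GlobalFlag2" -PropertyType DWord -Value 1 -Force | Out-Null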

If your application doesn’t support leap seconds, please contact the application owner and tell them to check our future post, Leap Seconds for the Application Developer: What you need to know.

Testing Systems

Instead of testing an individual application one-by-one, you may want to test a holistic system. To do this, open the registry editor and navigate to:

HKLM:\SYSTEM\ControlSet001\Control\Session Manager

Next create a REG_DWORD named GlobalFlag2 with a value of 1 as shown here.

Restart the system then insert leap seconds as before and test critical application functionality. Note any application or system events in the event log.
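
As a scripted alternative, a minimal sketch of the same system-wide test flag (a restart is still required):

# Opt the whole system in to leap second notifications for testing purposes
New-ItemProperty -Path "HKLM:\SYSTEM\ControlSet001\Control\Session Manager" -Name "GlobalFlag2" -PropertyType DWord -Value 1 -Force | Out-Null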

Summary

Most IT Professionals may not need to be concerned about leap seconds. However, if you're a customer in a regulated industry requiring high accuracy time, or you have time-sensitive applications, you need to ensure your systems apply and maintain time accurately through a leap second. Windows Server 2019 and Windows 10 October 2018 Update bring support for true, UTC-compliant leap seconds. To make sure that these are properly implemented on your systems, you should verify your patch management strategy, application compatibility, and more.

Please give this a shot, and of course let us know how it went!

Dan "my leap seconds land on 60" Cuomo

Daylight saving time in Brazil starts on November 4, 2018 (list of KBs)


The federal government has decided to keep the start of daylight saving time on November 4, when clocks will be moved forward by one hour in several states of the country. Based on this announcement, the daylight saving time configuration is defined as:

Daylight saving time starts: November 4, 2018 (first Sunday of November); postponed by two weeks, from October 21 to November 4, by the decree below.
Daylight saving time ends: February 17, 2019 (third Sunday of February)

Official decree with the daylight saving time dates:
https://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2008/Decreto/D6558.htm

Therefore, the updates related to the start of daylight saving time remain the same ones that have been available since April.

In April 2018, Microsoft released updates so that the various supported operating system versions would have this change implemented.

Time zone and DST changes in Windows for Brazil, Morocco, and São Tomé and Príncipe
https://support.microsoft.com/en-us/help/4093753/time-zone-and-dst-changes-in-windows-for-brazil-morocco-and-sao-tome-a

The table below lists the first Monthly Quality Rollups that contain the Brazil daylight saving time update. Because these Monthly Quality Rollups are cumulative, any rollup more recent than the ones below also includes the fix:

OS                              Release Date   Update Rollup KB
1809 - RS5                      RS5 RTM        RS5 RTM
1803 - RS4                      2018.06 B      KB4284835
1709 - RS3                      2018.04 B      KB4093112
1703 - RS2                      2018.04 B      KB4093107
1607 - RS1                      2018.04 B      KB4093119
1511 - TH2                      2018.04 B      KB4093109
Windows 2016 RTM                2018.04 B      KB4093111
Windows Server 2012 R2 / 8.1    2018.04 C      KB4093121
Windows Server 2012             2018.04 C      KB4093116
Windows Server 2008 SP2         N/A            N/A
Windows Server 2008 R2 / 7      2018.04 C      KB4093113
  • Remember that Security-only KBs do not contain the daylight saving time changes.
  • Windows Server 2008 SP2 does not have Monthly Quality Rollups, so the only option is to install the standalone KBs below.

In addition to the Monthly Quality Rollups described in the table above, Windows Server 2008 through 2012 R2 and Windows 7 through 8.1 clients can instead install the standalone KBs below:

KB4093753 (released April 16, 2018)
KB4130978 (released May 17, 2018; supersedes KB 4093753)
KB4339284 (released July 24, 2018; supersedes KB 4130978)

The changes from KB4093753 have already been included in the most recent Monthly Quality Rollups, so it is expected that you will receive a message saying the standalone KB is not applicable if the machine already has the latest Monthly Quality Rollups.
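
To check whether one of the standalone KBs is present, a quick sketch with Get-HotFix (KB numbers taken from the list above; a machine that is current on Monthly Quality Rollups may legitimately show none of them):

# Check for the standalone Brazil DST updates on the local machine
Get-HotFix -Id "KB4093753", "KB4130978", "KB4339284" -ErrorAction SilentlyContinue |
    Select-Object HotFixID, InstalledOn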

A simple way to check whether a machine already has the fix is the w32tm /tz command. The output shows the difference between the old value (M:10 D:3) and the new one (M:11 D:1):

Before the KB is installed:
C:\>w32tm /tz
Time zone: Current:TIME_ZONE_ID_STANDARD Bias: 180min (UTC=LocalTime+Bias)
[Standard Name:"E. South America Standard Time" Bias:0min Date:(M:2 D:3 DoW:6)]
[Daylight Name:"E. South America Daylight Time" Bias:-60min Date:(M:10 D:3 DoW:6)]

After the KB is installed:
C:\>w32tm /tz
Time zone: Current:TIME_ZONE_ID_STANDARD Bias: 180min (UTC=LocalTime+Bias)
[Standard Name:"E. South America Standard Time" Bias:0min Date:(M:2 D:3 DoW:6)]
[Daylight Name:"E. South America Daylight Time" Bias:-60min Date:(M:11 D:1 DoW:6)]

 

Additional information:
The critical date in this daylight saving time change is October 21, 2018. Customers who do not install any of the KBs described above will have their server and workstation clocks incorrectly moved forward by one hour overnight from Saturday to Sunday, October 21, 2018, because that was the date daylight saving time was scheduled to start before the December 2017 decree.

https://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2008/Decreto/D6558.htm

 

Rapidly grow your Intelligent Communications knowledge base


Expand your Intelligent Communications technical knowledge. With these new webinars, available to you as a Partner Network member at no cost, you will increase your technical familiarity with Intelligent Communications, giving you ample ability to hold valuable discussions with your customers.

What’s New in Office 365 Intelligent Communications

  • Microsoft Intelligent Communications continues to evolve rapidly. This technical webinar series will help MPN partners stay up to date on the latest developments and service capabilities, feature updates, and releases. The information presented will be mostly technical in nature, but occasionally our Microsoft technical experts will provide marketing and business news pertaining to building a Cloud practice. During this webcast, you may also ask questions as they pertain to your practice.

Technical Deep Dive on Microsoft Teams Direct Routing

  • Discover how users can easily be transitioned to Calling in Microsoft Teams, and learn how, using Direct Routing, call center agents can continue to use their applications while other users are transitioned. Direct Routing is a capability of Phone System in Office 365 that helps customers connect their SIP trunks to Microsoft Teams. In the simplest deployment model, customers start with SIP trunks from their telecommunications provider. Next, customers use and configure a supported Session Border Controller (SBC) from one of our certified partners. Finally, they connect their SBC to Microsoft Teams and Phone System. By integrating with an existing PBX, pilot users can be moved to Calling in Teams while other users remain on their legacy PBX; the call traffic between these users during the transition stays within the organization.

Adopting Microsoft Teamwork Solutions: Teams Calling and Meetings

  • In this webinar, you'll learn about the key features and functionality of Microsoft Teams, helping you position Microsoft Teams with your customers. After a brief introduction, our Microsoft Partner Technical Consultants will cover a broad range of topics, from licensing and Office 365 integration to implementation phases and specific customer scenarios. We'll walk you through product demos and share the latest Microsoft Teams roadmap as this service continues to expand and grow.

Explore the full suite of technical webinars and consultations available for the Intelligent Communications technical journey at aka.ms/IntelligentCommsTechJourney.


Ignite 2018 – Windows Server 2019 Azure Integration Session Recordings


One of the key hybrid messages with Windows Server 2019 is extending your current capabilities with Azure, even when workload migration to the cloud isn't necessarily the highest priority for your organisation. In the videos below you will see how you can leverage cloud-based security and Azure backup and recovery capabilities, and extend your network into Azure.

Securing your hybrid cloud environments with Azure ATP and AAD Identity Protection


Protect users from identity threats with Azure Advanced Threat Protection and Azure AD Identity Protection. Learn about the top types of attacks against identities and users and how Microsoft 365 can help secure your environment.

Microsoft security: How the cloud helps us all be more secure

The IT environment you are responsible for is changing: cloud apps, hybrid infrastructure, mobile work, and digital connections with customers and partners to name just a few. Meanwhile, cyber-attacks are more frequent and damaging. The cloud is your secret weapon in this new security battlefield. See how unique intelligence and new innovations from Microsoft can help you be more secure across your entire digital estate.

Deploying Azure File Sync


With Azure Files and Azure File Sync, centralizing file shares in Azure not only is possible, it’s practical. But how do you actually get started with your existing file server or SAN? Never fear! We show you just how easy it is to get started, including leveraging still-relevant existing file servers and migrating off of ancient SANs and NAS devices.

Backup your data with Microsoft Azure Backup

Organizational data is susceptible to corruption, accidental deletion and ransomware. In this session, you will discover how Azure can securely back up and restore your data across multiple workloads running in the cloud as well as on-premises. Come join this jam-packed demo session and witness how Azure Backup significantly reduces complexity and cost through a zero-infrastructure solution for backing up resources. You will learn about Azure Backup's native support for Windows Admin Center, Azure Files, Azure VMs, and SQL running in Azure VMs.

Implement Cloud Backup and Disaster Recovery at Scale in Azure


Organizations increasingly need to scale their IT operations to protect their data and bolster their disaster recovery (DR) strategy. Join this session to learn how Azure Backup and Site Recovery (ASR) help solve typical problems of managing backups, as well as recovering applications at scale. Learn about capabilities like PowerShell/CLI automation, policy management, RBAC, template-based deployments, monitoring, and reporting that are critical to managing large scale deployments in enterprise environments. We will also present practical examples of real-world deployments in this session.

Establishing hybrid connectivity with Windows Server 2019 and Microsoft Azure

Windows Server 2019 has the most advanced networking capabilities ever shipped in a Windows operating system. See how we're using Windows Admin Center to make Windows Server 2019 the easiest OS to connect to your Azure virtual network. In this session we also cover advancements in the data plane, transports, security (802.1x), container networking, and time accuracy.

 

 
