
Deploying a Virtual Machine Scale Set Using a Custom Image


Hello, this is Yoneda from the Azure support team.
This post walks through how to deploy a virtual machine scale set (VMSS) using a custom OS image that you created yourself.

Note: This procedure targets the Azure Resource Manager (ARM / V2) deployment model.
Note: The information in this post (including attachments and linked content) is current as of the date of writing and is subject to change without notice.

(Background) What is a virtual machine scale set?


If you are reading this article you presumably already have at least some interest in VMSS, so few readers will be asking "what is a VMSS?", but let's recap just in case.

A VMSS is a Compute resource for deploying multiple identical virtual machines (VMs) at the same time. When you create a VMSS resource, VM instances are deployed underneath it. The technology lets you increase (scale out) or decrease (scale in) the number of these VM instances as needed. At present, a single VMSS can contain up to 1,000 VM instances.
Typically, this is combined with a mechanism called autoscale, which adjusts the instance count automatically whenever preconfigured conditions are met.
For example, if you run a website, you might automatically add xx instances when site traffic pushes the CPU utilization of each VM instance above a threshold, or schedule a capacity increase in advance for a date and time when you expect a spike in site traffic. A VMSS can also serve as an HPC platform for running large-scale computations.

Rather than being used on its own, a VMSS is in most scenarios combined with other independent Azure features (resources), so it may feel unapproachable if you have never touched it.
If you are just starting to evaluate VMSS, you can now deploy a basic configuration from the Azure portal, so I recommend creating one as a trial.

How to create a Virtual Machine Scale Set using the Azure portal
https://docs.microsoft.com/ja-jp/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-portal-create/

When you create one this way, you will see that not only the VMSS but also various other resources, such as a load balancer, a virtual network, and a public IP address, are created at the same time. These resources each work together with the virtual machine scale set as follows.

Purpose


If you have ever deployed a VMSS from the Azure portal, you will remember choosing an OS image such as Windows Server or a Linux distribution at deployment time. The OS images selectable in the Azure portal are images published in the Azure Marketplace.
When a VMSS scales out and instances are added, they are created from the image associated with that VMSS.
This means that if you want to use your own applications or middleware with a VMSS, you must specify a customized image when you deploy the VMSS. When customizing the image, it is important to automate, through OS startup scripts, the processing that completes initial setup and joins the instance to the existing application whenever a VM is deployed from that image. For a web application, for example, this means completing a setup that connects the VM to the backend database as soon as it is created, so that it can begin serving immediately.
I believe this is a point that needs attention in almost every scenario where a VMSS runs a production system.
This article shows how to deploy a VMSS by specifying an image you created yourself.

Procedure


This article walks through deploying a VMSS from your own image in the following 3 steps:

  1. Generalize the OS inside a virtual machine to create a master image
  2. Register the generalized image as an Azure "image" resource
  3. Deploy a VMSS from the registered generalized image

...or so I thought, but steps 1 and 2 are already covered on our blog.

Deploying multiple virtual machines from a Managed Disks "image" resource
https://blogs.technet.microsoft.com/jpaztech/2017/05/10/deployvmsfrommanagedimage/

The article above creates regular virtual machines from a custom image, but the way the image itself is created is the same for VMs and VMSS.
For step 1, see section "1. Generalize inside the virtual machine" in that article; for step 2, see "2. Register the generalized master image as an 'image' resource."
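For reference, steps 1 and 2 from that article can be sketched with the AzureRM PowerShell module roughly as follows. This is only a hedged outline; the resource group, VM, image, and location names are purely illustrative:

```powershell
# Inside the VM, run: sysprep /generalize /oobe /shutdown
# Then, from an AzureRM session, deallocate the VM and mark it as generalized:
Stop-AzureRmVM -ResourceGroupName "myRG" -Name "myMasterVM" -Force
Set-AzureRmVm -ResourceGroupName "myRG" -Name "myMasterVM" -Generalized

# Register the generalized VM as an "image" resource (step 2):
$vm = Get-AzureRmVM -ResourceGroupName "myRG" -Name "myMasterVM"
$imageConfig = New-AzureRmImageConfig -Location "japaneast" -SourceVirtualMachineId $vm.Id
New-AzureRmImage -Image $imageConfig -ImageName "myCustomImage" -ResourceGroupName "myRG"
```

The image name chosen here is the value you would later supply for the sourceImageName template parameter.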

3. Deploy a VMSS from the registered generalized image
Deploy the VMSS based on the registered "image" resource.
Unfortunately, this operation cannot currently be performed through the Azure portal UI.
It requires creating a template file known as an Azure Resource Manager (ARM) template, or working from the command line.
This post covers the template-based procedure.

■ Steps

  1. Go to the Azure portal (https://portal.azure.com)
  2. Click [+ New]
  3. Search for "Template deployment" in the search box and select it
  4. Click [Create]
  5. When [Custom deployment] appears, select [Build your own template in the editor]
  6. In [Edit template], enter the template file (such as the one linked) and click [Save]
  7. Fill in the parameters that appear
    - vmSSName: the name of the VMSS
    - instanceCount: the number of instances created initially
    - vmSize: the instance size for the VMSS
    - adminUsername: the initial user account name
    - adminPassword: the password for the initial user account
    - sourceImageName: the name of the "image" resource registered in step 2
    - frontEndLBPort: the load balancer front-end port used to reach the instances behind the VMSS
    - backEndLBPort: the back-end port of the VMSS instances behind the load balancer
  8. Click [Purchase] to start the deployment
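If you prefer the command line to the portal, the same template can also be deployed with the AzureRM PowerShell module; a minimal sketch (the file names and resource group are illustrative):

```powershell
# Deploy the ARM template and its parameter values into an existing resource group
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "myRG" `
    -TemplateFile ".\vmss-custom-image.json" `
    -TemplateParameterFile ".\vmss-custom-image.parameters.json"
```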

■ Template walkthrough

The template used above is written in the Azure Resource Manager template format.

Understand the structure and syntax of Azure Resource Manager templates
https://docs.microsoft.com/ja-jp/azure/azure-resource-manager/resource-group-authoring-templates

Each resource being created is defined in the resources section, and you can identify each defined resource by looking at its type.

Microsoft.Network/virtualNetworks … virtual network
Microsoft.Network/publicIPAddresses … public IP address
Microsoft.Compute/virtualMachineScaleSets … virtual machine scale set
Microsoft.Insights/autoscaleSettings … autoscale settings

In particular, this time we specified the image resource created beforehand; this is achieved by specifying the resource ID of the image resource in the following imageReference.

{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "sku": {
    "name": "[parameters('vmSize')]",
    "tier": "Standard",
    "capacity": "[parameters('instanceCount')]"
  },
  "name": "[parameters('vmSSName')]",
  "apiVersion": "[variables('computeApiVersion')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "overprovision": "true",
    "upgradePolicy": {
      "mode": "Manual"
    },
    "virtualMachineProfile": {
      "storageProfile": {
        "imageReference": {
          "id": "[resourceId('Microsoft.Compute/images', parameters('sourceImageName'))]"
        }
      },
      "osProfile": {
        "computerNamePrefix": "[parameters('vmSSName')]",
        "adminUsername": "[parameters('adminUsername')]",
        "adminPassword": "[parameters('adminPassword')]"
      },
      "networkProfile": {
        "networkInterfaceConfigurations": [
          {
            "name": "nic1",
            "properties": {
              "primary": "true",
              "ipConfigurations": [
                {
                  "name": "ip1",
                  "properties": {
                    "subnet": {
                      "id": "[variables('subnetRef')]"
                    },
                    "loadBalancerBackendAddressPools": [
                      {
                        "id": "[variables('lbBEAddressPoolID')]"
                      }
                    ]
                  }
                }
              ]
            }
          }
        ]
      }
    }
  },
  "dependsOn": [
    "[concat('Microsoft.Network/loadBalancers/',variables('lbName'))]",
    "[concat('Microsoft.Network/virtualNetworks/',variables('virtualNetworkName'))]"
  ]
},

A template is essentially a blueprint for the resources being deployed, so if you want to change the configuration you will need to customize it. The references below cover the template syntax itself and the parameters that can be specified when deploying each resource type; I recommend starting with documents like these to get comfortable with how templates are written.

Create and deploy your first Azure Resource Manager template
https://docs.microsoft.com/ja-jp/azure/azure-resource-manager/resource-manager-create-first-template

Microsoft.Compute/virtualMachineScaleSets template reference
https://docs.microsoft.com/en-us/azure/templates/microsoft.compute/virtualmachinescalesets

In closing


A VMSS created from a custom image can later have its associated image changed.
This means you can build a new version of the OS image with updated applications or middleware inside it, apply it to the existing VMSS, and roll out updates to your system that way.
This post covered deploying a VMSS from a custom image; in a future post I will cover how to update the image of an existing VMSS.


Bulk Assign / remove Office 365 Licenses – AzureAD V2 module based PowerShell script with UI


Managing licenses with Office 365 has become easier with Azure AD group-based licensing and the multi-select option in the Office portal.

But there are many scenarios where administrators opt for direct license assignment and removal for a custom list of users. The PowerShell scripts my customer had for these purposes used complex parameters and didn't support all the scenarios they needed. Also, any mistake in passing parameters would result in the wrong licenses being assigned.

I created this UI tool to help the customer manage any kind of direct license management against a custom list of users, either from an input file or by running a query against Azure AD (Get-AzureADUser). It is UI based and flexible to use. The script uses the new AzureAD V2 module, which implements the Graph API in PowerShell and provides access to newer functionality.

The script can be downloaded from https://aka.ms/o365licensemgmtscript

 

  • This script launches Windows Presentation framework based UI form.
  • Provides users with options either to run Online Query or process users from a CSV list.
  • Using a License picker window, users can Add and/or Remove licenses.
  • Apply it on all users or selected users from the list, all at once.
  • This script will only append to or remove the existing user licenses and plans.
  • At present, this script requires no input parameters, but default switches like -Verbose can be added to view progress in the PowerShell window.
  • It creates a log by default in the script's directory, which includes the license and runtime details.

Note: This script will only append to or remove the existing user licenses and plans. It doesn’t replace any Licenses.
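For reference, a direct assignment with the AzureAD V2 module (the kind of call a script like this wraps in a UI) looks roughly like the following sketch; the SKU part number and user principal name are illustrative:

```powershell
Connect-AzureAD

# Look up the SKU to assign (e.g. Office 365 E3 is published as ENTERPRISEPACK)
$sku = Get-AzureADSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }

# Build the license assignment objects used by the Graph-based cmdlets
$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = $sku.SkuId
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license

# Appends the license to the user's existing assignments
Set-AzureADUserLicense -ObjectId "user@contoso.com" -AssignedLicenses $licenses
```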

I hope you find this script useful when dealing with direct assignment / removal of Office 365 licenses.

Please let me know if you find it useful or for any suggestions you may have.

Thank you

What’s New in SDN for Windows Server 1709


Windows Server version 1709 released today, October 17, 2017 - and with this release there is a new feature for Software Defined Networking (SDN) named Virtual Network Encryption.

Virtual Network Encryption enables encryption of virtual network traffic between virtual machines that communicate with each other within subnets marked as "Encryption Enabled."

This feature utilizes Datagram Transport Layer Security (DTLS) on the virtual subnet to encrypt the packets. DTLS provides protection against eavesdropping, tampering and forgery by anyone with access to the physical network.

For more information, see the following topics in the Windows Server technical library.

RAS Gateway GRE Tunnel Throughput and Performance


With today's release of Windows Server, version 1709, comes a new topic about Remote Access Server (RAS) Gateway performance when configured with Generic Routing Encapsulation (GRE). The performance tests discussed in this new topic were completed on Hyper-V hosts and Virtual Machines (VMs) running Windows Server 1709 in a non-Software Defined Networking (SDN) based test environment.

Note: The Windows Server version number 1709 means year 2017, month September, or 09.

RAS Gateway is a software router and gateway that you can use in either single tenant mode or multitenant mode. This topic discusses a single tenant mode, high availability configuration with Failover Clustering. The GRE tunnel performance statistics that are presented in this topic are valid for RAS Gateway in both single tenant and multitenant modes.

For more information, see RAS Gateway GRE Tunnel Throughput and Performance.

Always On VPN and DirectAccess Features Comparison


Windows Server and Windows 10 version 1709 are now released, and with these releases come some great new features for Virtual Private Networks in Windows 10.

With Windows 10 VPN, you can create Always On VPN connections so that remote computers and devices are always connected to your organization network when they are turned on and Internet connected.

You can use this new topic to gain an understanding of how Windows Server and Windows 10 VPN features map to DirectAccess features, including details about the increased flexibility provided by new VPN features.

You can also use this topic for an overview of how Windows 10 VPN provides some advantages over DirectAccess deployments, such as the ability to support mobile device management and Azure Active Directory joined devices.

For more information, see Always On VPN and DirectAccess Features Comparison.

Windows 10 Build 1709 Now Downloadable from VLSC!


Windows 10 Build 1709, known as the Fall Creators Update, is now available for download through Volume Licensing.
You can learn about the new features in this release, and how to install the update via Windows Update, from the links below.

The new ISO now includes all editions, which is a nice improvement.

What's new in the Windows 10 Fall Creators Update
How to get the Windows 10 Fall Creators Update
What's New in Microsoft Edge in the Windows 10 Fall Creators Update

Current Cumulative Updates for Office – Q4 2017


As I mentioned in the Current Cumulative Updates for Office - Q3 2012 post, each quarter I will post information on the latest updates for the Office for Windows and Office for Macintosh products.

The information below covers the most current updates available for the supported Windows and Macintosh versions of Office as of July 3, 2017.

Release Schedule for Non-Security Updates

  • For the MSI version of Office, non-security updates are released in Microsoft Update and the Windows Server Update Service (WSUS) on the first Tuesday of the month.  This will include all updates that have the Critical or Definition classification.  Updates with the Security classification will continue to release on the second Tuesday as usual.
  • For the Click-To-Run (C2R) version of Office, all updates will release on the second Tuesday of the month.

As a reminder on why I'm providing this information and how it should be used, please see my Keeping Up with Office Updates post which discusses the cumulative updates for Office (and Outlook in particular) that companies need to be aware of and push out to their users.

Office for Windows

Office 2016

Office 2013

Office 2010

Office 2007

Office 2003

  • Office 2003 reached End-of-Life Support on April 8, 2014

Click-to-Run:

June 2017 Versions:

  • Office 2013: 15.0.4971.1002
  • Office 2010: 14.0.7189.5001

Note: Each of the KB articles includes the list/links for all the Office products (Word, Excel, Outlook, etc.). Most of you focus on Outlook; those updates are the only ones required and are also provided separately, but I wanted to provide the larger "Office" list in case you want it.

As a reminder, Microsoft Update does *NOT* make the cumulative updates available to users (unlike the Public Updates) for products prior to Office 2013.  These have to be downloaded and either installed independently or deployed using tools such as WSUS, SCCM, etc.

Note: As of January 1, 2015 the Office product group has made a decision to no longer have both what were known as "Public Updates" (those that you could get through Microsoft Update) and "Cumulative Updates" (separate downloads) for the Office 2013 products, which has always been very confusing (and part of why I started posting this information).  Going forward, all updates will be part of the Public Update releases.  However, I will continue to post this bulletin quarterly so that you have this information to properly manage updates for desktops, etc.

Office for Macintosh

Office 2016

Office 2011

  • Current Service Pack Level: Microsoft Office for Mac 2011 SP7 (released Nov 15, 2016)
    • Office for Mac 2011 mainstream support ended on October 10, 2017
  • Latest cumulative Update: September 12, 2017 - 14.7.7 (http://support.microsoft.com/kb/3212225)

Office 2008

  • Current Service Pack Level: Microsoft Office 2008 for Mac SP2 (released October 2009)
    • Office 2008 for Mac support ended on April 9, 2013
  • Last cumulative Update: March 2013 - 12.3.6 (http://support.microsoft.com/kb/2817449)

Note: Each of the KB articles includes the link for downloading the package which updates ALL Office Products...there are not separate updates for each of the various components of Office as there is with the Windows releases.

Why did I rename my Administrator account?


The question of whether you should rename the built-in administrator account in Active Directory has surprisingly resurfaced. I recently had chats about it with customers' security teams, and it is a subject that is also coming back internally at Microsoft. It has always been a battle royale between the "obscurity is not security" argument on one side and "ROI of implementation wins" on the other. In a nutshell, even if you rename the account, the objectSid of the account remains the same (so it is easily identifiable), but at the same time, renaming it is virtually free, so why not do it and slow down script kiddies.

So here is my take on it: I would do it. And this is in part why:

Yes, an old school KB article.

Some bits before we start...

They might seem unrelated, but you'll see that everything connects at the end like in a Tarantino movie.

  1. The built-in Administrator of the domain is subject to the account policy you defined in your domain but does not get locked out once badPwdCount reaches the threshold of your account lockout policy. Why? Because it is your emergency account for accessing the environment in a situation where all the other accounts have been locked out (say, an annoying password discovery attack, or malware). Well, technically it is possible to make the built-in account behave like the others and lock out, but that's too much of a tangent at this point (see here if you are curious: DOMAIN_LOCKOUT_ADMINS).
  2. The built-in administrator account on Windows has the same name everywhere: Administrator. Unless you have changed it as well.
  3. Although it is not a good practice, some desktop and server administrators are using the built-in Administrator to do local admin stuff on domain joined machines (if you really need to use that account, make sure you at least use LAPS).

Let's connect the dots

Remember back in the day when you had your two Windows XP machines on a LAN at home and you didn't want to be prompted when accessing a network share? Well, you had two ways to do it. You could enable anonymous access... Or you could have the same username/password pair on both machines. In the latter case, following the logic described in KB 103390, the machine first attempts a network logon with the currently used account because an account with the same name exists on the remote machine. Well, kinda the same thing happens in a domain environment, with the difference that if the remote machine is domain joined, this authentication attempt might be forwarded to the domain controller. Let's look at some examples in action:

 

  1. An operator opens a session on a server with the built-in local Administrator account, or opens a command prompt or a PowerShell console with "runas" as the built-in Administrator.
  2. In the previous context, the operator runs a script or a console which tries to read Active Directory.
  3. Each attempt will generate a failed authentication on the domain controller for the built-in domain Administrator account. Here are the events we see in the DC's security event log:
    • Event ID:      4625
      Task Category: Logon

      An account failed to log on.
      Subject:
      Security ID:  NULL SID
      Account Name:  -
      Account Domain:  -
      Logon ID:  0x0
      Logon Type:   3
      Account For Which Logon Failed:
      Security ID:  NULL SID
      Account Name:  administrator
      Account Domain:  VFILE01
      Failure Information:
      Failure Reason:  Unknown user name or bad password.
      Status:   0xC000006D
      Sub Status:  0xC000006A
      Process Information:
      Caller Process ID: 0x0
      Caller Process Name: -
      Network Information:
      Workstation Name: VFILE01
      Source Network Address: 10.10.0.8
      Source Port:  52444
      Detailed Authentication Information:
      Logon Process:  NtLmSsp
      Authentication Package: NTLM
      Transited Services: -
      Package Name (NTLM only): -
      Key Length:  0

    • Event ID:      4776
      Task Category: Credential Validation

      The computer attempted to validate the credentials for an account.
      Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
      Logon Account: administrator
      Source Workstation: VFILE01
      Error Code: 0xC000006A

Another scenario:

  1. An operator opens a session on a server with the built-in local Administrator account, or opens a command prompt or a PowerShell console with "runas" as the built-in Administrator.
  2. The operator tries to access a resource located on VSRV01, in this case using NET USE with the name of the account to use explicitly specified (but not the password).
  3. The server VSRV01 doesn't have an account called administrator (or it has one, but it is disabled).
  4. The authentication is forwarded to the DC and an authentication attempt is performed for the built-in domain Administrator account resulting in an event 4625 being logged on VSRV01 and the following event in the DC's security logs:
    • Event ID:      4776
      Task Category: Credential Validation

      The computer attempted to validate the credentials for an account.
      Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
      Logon Account: Administrator
      Source Workstation: VFILE01
      Error Code: 0xC000006A

Those are just two examples. So you see it coming, these actions will at some point lock out the built-in domain administrator. But wait... I just said it won't lock out. True, but you will still see the event 4740 on the domain controller (and on the ePDC) although the account can still be used. Potentially quite confusing.

Soundproofing the security event logs

The problem with the behavior described above is that it will make you think your built-in domain administrator is under a password discovery type of attack when, in fact, it is just some local user with the same name being used somewhere. Are you "affected" by this? Run the following PowerShell:

$_default_admin = Get-ADUser -Identity "$((Get-ADDomain).DomainSID.Value)-500"
$_metadata_lockoutTime = Get-ADReplicationAttributeMetadata -Object $_default_admin -Properties lockoutTime -Server VDC01
$_metadata_lockoutTime.Version
$_metadata_lockoutTime.LastOriginatingChangeTime

This will tell you how many times the built-in domain admin has been locked out since the creation of the domain as well as the date and time of the last occurrence.
You will be surprised to see that it is often a very high number and that the last time was a few minutes ago.
So are you under attack?

Maybe. Or maybe you're just observing the behavior described here.

In case you don't have PowerShell handy (shame on you!), here is the equivalent in command line with REPADMIN:

FOR /F %S IN ('dsquery * domainroot -scope base -attr objectSid ^| find /v "objectSid"') DO SET SID=%S
FOR /F "TOKENS=4,5 DELIMS= " %A IN ('repadmin /showobjmeta fsmo_pdc: "<SID=%SID%-500>" ^| find "lockoutTime"') DO ECHO %A %B

Yes I know, it is a super fancy way to call REPADMIN...

So if you rename the built-in administrator account to something different from what you use on your workstations and servers, you'll avoid that noise, and the next lockout of the built-in domain administrator might actually be worth investigating.
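If you do decide to rename it, a minimal sketch with the Active Directory PowerShell module (the new name is, of course, illustrative):

```powershell
# Locate the built-in Administrator by its well-known RID (500)
$builtinAdmin = Get-ADUser -Identity "$((Get-ADDomain).DomainSID.Value)-500"

# Change both the logon name and the object name
Set-ADUser -Identity $builtinAdmin -SamAccountName "brkglass-adm"
Rename-ADObject -Identity $builtinAdmin.DistinguishedName -NewName "brkglass-adm"
```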

But really, that's up to you 🙂

 

 


Announcing New Azure VM Sizes for More Cost-Effective Database Workloads


Author: Luis Vargas (Principal Program Manager, SQL Server)

This post is a localization of Announcing new Azure VM sizes for more cost-effective database workloads, published on September 26.

 

Customers tell Microsoft things like: "With SQL Server or Oracle we frequently run short on memory, storage, and I/O bandwidth, but we don't need a high CPU core count." Indeed, many customer database workloads are not particularly CPU-intensive. In other words, what customers want are VM sizes that keep the same memory, storage, and I/O bandwidth but are tuned down to a lower vCPU count in order to hold down software licensing costs.

For this reason, Microsoft announced variants of the latest versions of its most popular VM sizes (DS/ES/GS/MS) that keep the same memory, storage, and I/O bandwidth while reducing the vCPU count to one half or one quarter of the original. These new VM sizes carry the number of active vCPU cores as a suffix, making them easy to distinguish.

For example, while the current Standard_GS5 VM offers 32 vCPU cores, 448 GB of memory, 64 disks (up to 256 TB), and 80,000 IOPS or 2 GB/s of I/O bandwidth, the new Standard_GS5-16 and Standard_GS5-8 sizes provide 16 and 8 active vCPUs respectively, with the same memory, storage, and I/O bandwidth as the Standard_GS5.

Because SQL Server and Oracle license costs are determined by vCPU count, the new VM series lets you hold down those costs while improving VM capability per active (billable) vCPU by 50 to 75%. These new VM sizes are available only on Azure. You can reduce core-based licensing costs and improve CPU utilization, while the compute cost, which includes the OS license, stays the same as the original size.

The following table compares the cost of running the new DS14-4v2 and GS5-8 VM sizes, provisioned from the SQL Server Enterprise image, against their original VM sizes. For the latest prices, see the Azure VM pricing page.

| VM size | vCPUs | Memory | Max disks | Max I/O throughput | SQL Server Enterprise annual license cost | Annual total cost (compute + licensing) |
|---|---|---|---|---|---|---|
| Standard_DS14v2 | 16 | 112 GB | 32 | 51,200 IOPS or 768 MB/s | – | – |
| Standard_DS14-4v2 | 4 | 112 GB | 32 | 51,200 IOPS or 768 MB/s | 75% lower | 57% lower |
| Standard_GS5 | 32 | 448 GB | 64 | 80,000 IOPS or 2 GB/s | – | – |
| Standard_GS5-8 | 8 | 448 GB | 64 | 80,000 IOPS or 2 GB/s | 75% lower | 42% lower |

If you migrate your own SQL Server licenses to the new VM sizes, whether you use the BYOL images Microsoft provides or install SQL Server manually, you only need licenses for the constrained number of vCPUs. For more on BYOL and other ideas for reducing costs, see the SQL Server pricing guidance.

Get started with the new VM sizes today and put them to work reducing your licensing costs!
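To check which of the constrained-vCPU sizes are available in your region, a quick sketch with the AzureRM module (the region name is illustrative):

```powershell
# List the constrained-core GS5/DS14 variants offered in a region
Get-AzureRmVMSize -Location "japaneast" |
    Where-Object { $_.Name -like "Standard_GS5-*" -or $_.Name -like "Standard_DS14-*" } |
    Select-Object Name, NumberOfCores, MemoryInMB
```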

Microsoft Revenue Growth Academy



Tim Tetrick

 


Increase your Customer’s Value

Revenue Growth Academy
Do you want to grow revenue and retain your existing customers? We offer a short webcast series, designed for Microsoft partners, that shares specific strategies and tactics for maximizing value and satisfaction with your customers. We find that successful partners are applying these best practices, and you can too!

  • Three-part webcast series – each 45 minutes in length
  • Presented over the next three weeks (Oct 24, 31, Nov 7) covering Expand Selling, Onboarding, and Continuous Contact. Grounded in tactical counsel and reinforced with Coaching Sessions
  • Recommended attendees: Business Decision Makers, Services Leads

Learn more and register: https://aka.ms/msrga

Join the LinkedIn Microsoft Customer Success conversation

Stay up to date on Customer Success best practices by joining the LinkedIn Group

Sign up: https://aka.ms/LinkedInCustomerSuccessAcademy

SQL Server 2017 on Windows/Linux/Docker Generally Available in October [Updated 10/18]


SQL Server 2017, the latest version of SQL Server, became generally available on October 2. In addition to Windows, it runs on Linux and Docker, and it supports a broad range of deployments, from on-premises to cloud to hybrid.

In the 18 months since announcing our intent to bring SQL Server to Linux, we have focused on making enterprise database capabilities, from Active Directory authentication and encryption to Always On availability groups, equivalent and compatible across Windows and Linux. SQL Server 2017 supports Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu.

This article summarizes Microsoft's industry-leading momentum, an overview of the advantages of SQL Server 2017, and a list of the major new features.

 

Microsoft data platform momentum: leading the industry

The Microsoft data platform (*) has long been used by customers of every industry and business type, from large enterprises to small and midsize businesses, and from packaged ISV applications to systems-integration projects.

In the Japanese RDBMS market, Microsoft holds a 41.1% (**) share of vendor revenue, the No. 1 share for three consecutive years (***), and that share is growing year over year. (Source: ITR Market View: DBMS/BI Market 2017)

* The Microsoft data platform includes SQL Server and Azure SQL Database. ** Figure for FY2016 (forecast). *** Figures for FY2014-2016 (forecast).

Microsoft has also earned Leader or Visionary positions in four data-platform-related areas of Gartner's Magic Quadrant, which rates vendors' standing in the industry. In particular, it has led the Operational Database Management Systems category for four consecutive years.

With the release of SQL Server 2017 and its Linux support, adoption is expected to grow in the Linux server space that was previously not covered.

 

Advantages of SQL Server 2017 (1): The most secure database

Over the past seven years, SQL Server has had the strongest security record of any database. Looking at the number of security vulnerabilities reported by the U.S. National Institute of Standards and Technology (NIST), SQL Server has had by far the fewest reported vulnerabilities among major databases, despite its high market share and wide use, making it the most secure database.

 

Advantages of SQL Server 2017 (2): The highest-performing data warehouse

SQL Server 2017 holds the No. 1 results in the non-clustered category of TPC-H, the standard benchmark for data warehousing workloads, at the 30 TB, 10 TB, and 1 TB scales.

 

Advantages of SQL Server 2017 (3): BI at a fraction of the cost

It transforms raw data into meaningful reports, consumable on any device, at one fifth the cost of other self-service solutions.

 

Advantages of SQL Server 2017 (4): Advanced analytics, all built in

SQL Server 2017 finally integrates Python and R. With in-memory technology, it can perform up to one million predictions per second. You can now bring real-time intelligence to your own systems while taking advantage of SQL Server's industry-leading performance and security on the platform of your choice.

Also, in SQL Server 2017 the modules for big data, advanced analytics, AI, and more are all built into the product, so there is no need to combine and operate other products. This is a major advantage for stable operation in production and for supportability.

 

What's new in SQL Server 2017

SQL Server 2017 includes several new features that arguably make it the best release yet. Here are a few examples:

  • Container support lets you quickly provision and start SQL Server containers and tear them down when finished, making development and DevOps scenarios seamless and easy. SQL Server supports Docker Enterprise Edition, Kubernetes, and OpenShift Container Platform.
  • AI with R and Python analytics lets you run scalable, GPU-accelerated, parallelized R (and, new in this version, Python) analytics inside the database to build intelligent applications.
  • Graph data analysis lets you use graph data storage and query-language extensions, with graph-specific query syntax, to discover new kinds of relationships in highly interconnected data.
  • Adaptive query processing is a new family of SQL Server features that brings intelligence to database performance. For example, SQL Server's adaptive memory grants learn the right amount of memory to allocate for a particular query from the amount it actually used.
  • Automatic plan correction ensures continuous performance by detecting and fixing performance regressions.
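As an illustration of the container support mentioned above, pulling and starting SQL Server 2017 with Docker can be sketched as follows (the image tag and SA password are illustrative, not official setup guidance):

```powershell
# Pull the SQL Server 2017 Linux image and start a container listening on port 1433
docker pull microsoft/mssql-server-linux:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" `
    -p 1433:1433 --name sql2017 -d microsoft/mssql-server-linux:2017-latest
```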

Beyond these headline features, there are many more enhancements:

  • Resumable online index rebuild lets you stop and restart index maintenance. You can therefore rebuild indexes frequently to keep their performance optimal without waiting for a long maintenance window, and pick up right where you left off if the database service is interrupted.
  • LOB compression in columnstore indexes. Previously, data containing LOBs was difficult to include in a columnstore index because of its size. These LOBs can now be compressed, making them easier to work with and broadening the reach of the columnstore feature.
  • Clusterless availability groups let you build Always On availability groups for read scale-out without an underlying cluster.
  • Continued improvements to key performance features such as columnstore, in-memory OLTP, and the query optimizer deliver new record-setting performance.
  • Native scoring in T-SQL lets you score operational data in near real time with advanced analytics, with no need to load a machine learning library to access the model.
  • SQL Server Integration Services (SSIS) scale-out improves package execution performance by distributing execution across multiple computers. These packages run in parallel in scale-out mode.
  • SQL Server Analysis Services gains many enhancements, including:
    • A modern "get data" experience with many new connectors such as Oracle, MySQL, Sybase, and Teradata. New transformations let you mash up the incoming data into tabular models.
    • Object-level security for tables and columns.
    • Support for detail rows and ragged hierarchies, enabling additional drill-down capabilities in tabular models.
  • SQL Server Reporting Services has also been enhanced:
    • A lightweight installer that does not affect SQL Server databases or other SQL Server features.
    • A REST API for programmatic access to reports, KPIs, data sources, and more.
    • Report comments, letting users join discussions about a report.

In addition to the ability to upgrade your existing SQL Server to 2017, renewing Software Assurance brings several further benefits:

  • Machine Learning Server for Hadoop (formerly R Server) brings scalable R- and Python-based analytics to Hadoop and Spark environments. This server is offered to SQL Server Enterprise Edition customers as a Software Assurance benefit.
  • The SQL Server Enterprise Edition Software Assurance benefit also lets you run Power BI Report Server. Power BI Report Server can manage SQL Server Reporting Services (SSRS) reports alongside Power BI reports, so you can deliver self-service BI and enterprise reporting in a single solution. Power BI Report Server is also available through the purchase of Power BI Premium.

SQL Server also has a new servicing model. See this article (in English) for details on the Modern Servicing Model that applies to SQL Server 2017 and later.

 

Related information

 

Gartner Magic Quadrant

Microsoft is positioned as a Leader in the following categories.

Microsoft is positioned as a Visionary in the following categories.

 

 

 

The [Archive Folder Settings] Menu Is Hidden in Click-to-Run Builds of Outlook 2016


Hello, this is the Microsoft Japan Outlook support team.
This post explains a specification change in the Click-to-Run version of Outlook 2016 that hides the [Archive Folder Settings] menu under the [File] tab > [Tools].

Overview
In environments running a Click-to-Run build of Outlook 2016 at version 16.0.8326.XXXX or later, when the archive folder is configured as a default folder on the Exchange server, the [File] tab > [Tools] > [Archive Folder Settings] menu is hidden, and it is no longer possible to designate an arbitrary folder as the archive folder.

This change was made in response to extensive feedback about the problems that can occur when an arbitrary folder can be set as the archive folder (for example, the archive folder being deleted by the user, or being removed by a retention policy).


Details
If you right-click the archive folder in the Outlook 2016 folder list and find that the folder cannot be deleted or renamed, that confirms the archive folder is configured as a default folder.

The information in this post (including attachments and linked content) is current as of the date of writing and is subject to change without notice.

Ignite Video – Call Quality Management Reporting for Skype for Business and Microsoft Teams in O365


Hi Everyone

I have been really enjoying listening to a presentation by William Looney and Troy Funk on CQD and Call Analytics for Skype for Business Online and Microsoft Teams. You can listen to the session here in Youtube https://www.youtube.com/watch?v=335OuqsTiNA, or view the deck here https://view.officeapps.live.com/op/embed.aspx?src=https%3A%2F%2F8gportalvhdsf9v440s15hrt.blob.core.windows.net%2Fignite2017%2Fsession-presentations%2FBRK2010.PPTX

It's a really good session; what I liked most was that William Looney went into generating custom reports in CQD.

William starts by taking you through the vanilla out-of-the-box CQD reports, and there is some good data there. He then looks at problem sites/subnets for call quality. I won't rehash what William and Troy talk about, as I recommend you go and have a listen. I will include a couple of screenshots showing William in action, slicing and dicing through CQD data to get to the bottom of some poor calls.

Happy Skype/Team'ing.

Steve

Creating a custom CQD Report for packet loss rate

Resulting bar chart for above data.

 

Filtering Custom Reports based on Wifi and for a particular location/site

 

Audio Poor for a particular site

Microsoft Professional Program For Data Science on EdX Overview


Right now the world is changing before our eyes. At this moment Microsoft applications, platforms, and Microsoft Azure together with numerous hardware and software vendors are enabling a technological revolution around the world. Data Science (Big Data, Analytics, and Machine Learning) is changing the world as we know it. Microsoft is democratizing artificial intelligence and making previously unheard-of levels of compute power and extraordinarily complex DS/ML software available to anyone interested for pennies per hour.

Data Science and Machine Learning is the abstract, large-scale, computer-based application of statistics, analytics, and mathematics combined to create artificial reasoning and prediction programs that resemble human perception, thinking, and understanding. Much of the foundation of the technology was developed as simple formula and logic concepts by mathematicians and logicians over the last few hundred years. In the present day, these concepts and formulas have been programmed, packaged, and shared by communities and companies for free or pennies per hour as IaaS, PaaS, and SaaS cloud services for anyone and everyone to use.

In addition to all the inexpensive compute power and free software, the education and training for the technology is also free. Recognizing a shortage of qualified individuals to fill the growing need for specific job roles and technical skills, Microsoft created the Microsoft Professional Program, a new way to learn the skills and get the hands-on experience these roles require. After passing all courses in a track and completing a final project, individuals earn a digitally sharable, resume-worthy credential that confirms mastery of these functional and technical skills.

I will be blogging my experience as I progress through the MPP curriculum, with an emphasis on the Python and Machine Learning electives in the Data Science track. I intend to study the concepts and technology to a reasonably technical depth. However, my goal for learning and applying the technology is to become a skilled "assembler" and implementer of the many complex technologies and components involved in ML for AI. I will be taking the classes (including some additional deep study) to learn and understand the core technology, aiming for reasonable mastery and practical application.

There are many, many great sites and services I've already taken advantage of to prepare for the classes and build a strong background in these technologies. I'll be sharing them as I move along. Here's a quick list of the top 10 I've used and found helpful in much of my study (some of these are free, and others have a small cost):

Custom Log Fields in OMS


Overview: Learn how to import custom logs into OMS, and how to create custom log fields from the ingested data, in the videos below.

Since the release of custom fields, demand for collecting custom logs from your environment and sending them to OMS has grown as well. Custom logs have therefore become a generally available feature; see the following video:

 

For more details on custom logs, see: Custom log files in Log Analytics


Error when running the Get-CalendarDiagnosticLog command on Exchange 2013


Hello, this is Koma from Exchange Server support.
This post describes a known issue with the Get-CalendarDiagnosticLog command in Exchange 2013.

The Get-CalendarDiagnosticLog command can be used to retrieve diagnostic logs for calendar items.
When the LogLocation parameter is specified, the command writes the logs out to files.

When Get-CalendarDiagnosticLog is run with the LogLocation parameter and the logs include a calendar item whose subject is empty (blank), the command fails with an error.
Beyond genuinely empty subjects, the implementation also treats the subject as empty when an unexpected error occurs while retrieving it, so in those cases the command fails as well.

This problem is caused by a flaw in the Exchange server implementation.
It has already been fixed in Exchange Online and in Exchange 2016 CU3 and later, so that no error occurs even when the logs include calendar items with empty subjects.
The product group is currently investigating whether the same fix can be applied to Exchange 2013.

Because there is no complete workaround in Exchange 2013 environments, if you need to write the logs to a file, as an interim workaround please output the logs to the console without the LogLocation parameter and save the content to a text file manually.
Alternatively, specify the Subject parameter to retrieve logs only for calendar items with a specific subject.

We will post updates to this blog as soon as there is progress.
We apologize for any inconvenience this may cause.

* The information in this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

Invitation to technology seminars


Not only the world, but also the technologies and online services we use in our companies are evolving at an ever faster pace. Keeping track of the individual news items, versions, tools, and capabilities they bring to IT and end users is becoming a difficult task for some.

We want to share an overview of the most interesting topics with you in a series of online webinars and hands-on seminars. The presenters will be specialists from KPCS CZ, and you can look forward to topics around Office 365, Azure, Windows 10, SCCM, ATA, EM+S, GDPR, ITSM, and many more.

All of the events listed are open to anyone interested, completely free of charge. A web browser is all you need to join a webinar, though the Skype for Business Online client works even better. The capacity of the hands-on seminars is limited by the room, however, so please register for the date you would like to attend. Registrations will be confirmed in batches by e-mail. We look forward to seeing you!

Extreme Sharing / Intelligent Collaboration

The ability to collaborate and easily share files, presentations, and data in general is a necessity for every larger company today. In this webinar we will show how to give users a secure way to share their data outside the organization and move away from simply sending attachments by e-mail.

Microsoft 365: How online services are changing and where they are headed

An overview of the new services available in the Microsoft 365 bundles for small and large companies, together with the changes IT will need to prepare itself and end users for. Ideally in advance.

Microsoft 365: News and the future of online services

Microsoft invests considerable resources in Office 365. These services see tens to hundreds of changes a year, and new services for end users keep being added. We will discuss and demonstrate which ones are worth knowing about and, above all, how IT can keep track of them, whether you are on plans for small companies or large corporations.

Azure Site Recovery in hybrid environments

Back up. Back up. Back up. The foundation of sustainable infrastructure and applications. We will introduce the principles of highly available backups and the Disaster Recovery options in Azure that let you easily switch live traffic between on-premises and online environments.

Advanced Threat Analytics: Behavioral analysis of Active Directory traffic

Identity security is a frequently discussed topic. We will look at how to protect identity in an on-premises Active Directory environment, detect attacks against AD, and cut the time an attacker can spend unnoticed in the network stealing valuable information from months down to minutes.

Site Recovery, not only in hybrid environments

Practical demonstrations of how to deploy backup for specific servers, an entire data center, or applications, and how to replicate them between on-premises data centers or Azure. Backup, monitoring, recovery.

Deploying Windows 10: Data migration and update management

Although it may not seem so, the end of support for Windows 7 and Windows 8.1 is approaching fast. We will therefore walk through scenarios for a simple upgrade to Windows 10, as well as the principles of managing the system in a corporate environment afterwards.

Deploying Shielded VMs in Microsoft Hyper-V 2016

Hyper-V 2016 brought interesting news on the security front as well. Today's topic is how to securely run virtual machines that the provider or administrator of the virtualization layer cannot look into.

Device management with SCCM and Microsoft Intune: From the operating system to application configuration

Managing endpoint devices across their entire life cycle is often a hard task: installing new computers in minutes to a clearly defined standard, controlling their configuration and regular updates, and centrally deploying the necessary applications to Windows and macOS computers as well as Android, iOS, and Windows Mobile devices. That is the job of System Center Configuration Manager and the Microsoft Intune service.

ITSM as a help, not a brake

Deploying ITSM processes, tools, and methodologies is often a burden for a company, slowing down its effectiveness and IT's ability to respond flexibly to new requirements. We will show how to approach ITSM so that it helps rather than harms.

ATOM - A starting point for managing IT

A ship has just one captain's bridge. Likewise, your IT should have a single tool that can integrate all consoles, reports, and overviews, and steer your IT into the right waters.

Effective management and monitoring of IT services (ITSM/ATOM)

We will take a practical look at model examples of ITSM processes and their implementation, focusing on how to monitor the quality of IT services toward both users and technologies.

- Petr Vlk (KPCS CZ, WUG)

Browser security beyond sandboxing


Security is now a strong differentiator in picking the right browser. We all use browsers for day-to-day activities like staying in touch with loved ones, but also for editing sensitive private and corporate documents, and even managing our financial assets. A single compromise through a web browser can have catastrophic results. It doesn’t help that browsers are also on their way to becoming some of the most complex pieces of consumer software in existence, increasing potential attack surface.

Our job in the Microsoft Offensive Security Research (OSR) team is to make computing safer. We do this by identifying ways to exploit software, and working with other teams across the company on solutions to mitigate attacks. This workflow typically involves identifying software vulnerabilities to exploit. However, we believe that there will always be more vulnerabilities to find, so that isn’t our primary focus. Instead, our job is really all about asking: assuming a vulnerability exists, what can we do with it?

We’ve so far had success with this approach. We have helped improve the security of several Microsoft products, including Microsoft Edge. We continue to make strides in preventing both Remote Code Execution (RCE) with mitigations like Control Flow Guard (CFG), export suppression, and Arbitrary Code Guard (ACG), and isolation, notably with Less Privileged AppContainer (LPAC) and Windows Defender Application Guard (WDAG). Still, we believe it’s important for us to validate our security strategy. One way we do this is to look at what other companies are doing and study the results of their efforts.

For this project, we set out to examine Google’s Chrome web browser, whose security strategy shows a strong focus on sandboxing. We wanted to see how Chrome held up against a single RCE vulnerability, and try to answer: is having a strong sandboxing model sufficient to make a browser secure?

Some of our key findings include the following:

  • Our discovery of CVE-2017-5121 indicates that it is possible to find remotely exploitable vulnerabilities in modern browsers
  • Chrome’s relative lack of RCE mitigations means the path from memory corruption bug to exploit can be a short one
  • Several security checks being done within the sandbox result in RCE exploits being able to, among other things, bypass Same Origin Policy (SOP), giving RCE-capable attackers access to victims’ online services (such as email, documents, and banking sessions) and saved credentials
  • Chrome’s process for servicing vulnerabilities can result in the public disclosure of details for security flaws before fixes are pushed to customers

Finding and exploiting a remote vulnerability

To do this evaluation, we first needed to find some kind of entry point vulnerability. Typically, we do this by finding a memory corruption bug, such as buffer overflow or use-after-free vulnerability. As with any web browser, the attack surface is extensive, including the V8 JavaScript interpreter, the Blink DOM engine, and the pdfium PDF renderer, among others. For this project, we focused our attention on V8.

The bug we ended up using for our exploit was discovered through fuzzing. We leveraged the Windows Security Assurance team's Azure-based fuzzing infrastructure to run ExprGen, an internal JavaScript fuzzer written by the team behind Chakra, our own JavaScript engine. People were likely already throwing all publicly available fuzzers at V8; ExprGen, on the other hand, had only ever been run against Chakra, giving it greater chances of leading to the discovery of new bugs.

Identifying the bug

One of the disadvantages of fuzzing, compared to manual code review, is that it's not always immediately clear what causes a given test case to trigger a vulnerability, or if the unexpected behavior even constitutes a vulnerability at all. This is especially true for us at OSR; we have no prior experience working with V8 and therefore know fairly little about its internal workings. In this instance, the test case produced by ExprGen reliably crashed V8, but not always in the same way, and not in a way that could be easily influenced by an attacker.

As fuzzers often generate very large and convoluted pieces of code (in this case, nearly 1,500 lines of unreadable JavaScript), the first step is typically to minimize the test case -- trimming the fat until we're left with a small, understandable piece of code. This is what we ended up with:

The code above looks strange and doesn't really achieve anything, but it is valid JavaScript. All it does is to create an oddly structured object, then set some of its fields. This shouldn't trigger any strange behavior, but it does. When this code is run using D8, V8's standalone executable version, built from git tag 6.1.534.32, we get a crash:

Looking at the address the crash occurs at (0x000002d168004f14), we can tell it does not happen within a static module. It must therefore be in code dynamically generated by V8's Just-In-Time (JIT) compiler. We also see that the crash happens because the rax register is zero.

At first glance, this looks like a classic null dereference bug, which would be a let-down: those are typically not exploitable because modern operating systems prevent the zero virtual address from being mapped. We can look at surrounding code in order to get a better idea of what might be going on:

We can extract a few things from this code. First, we notice that our crash occurs right before a function call to what looks like a JavaScript function dispatcher stub, mostly due to the address of v8::internal::Builtin_FunctionPrototypeToString being loaded into a register right before that call. Looking at the code of the function located at 0x000002d167e84500, we find that address 0x000002d167e8455f does contain a call rbx instruction, which appears to confirm our suspicion.

The fact that it calls Builtin_FunctionPrototypeToString is interesting, because that's the implementation for the Object.toString method, which our minimized test case calls into. This appears to indicate that the crash is happening within the JIT-compiled version of our func0 Javascript function.

The second piece of information we can glean from the disassembly above is that the zero value contained in register rax at the time of the crash is loaded from memory. It also looks like the value that should have been loaded is being passed to the toString function call as a parameter. We can tell that it's being loaded from [rdi + 0x18]. Based on that, we can take a look at that piece of memory:

This doesn't yield very useful information. We can see that most of these values are pointers, but that's about it. However, it's useful to know where the value (which is meant to be a pointer) is loaded from, because it can help us figure out why this value is zero in the first place. Using WinDbg's newly public Time Travel Debugging (TTD) feature, we can place a memory write breakpoint at that location (baw 8 0000025e`a6845dd0), then place an execution breakpoint at the start of the function, and finally re-run the trace backwards (g-).

Interestingly, our memory write breakpoint doesn't trigger, meaning that this memory slot does not get initialized in this function, or at least not before it's used. This might be normal, but if we play around with the test case, for example by replacing the o.b.bc.bca.bcab = 0; line with o.b.bc.bca.bcab = 0xbadc0de;, then we start noticing changes to the memory region where our crash value originates:

We see that our 0xbadc0de constant value ends up in that memory region. Although this doesn't prove anything, it makes it seem likely that this memory region is used by the JIT-compiled function to store local variables. That idea is reinforced by the earlier disassembly, in which the value we crash trying to load appeared to be passed to Object.toString as a parameter.

Combined with the fact that TTD confirmed that this memory slot is not initialized by the function, a possible explanation is that the JIT compiler is failing to emit code that would initialize the pointers representing the object members used to access the o.b.ba.bab field.

To confirm this, we can run the test case in D8 with the --trace-turbo and --trace-turbo-graph parameters. Doing so will cause D8 to output information about how TurboFan, V8's JIT compiler, goes about building and optimizing the relevant code. We can use this in conjunction with turbolizer to visualize the graphs that TurboFan uses to represent and optimize code.

TurboFan works by applying various optimization phases to the graph one after the other. About half-way through the optimization pipeline, after the Load elimination optimization phase, this is what our code's flow looks like:

It's fairly straightforward: the optimizer apparently inlined func0 into the infinite loop, and then pulled the first loop iteration out. This information is useful to see how the blocks relate to each other. However, this representation omits nodes that correspond to loading function call parameters, as well as the initialization of local variables, which is the information we're interested in.

Thankfully, we can use turbolizer's interface to display those. Focusing on the second Object.toString call, we can see where the parameter comes from, as well as where it's allocated and initialized:

(NOTE: node labels were manually edited for readability)

At this stage in the optimization pipeline, the code looks perfectly reasonable:

  • A memory block is allocated to store local object o.b.ba (node 235), and its fields baa and bab are initialized
  • A memory block is allocated to store local object o.b (node 259), and its fields are all initialized, with ba specifically being initialized with a reference to the previous o.b.ba allocation
  • A memory block is allocated to store local object o (node 303), and its fields are all initialized
  • Local object o's field b is overwritten with a reference to object o.b (node 185)
  • Local object field o.b.ba.bab is loaded (nodes 199, 209, and 212)
  • The Object.toString method is called, passing o.b.ba.bab as first argument

Code compiled at this stage in the optimization pipeline looks like it shouldn't exhibit the uninitialized local variable behavior that we're hypothesizing is the root cause of the bug. That being said, certain aspects of this representation do lend credence to our hypothesis. Looking at nodes 209 and 212, which load o.b.ba and o.b.ba.bab, respectively, for use as a function call parameter, we can see that the offsets +24 and +32 correspond to the disassembly of the crashing code:

0x17 and 0x1f are values 23 and 31, respectively. This fits, when taking into account how V8 tags values in order to distinguish actual objects from inlined integers (SMIs): if a value meant to represent a JavaScript variable has its least significant bit set, it is considered a pointer to an object, and otherwise an SMI. Because of this, V8 code is optimized to subtract one from JavaScript object offsets before they are used for dereferencing.
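As a quick illustration of ours (this is not code from the original post), the tagging rule and the resulting displacements described above can be sketched as follows:

```javascript
// Hedged sketch of the V8 value-tagging scheme as described in the text.
// A value with its least significant bit set is treated as a tagged pointer
// to a heap object; otherwise it is interpreted as a small integer (SMI).
const isHeapObject = (value) => (value & 1n) === 1n;

// Because heap pointers carry the tag bit, compiled code folds the
// "subtract one" into the memory displacement: field offset 24 shows up
// as 0x17 in the disassembly, and offset 32 shows up as 0x1f.
const taggedDisplacement = (fieldOffset) => fieldOffset - 1;

console.log(taggedDisplacement(24).toString(16)); // 17
console.log(taggedDisplacement(32).toString(16)); // 1f
console.log(isHeapObject(0x1000n | 1n)); // true: tagged heap pointer
console.log(isHeapObject(42n << 1n));    // false: SMI
```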

As we still don't have an explanation for the bug, we keep looking through optimization passes until we find something strange. This happens after the Escape analysis pass. At that point, the graph looks like the following:

There are two notable differences:

  • The code no longer goes through the trouble of loading o and then o.b—it was optimized to reference o.b directly, probably because that field's value is never changed
  • The code no longer initializes o.b.ba; as can be seen in the graph, turbolizer grays out node 264, which means it is no longer live, and therefore won't be built into the final code

Looking through all the live nodes at this stage seems to confirm that this field is no longer being initialized. As another sanity check, we run d8 on this test case with the flag --no-turbo-escape in order to omit this optimization phase: d8 no longer crashes, confirming that this is where the issue stems from. In fact, that turned out to be Google's fix for the bug: completely disable the escape analysis phase in v8 6.1 until the new escape analysis module was ready for production in v8 6.2.

With all this information about the bug's root cause in hand, we need to find ways to exploit it. It looks like it could be a very powerful bug, but it depends entirely on our ability to control the uninitialized memory slot, as well as how it ends up being used.

Getting an info leak

At this point, the easiest way to get an idea of what we can or can't do with the bug is simply to play around with the test case. For example, we can look at the effect of changing the type of the field we're loading from an uninitialized pointer:

The result is that the field is now loaded directly as a float, rather than an object pointer or SMI:

Similarly, we can try adding more fields to the object:

Running this, we get the following crash:

This is interesting, because it looks like adding fields to the object shifts the offsets from which the object fields are loaded. In fact, if we do the math, we see that (0x67 - 0x1f) / 8 = 9, which is exactly the number of fields we added after o.b.ba.bab. The same applies to the new offset that rbx is loaded from.
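The arithmetic can be checked directly; this small snippet of ours assumes, as the disassembly suggests, that each object field occupies 8 bytes:

```javascript
// Each JavaScript object field occupies 8 bytes in this build of V8
// (an assumption taken from the observed displacements). Adding 9 fields
// moved the crashing load's displacement from 0x1f to 0x67:
const fieldSize = 8;
const addedFields = (0x67 - 0x1f) / fieldSize;
console.log(addedFields); // 9
```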

Playing around with the test case a bit more, we are able to confirm that we have extensive control over the offset where the uninitialized pointer is loaded from, even though none of these fields are being initialized. At this point, it would be useful to see if we can place arbitrary data into this memory region. The earlier test with 0xbadc0de seemed to indicate that we could, but the offset appeared to change with each run of the test case. Often, exploits get around that by spraying values. The rationale is that if we can't accurately trap our target to a given location, we can instead just make our target bigger. In practice, we can try spraying values by using inline arrays:

Looking at the crash dump, we see:

The crash is essentially the same as previously, but if we look at the memory where our uninitialized data is coming from, we see:

We now have a big block of arbitrary script-controlled values at an offset from r11. Combining this observation with the previous one about offsets, we can come up with something even better:

The result is that we are now dereferencing a float value from an arbitrary address:

This, of course, is extremely powerful: it immediately results in an arbitrary read primitive, impractical as it may be. Unfortunately, an arbitrary read primitive without an initial info leak is not that useful: we need to know which addresses to read from in order to make use of it.

As the variable v can be anything we want it to be, we can replace it with an object, and then read its internal fields. For example, we can replace the call to Object.toString() with a custom callback, and replace v with a DataView object, reading back the address of that object's backing store. This produces a way for us to locate fully script-controlled data:

The above code returns (modulo ASLR):

Using WinDbg we can validate that this is indeed the backing store for our buffer:

Once again, this is an incredibly powerful primitive, and we could use it to leak just about any field from an object, as well as the address of any JavaScript object, as those are sometimes stored as fields in other objects.
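Leaked pointers read through a float-typed field come back as IEEE-754 doubles, so exploits of this kind typically carry a pair of bit-reinterpretation helpers. The following utility is our illustration of that common pattern, not code taken from the post:

```javascript
// Helper pair commonly used in browser-exploit code to reinterpret the raw
// bits of a double as a 64-bit integer and back. A shared 8-byte buffer is
// viewed both as a Float64Array and as a BigUint64Array.
const bits = new ArrayBuffer(8);
const f64 = new Float64Array(bits);
const u64 = new BigUint64Array(bits);

function ftoi(f) {        // double -> raw 64-bit integer bits
  f64[0] = f;
  return u64[0];
}

function itof(i) {        // raw 64-bit integer bits -> double
  u64[0] = i;
  return f64[0];
}

// A leaked pointer round-trips losslessly through the float representation:
console.log(ftoi(itof(0x41414141n)) === 0x41414141n); // true
```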

Building an arbitrary read/write primitive

Being able to place arbitrary data at a known address means we can unlock an even more powerful primitive: the ability to create arbitrary JavaScript objects. Just changing the type of the field being read from a float to an object makes it possible for us to read an object pointer from anywhere in memory, including a buffer whose address is known to us. We can test this by using WinDbg to place controlled data at a known address (the same primitive we just developed above) using the following commands:

This places an SMI representing the integer 0xbadc0de at the location where our arbitrary object pointer would be loaded from. Since we didn't set the least significant bit, it will be interpreted by V8 as an inline integer:

As expected, V8 prints the following output:

Given this, we have the ability to create arbitrary objects. From there, we can put together a convenient arbitrary read/write primitive by creating fake DataView and ArrayBuffer objects. We again place our fake object data at a known location using WinDbg:

We then test it with the following JavaScript:

As expected, the call to DataView.prototype.setUint32 triggers a crash, attempting to write value 0xdeadcafe to address 0x00000badbeefc0de:

Controlling the address where the data will be written to or read from is just a matter of modifying the obj.arraybuffer.backing_store slot populated through WinDbg. Since in the case of a real exploit that memory would be part of the backing store of a real ArrayBuffer object, doing so wouldn’t be difficult. For example, a write primitive might look like this:

With this, we can reliably read and write arbitrary memory locations in the Chrome renderer process from JavaScript.
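To make the shape of such a primitive concrete, here is a hedged simulation of ours: an ordinary DataView over a normal buffer stands in for the crafted fake object, so offsets stand in for absolute addresses. In the real exploit, each access would first rewrite the fake ArrayBuffer's backing_store field to the desired address.

```javascript
// Simulation of the read/write primitive's shape. `fakeView` here is an
// ordinary DataView; in the actual exploit it is the fake DataView whose
// ArrayBuffer backing-store pointer is attacker-controlled.
const memory = new ArrayBuffer(0x100);
const fakeView = new DataView(memory);

function write64(addr, value) {
  // Real exploit: point the fake backing store at `addr` first.
  // Simulation: `addr` is just an offset into our buffer.
  fakeView.setBigUint64(Number(addr), value, true); // little-endian
}

function read64(addr) {
  return fakeView.getBigUint64(Number(addr), true);
}

write64(0x10n, 0xdeadcafen);
console.log(read64(0x10n).toString(16)); // deadcafe
```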

Achieving Arbitrary Code Execution

Achieving code execution in the renderer process, given an arbitrary read/write primitive, is relatively easy. At the time of writing, V8 allocates its JIT code pages with read-write-execute (RWX) permissions, meaning that getting code execution can be done by locating a JIT code page, overwriting it, and then calling into it. In practice, this is achieved by using our info leak to locate the address of a JavaScript function object and reading its function entrypoint field. Once we've placed our code at that entrypoint, we can call the JavaScript function for the code to execute. In JavaScript, this might look like:

It is worth noting that even if V8 did not make use of RWX pages, it would still be easy to trigger the execution of a Return Oriented Programming (ROP) chain due to the lack of control flow integrity checks. In that scenario, we could, for example, overwrite a JavaScript function object's entrypoint field to point to the desired gadget (likely a stack pivot of some kind) and then make the function call.

Neither of those techniques would be directly applicable to Microsoft Edge, which features both CFG and ACG. ACG, which was introduced in Windows 10 Creators Update, enforces strict Data Execution Prevention (DEP) and moves the JIT compiler to an external process. This creates a strong guarantee that attackers cannot overwrite executable code without first somehow compromising the JIT process, which would require the discovery and exploitation of additional vulnerabilities.

CFG, on the other hand, guarantees that indirect call sites can only jump to a certain set of functions, meaning they can’t be used to directly start ROP execution. Creators Update also introduced CFG export suppression, which significantly reduced the set of valid CFG indirect call targets by removing most exported functions from the valid target set. All these mitigations and others make the exploitation of RCE vulnerabilities in Microsoft Edge that much more complex.

The dangers of RCE

Being a modern web browser, Chrome adopts a multi-process model. There are several process types involved: the browser process, the GPU process, and renderer processes. As its name indicates, the GPU process brokers interactions between the GPU and all the processes that need to use it, while the browser process is the global manager that brokers access to everything from the file system to networking.

Each renderer is meant to be the brains behind one or more tabs—it takes care of parsing and interpreting HTML, JavaScript, and the like. The sandboxing model ensures that these processes have access to as little as they need to function. As such, a full persistent compromise of the victim's system is not possible from the renderer without a secondary bug to escape the sandbox.

With that in mind, we thought it would be interesting to examine what might be possible for an attacker to achieve without a secondary bug. Although most tabs are isolated within individual processes, that's not always the case. For example, if you’re on bing.com and use the JavaScript developer console (which can be opened by pressing F12) to run window.open('https://microsoft.com'), a new tab will open, but it will typically fall into the same process as the original tab. This can be seen by using Chrome's internal task manager, which can be opened by pressing Shift + Escape:

This is an interesting observation, because it indicates that renderer processes are not locked down to any single origin. This means that achieving arbitrary code execution within a renderer process can give attackers the ability to access other origins. While attackers gaining the ability to bypass the Same Origin Policy (SOP) in such a way may not seem like a big deal, the ramifications can be significant:

  • Attackers can steal saved passwords from any website by hijacking the PasswordAutofillAgent interface.
  • Attackers can inject arbitrary JavaScript into any page (a capability known as universal cross-site scripting, or UXSS), for example, by hijacking the blink::ClassicScript::RunScript method.
  • Attackers can navigate to any website in the background without the user noticing, for example, by creating stealthy pop-unders. This is possible because many user-interaction checks happen in the renderer process, with no ability for the browser process to validate. The result is that something like ChromeContentRendererClient::AllowPopup can be hijacked such that no user interaction is required, and attackers can then hide the new windows. They can also keep opening new pop-unders whenever one is closed, for example, by hooking into the onbeforeunload window event.

A better implementation of this kind of attack would be to look into how the renderer and browser processes communicate with each other and to directly simulate the relevant messages, but this shows that this kind of attack can be implemented with limited effort. While the democratization of two-factor authentication mitigates the dangers of password theft, the ability to stealthily navigate anywhere as that user is much more troubling, because it can allow an attacker to spoof the user’s identity on websites they’re already logged into.

Conclusion

This kind of attack drives our commitment to keep on making our products secure on all fronts. With Microsoft Edge, we continue to both improve the isolation technology and to make arbitrary code execution difficult to achieve in the first place. For their part, Google is working on a site isolation feature which, once complete, should make Chrome more resilient to this kind of RCE attack by guaranteeing that any given renderer process can only ever interact with a single origin. A highly experimental version of this site isolation feature can be enabled by users through the chrome://flags interface.

We responsibly disclosed the vulnerability that we discovered along with a reliable RCE exploit to Google on September 14, 2017. The vulnerability was assigned CVE-2017-5121, and the report was awarded a $7,500 bug bounty by Google. Along with other bugs our team reported but didn’t exploit, the total bounty amount we were awarded was $15,837. We are currently working with Google to have this amount donated to charity. The bug tracker item for the vulnerability described in this article is still private at time of writing.

Servicing security fixes is an important part of the process and, to Google’s credit, their turnaround was impressive: the bug fix was committed just four days after the initial report, and the fixed build was released three days after that. However, it’s important to note that the source code for the fix was made available publicly on Github before being pushed to customers. Although the fix for this issue does not immediately give away the underlying vulnerability, other cases can be less subtle. Case in point, this security bug tracker item was also kept private at the time, but the public fix made the vulnerability obvious, especially as it came with a regression test. This can be expected of an open source project, but it is problematic when the vulnerabilities are made known to attackers ahead of the patches being made available. In this specific case, the stable channel of Chrome remained vulnerable for nearly a month after that commit was pushed to git. That is more than enough time for an attacker to exploit it. Some Microsoft Edge components, such as Chakra, are also open source. Because we believe that it’s important to ship fixes to customers before making them public knowledge, we only update the Chakra git repository after the patch has shipped.

Our strategies may differ, but we believe in collaborating across the security industry in order to help protect customers. This includes disclosing vulnerabilities to vendors through Coordinated Vulnerability Disclosure (CVD), and partnering throughout the process of delivering security fixes.


Jordan Rabet

Microsoft Offensive Security Research team


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community.

Follow us on Twitter @MMPC and Facebook Microsoft Malware Protection Center


Data and AI Announcements at Microsoft Envision and Microsoft Ignite

Written by Jon Woodward, Business Lead for Data and AI, Microsoft UK

Our two flagship events, Microsoft Envision and Microsoft Ignite, recently took place in the US. There was a tremendous sense of energy and excitement, with 75+ keynotes, sessions and breakouts at Microsoft Envision, focused on business leaders and industry-specific transformation scenarios, many of which are made possible by Cloud & Enterprise offerings. At Microsoft Ignite, 650+ Cloud & Enterprise-led sessions and experiences were part of a highly-curated experience designed to give IT and data pros, developers, and IT decision makers the information and motivation they need to charge ahead.

Contributing to the excitement, Satya Nadella shared progress on Microsoft's commitment to empowering the quantum computing revolution with topological quantum technology being developed by a global team of renowned scientists. Learn more by visiting microsoft.com/quantum.

Data + AI Announcements

SQL Server 2017 on Linux, Windows, and Docker, general availability. Starting October 2nd, customers will be able to bring the industry-leading performance and security of SQL Server to Linux and Docker containers. SQL Server 2017 delivers mission-critical OLTP database capabilities and enterprise data warehousing with in-memory technology across workloads. Customers will gain transformative insights from in-database machine learning with Python and R, plus rich interactive reporting on any device for faster decision making. Developers can choose their language and platform, and container support seamlessly facilitates DevOps scenarios; all of this is built into a single product at 1/10th the cost of Oracle. Learn more.
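The in-database machine learning mentioned above is exposed through the sp_execute_external_script stored procedure, which runs a Python (or R) snippet next to the data. As a minimal sketch, the Python below only builds such a T-SQL statement as a string so the shape of the call is visible; the table name dbo.SalesData is a hypothetical example, and you would execute the resulting statement against a SQL Server 2017 instance with Machine Learning Services enabled.

```python
def build_in_db_script(py_script: str, input_query: str) -> str:
    """Wrap a Python snippet in a sp_execute_external_script call.

    Inside the script, SQL Server exposes the input query's result set as the
    pandas DataFrame `InputDataSet` and returns whatever is assigned to
    `OutputDataSet`.
    """
    return (
        "EXEC sp_execute_external_script\n"
        "    @language = N'Python',\n"
        f"    @script = N'{py_script}',\n"
        f"    @input_data_1 = N'{input_query}';"
    )

# Build a statement that computes summary statistics in-database.
tsql = build_in_db_script(
    py_script="OutputDataSet = InputDataSet.describe()",
    input_query="SELECT Amount FROM dbo.SalesData",
)
print(tsql)
```

In a real deployment you would send this statement over any SQL Server client connection; quoting of embedded single quotes is omitted here for brevity.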

Machine Learning Server benefit for Hadoop, general availability. Effective October 1st, Microsoft Machine Learning for Hadoop/Spark becomes a Software Assurance benefit for SQL Server Enterprise edition customers. It provides the rights to run Microsoft Machine Learning Server for Hadoop on up to 10 servers for every 2 cores of SQL Server Enterprise Edition under active SA as of October 1st, 2017. Learn more on the Software Assurance page.

Power BI Report Server SA benefit, in market since June. This enables a truly hybrid reporting and dashboard experience by allowing customers to manage SQL Server Reporting Services (SSRS) reports alongside Power BI reports. Learn more on the Software Assurance page.

Azure Database Migration Service and Azure SQL Database Managed Instance pre-announcement, public preview. The new Managed Instance offering within SQL Database offers near-complete SQL Server compatibility and network isolation for the easiest lift and shift to Azure. DMS offers a fully managed, first-party Azure service that enables customers to easily migrate their on-premises SQL Server databases to Azure SQL Database Managed Instance and SQL Server in Azure Virtual Machines with minimal to no downtime. Customers can maximize existing license investments with discounted rates on Managed Instance using a new Azure Hybrid Benefit for SQL Server. Sign up for news on availability.

Azure Machine Learning, new capabilities in public preview. Updates connect every element of the data science process with enhanced productivity and collaboration for AI developers and data scientists at any scale, enabling them to start building right away with their choice of tools and frameworks. The updated platform includes AI-enhanced data cleansing and prepping tools to start the modelling process sooner. The new features will help data scientists develop, deploy, and manage machine learning and AI models at any scale, wherever data lives: in the cloud, on-premises, and at the edge.

SQL Data Warehouse, announcement. This fall, SQL Data Warehouse will preview an "optimised for compute" performance tier that significantly improves performance of analytics in the cloud with workloads running up to 2x faster. In addition, this new tier scales further than ever before - up to 30,000 compute Data Warehouse Units. Get started with provisioning your SQL Data Warehouse today in the Azure Portal or request an Extended Free Trial.

Microsoft Cognitive Services updates. Includes general availability of the Text Analytics API, a cloud-based service for language processing such as sentiment analysis, key phrase extraction and language detection. In October, we will also make generally available Bing Custom Search, to create a customized search experience for a section of the web, and Bing Search APIs v7, for searching the entire web for more relevant results using Bing Web, News, Video & Image search. Read the announcement blog post.
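The Text Analytics API is a plain REST service: you POST a JSON batch of documents to the sentiment endpoint with your subscription key in an Ocp-Apim-Subscription-Key header. The sketch below only constructs that request payload, following the v2.0 document-batch format; the region in the endpoint URL and the key are assumptions you would replace with your own subscription details.

```python
import json

# Hypothetical endpoint; substitute the region of your Cognitive Services resource.
ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"

def build_sentiment_request(texts, language="en"):
    """Build the JSON document batch the sentiment endpoint expects.

    Each document needs a unique string id, a language code, and the text.
    """
    return {
        "documents": [
            {"id": str(i), "language": language, "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }

payload = build_sentiment_request(
    ["I love this product.", "The service was slow."]
)
print(json.dumps(payload, indent=2))
# To call the live service, POST this body to ENDPOINT with the header
# "Ocp-Apim-Subscription-Key": "<your key>" (e.g. via urllib.request).
```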

Intelligent search experiences across Microsoft 365. Using AI and signals from the Microsoft Graph, intelligent search experiences surface insights and deliver more relevant search results. New experiences include Bing for business private preview for enterprises, schools, or organizations. It combines public web search with multiple customer-specific content sources - documents, people, bookmarks, and more - to create a unified and efficient way to search for information and documents. Learn more.

For more information around Data & AI announcements or Microsoft Envision and Microsoft Ignite, reach out to your Microsoft point of contact, or to our Partner Concierge at www.microsoft.com/uk/partner/concierge/

Security baseline for Windows 10 “Fall Creators Update” (v1709) – FINAL


Microsoft is pleased to announce the final release of the recommended security configuration baseline settings for Windows 10 “Fall Creators Update,” also known as version 1709, “Redstone 3,” or RS3. There are no changes from the draft release we published a few weeks ago.

The 1709 baseline package has been added to the Microsoft Security Compliance Toolkit. On that page, click the Download button, then select "Windows 10 Version 1709 Security Baseline.zip" and any other content you want to download.

The 1709 baseline package includes GPOs that can be imported in Active Directory, scripts for applying the GPOs to local policy, custom ADMX files for Group Policy settings, and all the recommended settings in spreadsheet form. The spreadsheet also includes the corresponding settings for configuring through Windows’ Mobile Device Management (MDM).

We're also happy to announce the revamping of the Windows Security Baselines landing page.

The differences between the 1709 baseline and that for Windows 10 v1703 (a.k.a., “Creators Update,” “Redstone 2”, RS2) are:

  • Implementing Attack Surface Reduction rules within Windows Defender Exploit Guard. Exploit Guard is a new feature of v1709 that helps prevent a variety of actions often used by malware. You can read more about Exploit Guard here: Reduce attack surfaces with Windows Defender Exploit Guard. Note that we are enabling “block” mode for all of these settings. We are taking a particularly careful look at “Block Office applications from injecting into other processes;” if it creates compatibility problems then we might change the baseline recommendation to “audit” mode for that setting. Please let us know what you observe with this baseline.
  • Enabling Exploit Guard’s Network Protection feature to prevent any application from accessing web sites identified as dangerous, including those hosting phishing scams and malware. This extends the type of protection offered by SmartScreen to all programs, including third-party browsers.
  • Enabling a new setting that prevents users from making changes to the Exploit protection settings area in the Windows Defender Security Center.

We also recommend enabling Windows Defender Application Guard. Our testing has proven it to be a powerful defense. We would have included it in this baseline, but its configuration settings are organization-specific.

The old Enhanced Mitigation Experience Toolkit (EMET) add-on is not supported on Windows 10 v1709. Instead, we offer Windows Defender Exploit Guard’s Exploit Protection, which is now a built-in, fully-configurable feature of Windows 10. Exploit Protection brings the granular control you remember from EMET into a new, modern feature. Our download package includes a pre-configured, customizable XML file to help you add exploit mitigations to many common applications. You can use it as-is, or customize it for your own needs. Note that you configure the corresponding Group Policy setting by specifying the full local or server file path to the XML file. Because our baseline cannot specify a path that works for everyone, it is not included in the baseline package’s GPOs – you must add it yourself.
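Before pointing Group Policy at the Exploit Protection XML file, it can help to inspect which executables and mitigations it covers. The sketch below parses a tiny inline sample in the shape of such a settings file; the element and attribute names follow the format exported by PowerShell’s Get-ProcessMitigation, but treat them as assumptions and check them against a real export from your environment.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal Exploit Protection settings file, for illustration only.
SAMPLE = """<MitigationPolicy>
  <AppConfig Executable="EXCEL.EXE">
    <DEP Enable="true"/>
    <ASLR ForceRelocateImages="true"/>
  </AppConfig>
</MitigationPolicy>"""

root = ET.fromstring(SAMPLE)

# Map each configured executable to the list of mitigation elements applied to it.
mitigations = {
    app.get("Executable"): [child.tag for child in app]
    for app in root.findall("AppConfig")
}
print(mitigations)
```

For a real file you would replace ET.fromstring(SAMPLE) with ET.parse(path).getroot(), using the same path you later supply to the Group Policy setting.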

Thank you to the Center for Internet Security (CIS) and to everyone else who gave us feedback.
