
Top Contributors Awards! February 2018, Week 2!!


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot...

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 H Shakir with 78 revisions.

 

#2 Dave Rendón with 61 revisions.

 

#3 RajeeshMenoth with 58 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 Peter Geelen with 56 revisions.

 

#5 Ken Cenerelli with 54 revisions.

 

#6 Somdip Dey - MSP Alumnus with 48 revisions.

 

#7 Subhro Majumder with 30 revisions.

 

#8 Av111 with 26 revisions.

 

#9 Santhosh Sivarajan- with 23 revisions.

 

#10 .paul. _ with 21 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Somdip Dey - MSP Alumnus with 29 articles.

 

#2 Ken Cenerelli with 27 articles.

 

#3 RajeeshMenoth with 25 articles.

 

Just behind the winners but also worth a mention are:

 

#4 Santhosh Sivarajan- with 20 articles.

 

#5 Dave Rendón with 20 articles.

 

#6 Richard Mueller with 15 articles.

 

#7 Peter Geelen with 15 articles.

 

#8 Av111 with 5 articles.

 

#9 .paul. _ with 3 articles.

 

#10 H Shakir with 3 articles.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was Image cropping using Jcrop with ASP.NET MVC and EF 6, by Mahedee.

This week's revisers were RajeeshMenoth & Dave Rendón

[Guru Says]: Do you know how to crop an image using Jcrop with ASP.NET MVC? Check out this useful article created by Mahedee 🙂

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week's largest document to get some attention is SharePoint 2010 : Custom BCS connector for Search with Security Trimming, Batching and Incremental Crawling, by Nitin K. Gupta

This week's revisers were Dave Rendón & vaibhav sharma25

[Guru Says]: This article highlights a solution for enabling security trimming on search results from an external database in SharePoint 2010. Nice article 🙂

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week's most fiddled with article is Active Directory Replication Metadata, by Subhro Majumder. It was revised 26 times last week.

This week's revisers were Dave Rendón, Peter Geelen, Subhro Majumder & H Shakir

[Guru Says]: Do you know what Active Directory Replication Metadata is? Replication metadata captures the full change log of an object, from its creation until its deletion. Read more in this useful article 🙂

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is TechNet Guru Competitions - February 2018, by Peter Geelen

[Guru Says]: Gurus, where are you? The February competition is live now, and we have a total of 12 nice articles across all categories. Go Go Gurus!! 🙂

This week's revisers were Ramakrishnan Raman, .paul. _, Subhro Majumder, pituach, AnkitSharma007, H Shakir, SYEDSHANU - MVP, Av111, Siva Padala & RajeeshMenoth

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit to an article made shortly after another person's edit.

 

Ninja Award Winner Summary
Let's celebrate our winners!

 

Below are a few statistics on this week's award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

H Shakir

H Shakir has won 6 previous Top Contributor Awards. Most recent five shown below:

H Shakir has not yet had any interviews, featured articles or TechNet Guru medals (see below)

H Shakir's profile page

Most Articles Award Winner
The reviser is the winner of this category.

Somdip Dey - MSP Alumnus

Somdip Dey - MSP Alumnus has won 5 previous Top Contributor Awards:

Somdip Dey - MSP Alumnus has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Somdip Dey - MSP Alumnus's profile page

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

Mahedee

This is the first Top Contributors award for Mahedee on TechNet Wiki! Congratulations Mahedee!

Mahedee has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Mahedee's profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

Nitin K. Gupta

Nitin K. Gupta has won 3 previous Top Contributor Awards:

Nitin K. Gupta has TechNet Guru medals, for the following articles:

Nitin K. Gupta has not yet had any interviews or featured articles (see below)

Nitin K. Gupta's profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most.

Subhro Majumder

Subhro Majumder has won 5 previous Top Contributor Awards:

Subhro Majumder has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Subhro Majumder's profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

Peter Geelen

Peter Geelen has been interviewed on TechNet Wiki!

Peter Geelen has featured articles on TechNet Wiki!

Peter Geelen has won 198 previous Top Contributor Awards. Most recent five shown below:

Peter Geelen has TechNet Guru medals, for the following articles:

Peter Geelen's profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

H Shakir

H Shakir is mentioned above.

 

Subhro Majumder

Subhro Majumder is mentioned above.

 

mb0339 - Marco

mb0339 has won 4 previous Top Contributor Awards:

mb0339 has not yet had any interviews, featured articles or TechNet Guru medals (see below)

mb0339's profile page

 

Another great week from all in our community! Thank you all for so much great literature for us to read this week!
Please keep reading and contributing!

PS: The top banner above came from Rajeesh Menoth.

Best regards,
— Ninja [Kamlesh Kumar]

 


Logging on to Azure for your everyday job


Sometimes life is about the little things, and one little thing that has been bothering me is logging on to Azure RM in PowerShell using Add-AzureRMAccount. Every time you start PowerShell, you need to log on again, and that gets old quickly, especially with accounts that have mandatory 2FA.

It gets even more complicated if you have multiple accounts to manage, for instance, one for testing and another for production. To top it off, you can start over when it turns out that your context has expired, which you will only discover after you have actually executed some AzureRM commands.

The standard trick to make this easier is to save your Azure RM context with account and subscription information to a file (Save-AzureRMContext), and to import this file whenever you need (Import-AzureRMContext). But we can do a little bit better than that.
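For reference, the basic version of that trick looks roughly like this (the file path is just an example I picked, not part of the original post):

# Log on once and save the context to a file.
Add-AzureRmAccount
Save-AzureRmContext -Path "$env:LOCALAPPDATA\azure-context.json" -Force

# In a later session, import the saved context instead of logging on again.
Import-AzureRmContext -Path "$env:LOCALAPPDATA\azure-context.json"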

Here is my solution to the problem.

  • Use a PowerShell profile to define a function doing the work. A profile gets loaded whenever you start PowerShell. There are multiple profiles, but the one I want is CurrentUser - AllHosts.
  • The function will load the AzureRM context from a file. If there is no such file, it should prompt me to log on.
  • After logging on, the context should be tested for validity because the token may have expired.
  • If the token is expired, prompt for logon again.
  • If needed, save the new context to a file.

So here is my take on it. Note the specific naming convention that I use for functions defined in a profile.

function profile_logon_azure ([string] $parentfolder, [string]$accountname)
{
    $validlogon = $false
    $contextfile = Join-Path $parentfolder "$accountname.json"
    if (-not (Test-Path $contextfile))
    {
        Write-Host "No existing Azure Context file in '$parentfolder', please log on now for account '$accountname'." -ForegroundColor Yellow
    } else {
        $context = Import-AzureRmContext $contextfile -ErrorAction stop
        #
        # check for token expiration by executing an Azure RM command that should always succeed.
        #
        Write-Host "Imported AzureRM context for account '$accountname', now checking for validity of the token." -ForegroundColor Yellow
        $validlogon = (Get-AzureRmSubscription -SubscriptionName $context.Context.Subscription.Name -ErrorAction SilentlyContinue) -ne $null
        if ($validlogon) {
            Write-Host "Imported AzureRM context '$contextfile', current subscription is: $($context.Context.Subscription.Name)" -ForegroundColor Yellow
        } else {
            Write-Host "Logon for account '$accountname' has expired, please log on again." -ForegroundColor Yellow
        }
    }
    if (-not $validlogon)
    {
        $account = $null
        $account = Add-AzureRmAccount
        if ($account)
        {
            Save-AzureRmContext -Path $contextfile -Force
            Write-Host "logged on successfully, context saved to $contextfile." -ForegroundColor Yellow
        } else {
            Write-Host "log on to AzureRM for account '$accountname' failed, please retry." -ForegroundColor Yellow
        }
    }
}

To make this work, add this function to the PowerShell profile: from the PowerShell ISE, type ise $profile.CurrentUserAllHosts to edit the profile and copy/paste the function definition.
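If you have never used that profile before, the file may not exist yet. A small convenience step (my addition, not from the original post) to create it first:

# Create the CurrentUser/AllHosts profile file if it does not exist yet,
# then open it in the ISE to paste the function definition.
if (-not (Test-Path $profile.CurrentUserAllHosts)) {
    New-Item -ItemType File -Path $profile.CurrentUserAllHosts -Force | Out-Null
}
ise $profile.CurrentUserAllHosts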

Suppose I have two Azure RM accounts that I want to use here, called 'foo' and 'bar'. For that I would add the following function definitions to the profile:

#
# specific azure logons. Context file is deliberately in a non-synced folder for security reasons.
#
function azure-foo { profile_logon_azure -parentfolder $env:LOCALAPPDATA -accountname "foo" }
function azure-bar { profile_logon_azure -parentfolder $env:LOCALAPPDATA -accountname "bar" }

To log on to 'foo', I'd simply execute azure-foo. If this is a first logon, I get the usual AzureRM logon dialog and the resulting context gets saved. The next time, the existing file is loaded and the context tested for validity. From that point on you can switch between accounts whenever you need.

Try it, it just might make your life a little bit easier.

Interview with a BizTalk Server Wiki Ninja – Mandar Dharmadhikari


Welcome to another Interview with a Wiki Ninja! Today's interview is with...

It's my pleasure to introduce you to someone who is no newcomer to the community (so hopefully you already know him). He has written 22 Wiki articles, made 181 article edits and left 147 Wiki comments, and he's not done yet: he is a moderator of the BizTalk forum too.

Mandar Dharmadhikari

Let's look at some of Mandar Dharmadhikari's statistics:

  • 22 Wiki Articles!
  • 181 Wiki article edits!
  • 147 Wiki article comments!
  • 287 MSDN forum answers!
  • 249 helpful posts!
  • 1703 replies!
  • MSDN BizTalk Server Moderator!

Now let's dig into the interview!

===============================

Who are you, where are you, and what do you do? What are your specialty technologies?

Hi, I am Mandar Dharmadhikari. I live in the beautiful city of Pune, India, and my family consists of my loving parents and my younger brother, who stay in my hometown, Wardha. I work for an IT firm in the capacity of Delivery Module Lead. My specialty technologies are Microsoft BizTalk Server, Windows PowerShell and Azure Logic Apps.

On a non-technical note, I am a music aficionado and I like to listen to various genres of music, from Indian classical to punk and electro. My all-time favorite artists are ABBA, the Beatles and Metallica, to name a few. I am exploring the works of maestros like Beethoven, Mozart and Schubert these days. I also like reading books in the fiction and fantasy genres, and I am a crazy fan of the Sherlock Holmes, A Song of Ice and Fire and The Lord of the Rings series.

Pic: At Port Adelaide, Adelaide SA

What are your big projects right now?

I am currently working for a very big banking client which uses Microsoft BizTalk Server as its preferred integration platform. The integrations that we do are complex ones and involve multiple systems like Microsoft CRM, SharePoint, SQL and Oracle databases, as well as mainframe systems.

I am working on learning more and more about Microsoft Azure offerings, especially the ones around integration (Logic Apps, Azure Functions etc.), and I am fascinated by Cognitive Services and PowerShell Core. I am working on a few articles which are centered on BizTalk, Logic Apps and Cognitive Services features. Apart from that, I am working on a whitepaper on continuous integration and deployment of BizTalk applications.

What is TechNet Wiki for? Who is it for?

I believe that TechNet Wiki is for individuals who want to share their knowledge about the awesome Microsoft technologies as well as for those who want to learn them. TechNet Wiki serves as an open treasure box of knowledge where anyone who comes will always leave richer in knowledge, be they an author or a reader. It is a win-win situation for all.

Pic: At Kolhapur Temple, Maharashtra (India) With Family

What do you do with TechNet Wiki, and how does that fit into the rest of your job?

Not a day goes by when I don't read at least one article on TechNet Wiki. TechNet Wiki has come to my help many times when I needed to fix issues that I had not encountered before, and it always helps me learn something new every day 🙂. I like to contribute articles which are development and DevOps centered. If I learn something new and it is not present on TechNet Wiki, I try to write an article on it. I try to contribute at least one article per month, and more if possible.

In what other sites and communities do you contribute your technical knowledge?

I am active on the Microsoft BizTalk forums, where I try to help OPs with their questions, and I follow other threads on topics I have not worked with so that I learn something new.

What is it about TechNet Wiki that interests you?

TechNet Wiki is an ocean of knowledge on Microsoft Technologies. TechNet wiki is a perfect example of community collaboration where people from all the corners of the world come together and create quality content for readers. TechNet wiki shows us that if we each bring even a small cup of water with us, we can create an ocean together.

What are your favorite Wiki articles you’ve contributed?

My favorite articles that I contributed to TechNet Wiki are

Pic: At Cleland Wild Life Park, Adelaide SA

What are your top 5 favorite Wiki articles?

The articles I love the most are

Who has impressed you in the Wiki community, and why?

I have immense respect for all the authors and contributors who have shown spectacular commitment to maintaining the quality of TechNet Wiki articles. I am impressed by the work done by the great integration MVPs like Steef Jan Wiggers, Sandro, Tord, Johns 305 and my friend and mentor Abhishek Kumar (I followed his footsteps into the BizTalk world 🙂 a heartfelt thanks to him); it has really made learning BizTalk simple for all. I also love the work done by Ed, Peter and Pedro, who are the building blocks of TechNet. I am also deeply impressed and awestruck by the Hindi translations that you have done, Kamlesh, they are just awesome. And I admire Ronen Ariely, who keeps guiding me whenever I have any wiki-related doubts or issues. Big thanks to all.

Do you have any tips for new Wiki authors?

It is really simple, keep writing and sharing after-all, Sharing is Caring!!

===============================

Thank you, Mandar, for such detailed answers! It's truly an honor to have you contributing content with us!

Everyone, please join me in thanking Mandar Dharmadhikari for his contributions to the community!

Join the world! Join TechNet Wiki.

— Ninja [Kamlesh Kumar]

SQL Server on Linux special campaigns now available (running until as late as November 2018) [Updated 2/12]


 

SQL Server on Linux became generally available in October 2017, and we now have several limited-time or first-come, first-served campaigns for it. Please propose these special campaigns to customers who are considering deploying the latest SQL Server 2017 on Linux, in particular on Red Hat Enterprise Linux, for which Microsoft provides integrated support.

There are about four campaigns in total, including those that have been available since October.

 

1. From Microsoft: Special Subscription Offer (until the end of June 2018)

When you run SQL Server on Linux on a subscription basis, the subscription is available at a 30% discount. This has been offered since October as the SQL Server 2017 and Red Hat Enterprise Linux offer.

Conditions: The promotion runs until the end of June 2018. It applies to the SCE, EA and ESA subscription programs and is available for per-core Standard / Enterprise Edition.

 

2. From Red Hat: Special Subscription Offer (until the end of November 2018)

When you purchase a new RHEL subscription together with SQL Server 2017, the subscription is available at a 30% discount. This has been offered since October as the SQL Server 2017 and Red Hat Enterprise Linux offer.

Conditions: The promotion runs until the end of November 2018. It applies to Red Hat Enterprise Linux subscriptions.

 

3. From HPE: Superdome Flex Premium program (until the end of June 2018)

A special program for the new SSD appliance based on HPE's "HPE Superdome Flex" server.

Program contents: hardware sizing support, validation support at the PoC center, and special trial pricing.
Conditions: The promotion runs until the end of June 2018.

 

4. From Insight Technology: Free assessment using Attunity Replicate (first 5 companies)

An assessment based on the free service announced by Attunity and Microsoft (Attunity Replicate for Microsoft Migrations). It estimates the issues, the difficulty and the time required when migrating between different databases, and delivers the findings as a report. Insight Technology started this campaign in February.

Conditions: Data migration to SQL Server on Linux. Limited to the first 5 companies.

 

▼ Download the flyer with campaign details

 

 

 

January updates: strengthening teamwork across devices


Today's post was written by Kirk Koenigsbauer, Corporate Vice President for the Office team.

 

In the new year of 2018, we are bringing new value to Office 365 subscribers by improving collaboration between teams and offering new ways to create and manage content across devices.

Achieve more with Microsoft Teams

 

New capabilities in Microsoft Teams let you interact with apps in new ways, customize your personal workspace and take quicker action.

New ways to find and use apps — You can now include interactive cards from apps in conversations, in the same way you previously added an emoji or GIF. With just one click you can drop important information, such as a task from Trello, into a conversation or channel. With the new Store, you can more easily search for new apps and services in Teams by name or by category, such as "Project management" or "Analytics and BI".

Command apps and take quick actions in Teams — We are introducing the command box in Microsoft Teams, from which you can directly access all your previous searches and commands. You can now use the command box in Teams to interact with apps, perform tasks and navigate, as well as search across people, messages, files and apps.

 

Updates on iOS and Mac for more efficient teamwork

 

New Office 365 capabilities on iOS and Mac improve the way teams collaborate. Wherever you are, producing polished documents, presentations and spreadsheets becomes easier, along with new ways to search, preview and interact with files.

Co-authoring on iOS and Mac — We are making it easier for people to collaborate across devices by bringing co-authoring to Word, PowerPoint and Excel on iOS and Mac. Now, whether you work on a Mac, a PC or a mobile device, you will know who is working with you, see where they are in the document and see the changes they make. Co-authoring is already available in Office for Windows desktop, Office for Android and Office Online. Learn more in the Microsoft Tech Community.

AutoSave your work on Mac — Office 365 subscribers on the Mac can now also enjoy the convenience of having Excel, Word and PowerPoint documents automatically saved to OneDrive or SharePoint. Whether you work alone or with others, the latest changes are saved to the cloud automatically, so you no longer have to worry about forgetting to hit the save button. You can also use version history to review or restore earlier versions of a document at any time.

 

Drag and drop content and files on iOS — The Office and OneDrive apps for iOS now support dragging and dropping content and files. One of the most common and time-consuming tasks when creating a document is pulling together photos, charts and other objects from different sources. Now, Office 365 subscribers on iPad and iPhone can easily drag content from other Office apps or OneDrive into documents, presentations and spreadsheets. Drag-and-drop support on iOS also lets you move files into and out of OneDrive and other sources such as SharePoint and iMessage, making it easier to organize content that is spread across different apps and services.

 

Use OneDrive files from more iOS apps — OneDrive for iOS now natively supports the new iOS 11 Files app, which means iPhone and iPad users can upload, access, edit and save content to OneDrive and SharePoint from any iOS app that integrates with the Files app, one of the most requested features. Users can also mark their favorite OneDrive and SharePoint files in the Files app, making it easier to find and use the content that matters most.

Preview more file types with OneDrive for iOS — We have redesigned the list view in the OneDrive app for iOS to show more detail, making it easier to scan file names, review related information and sort files by specific attributes. The update to the OneDrive app for iOS also creates crisp thumbnails and supports previews for more than 130 file types, including Adobe Photoshop and 3D objects, so you can open, view and share the right content without leaving the app.

 

Search across your organization with Outlook for iOS — The new search experience in Outlook for iOS uses the Microsoft Graph to produce results based on your frequent contacts, upcoming trips, package deliveries and recent attachments. Combined with proactive search suggestions and a unified design, it now delivers consistent, personalized results that let you discover information across your organization faster.

Improve reading skills with Learning Tools on the Mac — Word for Mac now supports Immersive Reader and Read Aloud, two tools that are already available in Word for Windows and the mobile apps. These tools optimize how content is displayed for users with learning differences and let documents be read back aloud with the text highlighted as it is read. They make it easier to recognize and correct errors, improving reading and editing accuracy, especially for users with learning disabilities such as dyslexia.

Other updates

  • New ways to share on Yammer — Earlier this month we introduced new ways for users to share their work from anywhere in the Yammer mobile app, such as posting announcements to a group, adding animated GIFs and more.
  • Powerful, inclusive learning tools — Last week we introduced a set of powerful new tools that make teaching and learning more inclusive and collaborative, including dictation built into Office 365 and the expansion of Learning Tools to the Mac and iPhone.

 

Learn more about this month's updates for Office 365 subscribers in Office for Windows desktops | Office for Mac | Office Mobile for Windows | Office for iPhone and iPad | Office for Android. If you are an Office 365 Home or Personal subscriber, be sure to sign up for Office Insider to be among the first to use the latest and greatest Office productivity tools. Commercial subscribers on both the Monthly Channel and the Semi-Annual Channel can get early access to a fully supported release through targeted release (for clients and services). This page has more complete information about which features you can expect, and when.

New SCOM Web Console – Blog series (Post#1)


About 

This series of blogs intends to introduce the new SCOM Web Console released in System Center Operations Manager 1801. For details on implementation and other parameters, please refer to the detailed documentation. This blog is designed to be a bit more informal and describes the different features by associating them with use cases.
The series is divided into different parts, and it is recommended to read them in order for better understanding.

After going through this series of blogs a user would: 

  • Get a fair idea about the new SCOM Web Console and the different features added to it 
  • Understand the new dashboard capability 
  • Understand the different widgets and their customizations 
  • Learn about the different dashboard and widget actions 
  • Learn about the drilldown feature and how it can be used to investigate issues 
  • Get a walkthrough on how to create a custom web application on SCOM REST APIs 

What’s new in SCOM 1801 Web Console? 

The SCOM 1801 release marks the inception of a faster, modern, flexible and more reliable HTML-based Web Console. The Web Console has been given a complete reboot to ensure that it fulfills modern-day monitoring needs (and yes, it is now completely free from Silverlight!).
This is another step in our continuous commitment to the SCOM community, and we would like to give a big thanks to all our customers who voted this as the top requested feature on SCOM User Voice. We are really excited about this new Web Console and we strongly believe that you'd just fall in love with it!

Still using Silverlight? Don't worry, we've got that covered too! The Silverlight dashboards are available at a new URL:
http://<Your_Web_Server_Name>/Dashboard 

Authentication 

The new SCOM Web Console brings back the network authentication! This is what you’d see when you visit the Web Console for the first time:
 

Choose your preferred login option and you are in, welcome to the new SCOM Web Console! 

 Recommended Next 

  1. The all new Dashboards

 

New SCOM Web Console – Blog series (Post#2): The all new Dashboards


About

This blog aims at introducing the all new HTML5-based dashboard functionality added in the SCOM 1801 release. For details on implementation and other parameters, please refer to the detailed documentation.
After going through this blog, a user would:

  • Understand the new dashboard capability
  • Learn about the different dashboard actions

The all new Dashboards

The SCOM 1801 Web Console introduces the all new, fully customizable dashboards. These dashboards are built and fine-tuned keeping in mind the huge volume of IT monitoring data. This ensures that you get next to real-time monitoring information without compromising on performance.
The dashboards, being built in HTML5, support a wide range of modern browsers, including Internet Explorer, Microsoft Edge, Google Chrome and Mozilla Firefox.

Below is a screenshot of a sample dashboard:

As portrayed in the above screenshot, a dashboard is composed of multiple widgets. These widgets can be completely configured for data, display and positioning within the dashboard in a manner that best suits your needs.
Currently the dashboard supports the following widgets:

  • Alert Widget
  • State Widget
  • Performance Widget
  • Topology Widget
  • Tile Widget
  • Custom Widget

Dashboard Actions

The dashboard supports the following actions:

Creating a dashboard

You can create a new dashboard by selecting the “+ New Dashboard” option in the navigation tree as shown below:

This pops up a right pane as follows:

Wish to add this dashboard in a new MP? Don’t worry, you needn’t go back to the SCOM console. Just hit the “+” next to the MP list and you’d get a section allowing you to create a new MP and add this dashboard to it.

It really is that simple!

Deleting a dashboard

To delete a dashboard, simply hit the "Delete Dashboard" button on top and, when prompted, hit "Yes".

Editing a dashboard

The edit operation allows the user to edit the name of the dashboard as well as the layout of the widgets added to it.
Once you hit the edit action, the dashboard name becomes an editable field.

Also, all the widgets can now be dragged and resized. This is really useful when you want to club together the widgets targeted to similar objects/groups.

Once you are done, hit “Save Changes” and your layout is saved!

Adding a Widget

When you create a new dashboard it is empty and has no widgets. You can click on the “+ Add Widget” action on top of the dashboard that would lead to the right pane popping up as shown below:

There are lots of widgets that are shipped in box. All of them are discussed in detail in the next part.

Viewing in Full Screen

Wish to only view the dashboard in a big screen like a projector? That’s now possible!
Hit the “View in Full Screen” link on top of the dashboard and you’d get a full screen view of the dashboard.

Exporting Dashboards

Once created, a dashboard can easily be exported. To export a dashboard, the user simply needs to export the management pack in which the dashboard is stored. If you drill down into the exported management pack, you'd observe that the dashboard is defined as a view. For example, if a dashboard is created as shown below:

Then the generated MP would look like this (note that only a snippet from the MP is shown below to avoid clutter):

Note the TypeID of the view. This is a new TypeID introduced for HTML dashboards. The rest of the structure of the MP is quite similar to any other view.
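If you prefer scripting the export, the same thing can be done from the Operations Manager PowerShell module; the management pack display name below is only an illustrative placeholder.

# Export the (unsealed) management pack that stores the dashboard so you can
# inspect the generated view XML. "My HTML Dashboards" is a placeholder name.
Import-Module OperationsManager
Get-SCOMManagementPack -DisplayName "My HTML Dashboards" |
    Export-SCOMManagementPack -Path "C:\Temp"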

Recommended Next: New SCOM Web Console – Blog series (Post#3): The new HTML5 Widgets

New SCOM Web Console – Blog series (Post#3): The new HTML5 Widgets


About

This blog aims at introducing the all new HTML5-based widgets added as part of the new dashboards in the SCOM 1801 release. For details on implementation and other parameters, please refer to the detailed documentation.
After going through this blog a user would:

  • Understand what the new widgets are
  • Learn about the different widget actions

It is highly recommended to read the previous blog in this series for better understanding.

The new HTML5 Widgets

There are a total of 6 widgets that are shipped with SCOM 1801:

  • Alert Widget
  • State Widget
  • Performance Widget
  • Topology Widget
  • Tile Widget
  • Custom Widget

These widgets are designed to be fast and robust, and unlike the Silverlight ones they are quick to load. The widgets support a high level of customization to ensure that they can be used effectively by one and all.
One important thing to note is that the data refresh for these widgets happens in the background at the defined interval (or you can do a force refresh). This way you always have some data to work with while the new chunk is being fetched in the background. Thus, the widgets, and the dashboard in general, feel a lot more responsive.

Widgets are stored in the management packs as views. Below is a snippet from a management pack containing a "Tile widget".

Note the TypeID. This is a new TypeID introduced for HTML widgets.

Types of Widgets

Alert Widget

This widget displays the list of alerts for a given criteria. Refer to the authoring parameters section below to learn more about the customizations that can be done.

Authoring Parameters

When you start off with the authoring for alert widgets you’d see something like this:

As is clear from the above, there are 4 high-level sections:
Scope

In this section you can define the groups/objects to which this widget is to be targeted. For example, if you enter "All Windows Computers", then this widget would show the alerts targeted to "All Windows Computers".

Criteria

Here you can filter alerts based on their severity, priority, resolution state and age.
Take note of the age parameter. At times you might get better performance out of this widget if you select a suitable value for the age parameter.

Display

Here you can select which columns you wish to see in the widget. Additionally, you may select a column by which you wish to group the alerts.

Completion

Finally you give the widget a name and description and you’re done!

Optionally you may specify the refresh interval (minimum value 1 minute) in which the widget would refresh its data.

Actions

Alert widget supports the following actions:

Setting resolution state

You can select one or more alerts and select this action. Once selected it’d open the right pane where you set the state and give an optional comment.

Exporting to Excel

The data shown in the widget can be exported in Excel format. This helps when you wish to do any custom analysis on the data by leveraging the power of Excel.

Personalization

Widgets can be personalized for each user. Each user can select the columns they wish to see and the grouping they wish to apply. In other words, "Personalization" is like the "Display" section shown in authoring.
Note: The selection made in "Personalization" would always overwrite the selection made in the "Display" section. Also note that personalization data is stored in the browser on the current system, so if you switch browsers or machines, you'd have to re-personalize the widgets.

Edit & Delete Widget
As the name suggests you can edit and delete this widget from a dashboard. Note: This action is permanent and all the users having access to these dashboards would be affected by it.

State Widget

The state widget displays the health state information about the targeted entities satisfying a particular criterion. Refer to the authoring parameters section below to learn more about the customizations that can be done on this widget.

Authoring Parameters

When you start off with the authoring for state widgets you’d see something like this:

As is clear from the above, there are 4 high-level sections:
Scope

In this section you can define the groups/objects to which this widget is to be targeted. For example, if you enter "All Windows Computers", then this widget would show the health state information for "All Windows Computers".
There is another required parameter, the class.
You also have the option here to get the health state of the group itself or of the entities contained in that group, i.e. the individual objects.

Criteria

Here you can set the filter to see the entities only in particular health states.

Display & Completion

The Display and Completion sections of the state widget are similar to those of the Alert widget, except for one difference. The display columns for the state widget are defined as per the "class" selected in the Scope section, whereas the alert widget has fixed display columns.

Actions

State widget supports the following actions:

All of these are exactly the same as the actions defined for the alert widget above.

Performance Widget

The performance widget displays the information about the different counters associated with the entity. Refer to the authoring parameters section below to learn more about the customizations that can be done on this widget.

Authoring Parameters

When you start off with the authoring for performance widgets you’d see something like this:

As is clear from the above, there are 5 high-level sections:
Scope

In this section you can define the groups/objects to which this widget is to be targeted. For example, if you enter "All Windows Computers", then this widget would show performance data for objects in "All Windows Computers".

Metrics

Here you can select the object, counter and instance triplet whose data would be displayed in the widget.

Criteria

Here you can specify the age of data that you are interested in.

Display

This section is important. If you wish to visualize the widget with a graph, then the columns above act as legend columns. If you wish to see only these columns, you can check "Visualize objects by performance"; then you'd only see the table, without the graph.

Completion

This section is similar to Alert widget.

Actions

Performance widget supports the following actions:

All of these are exactly the same as the actions defined for the alert widget above, except for "Set Vertical Axis". With this action you can specify a range and the graph is scoped to it. This is useful when you are trying to drill down on particular events. This is how it looks:

Topology Widget

Have a Visio or other image of your IT topology? Wish there was a way to map health states onto these entities? Then the topology widget is what you are looking for. Refer to the authoring parameters section below to learn more about the customizations that can be done on this widget.

Authoring Parameters

As is clear from the above, there are 3 high-level sections:
Scope

This is exactly the same as for the State widget.

Display

This is the section where you upload and select your IT topology image:

Completion

Similar to State Widget.

Actions

When a topology widget is created, you'd see all the health icons placed at the top left corner. Drag the icons, place them at the relevant places on the image, and hit save once you are done. The image below shows an example of how it works.

Apart from this, the topology widget supports the standard edit and delete widget actions.

Tile Widget

Need a quick way to investigate the health of an entity and the current alerts generated on it? Tile widget is the answer for you. This is the smallest widget (size wise) in the dashboard. Below is a sample tile widget:

As can be seen clearly, the current health state of the "All Windows Computers" group is Warning. This is because the availability monitor is in a warning state, which rolls up to the group's health.

Authoring Parameters

The authoring for tile widget is very straightforward and is like a subset of “Alert Widget” as can be seen in the image below:

Actions

Apart from edit and delete widget, the user can launch the health explorer for the target entity from the tile widget. Isn't that cool? 😊

Use the health explorer to dig further into the health state of the entity and its monitors.

Custom Widget

The SCOM 1801 release marks the inception of REST-based APIs to fetch SCOM data, thus giving birth to the custom widget. With the custom widget, you can bring in any custom HTML code and it gets rendered as a widget, which can then reside alongside the other widgets in the dashboard. This brings a whole new strength to the dashboards, since the power to manipulate and render the data is completely up to you. For sample scripts that talk to the REST APIs, please visit the official documentation here.

Below is a diagram to show how custom widget works:

For details about the SCOM REST APIs, refer to the official REST API documentation. Below is a screenshot showing the custom widget in action:

Authoring Parameters

The authoring of the custom widget is straightforward and requires just the HTML source code. Note that if you have any JavaScript (which you most probably will), you'll have to insert it inline with the HTML code. Below is a screenshot taken while authoring a custom widget with some basic HTML code:

Below is an image taken from the detailed documentation:

Can you figure out which are the custom widgets above? If your answer is no, then that’s exactly how we intended it to be! The custom widget simply blends with other widgets in the dashboard and once created acts no differently from the other widgets. Well, if your answer was yes then we’ve got to give it to you, you are really insightful 😊

Actions

Custom widget supports the basic edit and delete widget actions. But this in no way limits your ability to innovate! You can define and design custom actions for your custom widgets, which can then reside in the widget container. The limit here is just your imagination!

Recommended Next

  1. The all new Drilldown experience


New SCOM Web Console – Blog series (Post#4): The all new Drilldown experience


About

This blog aims at introducing the all new drilldown experience added as part of the new dashboard with the SCOM 1801 release. For details on implementation and other parameters, please refer to the detailed documentation.
After going through this blog a user would:

  • Understand what the new drilldown feature is.
  • Get a brief understanding of how drilldown feature can be used for better monitoring.

The all new Drilldown experience

The new SCOM dashboards come with the drilldown feature which, as the name suggests, allows you to drill down into a problem and get more insights about the situation. This is helpful in root-causing the issue and in identifying which components are affected by the problem. There are five types of drilldown pages:

  1. Alert page
  2. SCOM Group/Object page
  3. SCOM Class page
  4. Rule Page
  5. Monitor Page

These pages are dashboards of their own, composed of different widgets. The widgets present in these dashboards take up the context at runtime and scope their data to the current target entity. For example, take a look at the URL below for the alert drilldown page:
http://<server_name>/OperationsManager/#/monitoring/drilldown/alert/023e5e00-e9e9-4d81-8135-052bf935062f/dashboard/d0d82ac8-215b-3b77-7a3b-8bef450796e3?mpId=da187e72-b9d7-9e16-d098-3b0a624dc38c&show_full_screen_link=false&hide_header=true

The highlighted section (the alert GUID in the URL) tells the Alert drilldown page to display data in all widgets targeted to this alert. This makes sharing the drilldown pages within the organization super easy. All you need to do is share the URL and people can start off from there.

How to drilldown?

Well, the next obvious question is how to use the drilldown feature. The answer to that is simple. Some of the widgets discussed in the previous post (The new HTML5 Widgets) and their actions allow you to click a row/entity in the data. Once you make your selection, the drilldown page is launched. For example, consider the state widget below:

Now, when you click on any of the rows above, the Group/Object drilldown page is launched. That page then has widgets displaying all sorts of relevant data targeted to the selected row from the state widget.

Which drilldown page leads to where?

The starting point of drilldown could be either a row from the alert widget, a row from the state widget or a health icon from the topology widget. The user can then navigate to the other drilldown pages by clicking items present in the widgets of the current drilldown page. The diagram below shows the paths a user can navigate during drilldown.

For example, from a State widget the user can drill down and land on the SCOM Group/Object drilldown page, and from there they can click one of the unhealthy monitors and land on the Monitor drilldown page.

Drilldown pages deep dive

Alert drilldown page

The alert drilldown page contains detailed information about the alert. Below is a screenshot of how the alert drilldown page looks:

As is clear from the above screenshot, the alert drilldown page has 6 widgets. These represent the following data, from left to right and top to bottom:

  1. Alert description: Here you will get detailed description for the alert like the workflow name, instance name etc.
  2. Alert context: All the context information for this alert would be displayed here
  3. Company knowledge: Any added company knowledge for the underlying rule or monitor for this alert would be displayed here. Read the text in the company knowledge widget above and you may discover a cool new feature 😊. Yes, you read it right, SCOM now supports adding HTML-based company knowledge right from the Web Console.
    Just hit the ellipsis icon (…) and you shall see an "Edit Company Knowledge" action. Fill in the company knowledge in the editor that shows up, choose the MP and hit save!
    Below are just some screenshots showing the flow:
    • Select the action
    • Enter the company knowledge
    • Choose MP (or create new one) and save
  4. Product knowledge: Here you will see the product knowledge added for the corresponding rule or monitor.
  5. Rule Properties: Would display the properties of the rule that generated this alert. It is blank in the above screenshot since this alert came from a monitor.
  6. History: Would show the history of changes to the resolution state of the alert.

Use cases

Scenario 1: Adding company knowledge without the burden of desktop console and Word

Many times there are alerts which come up frequently in an environment. You might want to add information for your fellow operators to help them save time, but you may not have access to the Operations Console and an active Word deployment. This is where the company knowledge widget in the alert drilldown comes in handy. Just click the alert and, once the "Alert drilldown page" opens up, start editing the company knowledge without having to depend on Word or the Operations Console.

SCOM Group/Object drilldown page

The SCOM Group/Object drilldown page shows detailed information about a SCOM Group/Object. Below is a screenshot of a sample SCOM Group/Object drilldown page:

As is evident from above the SCOM Object/Group page consists of 2 dashboards. The first one is the object information dashboard. This dashboard consists of 5 widgets:

  1. Object relationship and properties widget: This widget shows all the objects related to the current object, along with their properties to the right. You can select any of the items from this diagram and the properties on the right would get updated. Notice the small health state icons on top of each entity. These would help you figure out if there are any related entities that are in a critical state too!
  2. Warning and critical alerts generated on this object
  3. The unhealthy monitors targeted on the current object. This is a really useful widget and can effectively tell you about the root cause for the critical or warning health state of the entity.
  4. Performance metrics: This widget is like a “Object By Performance Widget” and displays all the performance metrics related information for the current target object.
  5. Classes widget: All the classes the current entity belongs to are displayed in this widget.

The second one is the Performance dashboard. This dashboard shows one performance widget each for every performance object of the current entity. Below is an example:

Use cases

This section tries to narrate a few possible scenarios which you might face regularly and where drilldown can really come in handy.

Scenario 1: A server in the environment is reporting a critical health state

In this case the user can click that server in the state widget and launch the SCOM Group/Object drilldown page. Here you will find lots of valuable information which will help you root-cause and figure out the issue.
What can you do to investigate the issue?

  • Check the related objects widget and see if some underlying entity is critical. For example, the hard drive on the server may be critical (say because of low disk space) and thus the health of the server rolled up to be critical. Now you know you need to check the hard drive. This way you can keep drilling down and get to the root cause of the issue.
  • Check the currently active alerts generated on this server. There would most probably be an alert sitting there clearly calling out the problem.
  • Check the unhealthy monitors widget if it contains any entries.
  • Check the performance metrics and see if there is any unusual behavior or spikes.

You are highly likely to discover the root problem with the above-mentioned steps. If not, then keep drilling down wherever you find anything suspicious.

Scenario 2: A server is reporting delays and unexpected behavior

Now is a good time to check the performance data collected from the server. Simply select the server from a state widget and then, once the "SCOM Group/Object drilldown page" opens, select the 2nd tab, "Performance". Here you'd see all the performance data collected from the server, and you should most probably see a spike or abnormal behavior.

 

SCOM Class drilldown page

The SCOM Class drilldown page gives information about a SCOM class. Below is a sample screenshot of how the SCOM class page looks:

This drilldown page has 3 widgets:

  1. Class properties widget displaying all the properties of the class
  2. Rule widget: Showing information about all the rules targeted to this class
  3. Monitor widget: Showing information about all the monitors targeted to this class

Use Cases

Scenario 1: Figuring out all targeted rules and monitors of a class

Not only that, you can then go ahead, look up those rules/monitors and even modify their company knowledge.

Rule drilldown page

The Rule drilldown page shows detailed information about a SCOM Rule. Below is a sample screenshot:

The Rule drilldown page has 4 widgets.

  1. The rule properties widget displaying all the properties of the rule.
  2. A rule configuration widget displaying the configuration of the rule as present in the management pack
  3. A company knowledge widget where the user can see the company knowledge for this rule. Users can also edit the company knowledge here if they have sufficient permission.
  4. A product knowledge widget where the user can see the product knowledge for the rule.

Use Cases

Scenario 1: You want to check the rule properties and/or modify the company knowledge

 

Monitor drilldown page

The Monitor drilldown page shows detailed information about a SCOM Monitor. Below is a sample screenshot:

The Monitor drilldown page has 3 widgets.

  1. The monitor properties widget displaying all the properties of the monitor.
  2. A company knowledge widget where the user can see the company knowledge for this monitor. Users can also edit the company knowledge here if they have sufficient permission.
  3. A product knowledge widget where the user can see the product knowledge for the monitor.

Use Cases

Scenario 1: You want to check the monitor properties and/or modify the company knowledge

Recommended Next

  1. Sample Custom Dashboard walkthrough

OneDrive for Business will let you restore files up to 30 days old


The newest feature in OneDrive for Business lets you restore all files to a point in time within the last 30 days. File Restore, as the new feature is called, provides a detailed overview of all changes, who made them and when, and can then return the entire OneDrive to the state before those changes. This comes in handy in the case of accidental deletion by a user, file corruption, or after a malware attack. If end users find that some of their files are missing, they can do this themselves without needing an administrator to step in.

This feature is designed for restoring a larger number of files. To restore individual files, you can use deleted file recovery or restore a previous version of a file.

The feature is being rolled out gradually to users and organizations. It can be found as Restore your OneDrive in the OneDrive settings. There are several presets to choose from, or a full timeline of the last 30 days that contains a detailed view of all changes and a slider with which you "go back in time".

File Restore is located in the OneDrive settings as Restore OneDrive.

You can pick a preset value or choose your own date and time.

A custom date is set with the slider, and below it you select the specific changes to be restored.

Finally, you confirm everything and the files will be restored to the desired state.

 

For it to work properly, version history must be enabled so that OneDrive can restore previous versions. Files that are no longer in the recycle bin also cannot be restored. You can find more detailed information in the official documentation.

 

- Matěj Borský, TheNetw.org s.r.o.

 

New SCOM Web Console – Blog series (Post#5): Sample Custom Dashboard walkthrough


About

SCOM 1801 marks the release of REST APIs for the SCOM SDK. Using these APIs, a user can create any custom client application of their own. This blog aims at walking the user through a scenario where a complete standalone application is developed and deployed using the SCOM REST APIs. This application can even be brought inside the Web Console using the custom widget and can be made to reside next to any of the widgets shipped by SCOM.

It is highly recommended to go through the other previous blogs in the series to better understand the content provided here.

What does the sample comprise?

The sample is a JavaScript-based application that communicates with the SCOM SDK and displays data to the user, who can take further action on it. We have also tried to stretch the limits of the custom widget here by not limiting it to use as a widget, but building it as a standalone application in itself. This application could then be rendered as a dashboard or a widget; we leave the choice to you. The sample consists of two major sections:

  • Overview Dashboard
  • Search Dashboard

Overview Dashboard

This is a view designed to give a quick overview of the current monitoring state. It can act as a starting page from which you can proceed with further actions. Below is a screenshot of how the Overview Dashboard looks:

As displayed in the image above the Overview Dashboard has two major sections:

  • Active Alerts
  • Health States

Active Alerts

This section shows the active alerts for the past 7 days in three categories, namely critical, warning and informational alerts. If you are interested, you may dive into the individual alerts by clicking "View Details". For example, if you click on "View Details" under critical alerts, you'd see a view like this:

Need more information? You got it!
Each of these rows is clickable and would take you to our very own drilldown pages (refer to the previous post in this series for more details on drilldown).

Once an alert above is clicked, it launches the following drilldown page:

As you can see, there is a lot more detail about the alert here which would help you in further investigating the issue.

Health States

For a given target class and a given target object group, you'd see the health states in three buckets, namely unhealthy, in maintenance mode and healthy. The target class and target group fields can be modified, and the health state information displayed below updates as per the new input. By default this view shows health state information for the "Windows Computer" SCOM class.
Similar to alerts, you can see the details of an entity by clicking "View Details":

And yes, you guessed it right. We have drilldown pages for these entities as well! Here's how they look:

There is a bunch of information about the entity here, such as related objects (their health states and properties), the alerts targeted to this entity, performance metrics and the classes this entity belongs to. Most of the entries shown above can be drilled down further, giving a more detailed view. Again, going over all the details is out of scope for this blog and we'd strongly recommend going through the detailed documentation.

Search Dashboard

Know what you are looking for but hate going over multiple pages and views in the current desktop or web console? Then this search section is designed just for you!
Here you can search for any active alert (for the last 7 days), SCOM object, SCOM group, SCOM class, rule or monitor. The search is asynchronous and quite fast. It helps you choose the starting point from which you can drill down further. Below is a screenshot of how search looks (say you search for the term "health"):

And that’s not it! Remember drilldown? From all of these search results you can jump to their drilldown pages and proceed with any action you may wish!
Below are a couple of screenshots portraying what you can expect after clicking these results (you have already seen the alert and object drilldown pages above when we were at <<Overview Dashboard>>):

Clicking a rule:

Clicking a monitor:

Well, this is just the beginning! The intention of walking you through this sample application was to show you the power that the custom widget, and in turn the SCOM REST APIs, provide. With a few lines of code, one can achieve functionality which would otherwise have taken a lot more steps.
Feel free to go through the SCOM REST API documentation and create your own user stories and your own custom widgets!

Deployment

There are multiple ways you can deploy the sample dashboard discussed in this blog:

  1. Importing the management pack
  2. Adding it alongside SCOM Web Console as a JavaScript application (with this the application will have its own URL and may or may not be added as a custom widget)
  3. Adding the two sections as individual widgets

Note: From now on we will refer to the content available in the attached zip file: Custom-Widget

Importing the management pack

A management pack containing the two individual sections (overview and search) is available under "Custom Widget\Management Pack".

Import this management pack and you should see two dashboards, Overview and Search, containing the two sections respectively.
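If you prefer to script the import, it can be done with the Operations Manager PowerShell module; the path and file name below are placeholders, so point them at the actual MP file from the zip.

# Import the sample management pack shipped with the zip. Adjust the path to
# wherever you extracted "Custom Widget\Management Pack"; the file name is illustrative.
Import-Module OperationsManager
Import-SCOMManagementPack -FullName "C:\Custom Widget\Management Pack\SampleDashboards.xml"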

Adding it alongside SCOM Web Console as a JavaScript application

  1. Go to the directory where the SCOM 1801 Web Console is installed. Ex. C:\Program Files\Microsoft System Center\Operations Manager\WebConsole\Dashboard
  2. Create a folder named “custom”. Note that you may choose any folder name, this is just an example
  3. Copy the contents placed under "Custom Widget\Source Code" into this folder
  4. Go to SCOM Web Console
  5. Create a dashboard
  6. Click Add Widget and select Custom Widget from the dropdown.
  7. When asked for source code enter the following:
    <iframe src="http://your_server_name/OperationsManager/custom" style="width: 100%; height: 100%"></iframe>
    Here replace your_server_name with your web server name
  8. Hit save and you are done!

Adding the two sections as individual widgets

  1. Go to SCOM Web Console
  2. Create a dashboard
  3. Click Add Widget and select Custom Widget from the dropdown.
  4. When asked for source code pick the contents of any one file from under "Custom Widget\Individual Sections"
  5. Hit save
  6. Repeat 4 and 5 for the other one or repeat 2-5 if you wish to add these in separate dashboards

Documentation

Discussing the technical approach of how the sample widget presented here works is out of scope of this blog. The code has been documented thoroughly and documentation has been generated using JSDoc. The documentation can be found under "Custom Widget\Source Code\docs".

Start with index.html and that would guide you through all the code.

How SCOM REST APIs can be used

Refer to "Custom Widget\Source Code\dist\js\helpers\data-helper.js" for an example of how the SCOM REST APIs can be used.

For more details on the SCOM REST APIs, please refer to https://docs.microsoft.com/en-gb/rest/operationsmanager/
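If you want to experiment with the same calls from PowerShell before writing any JavaScript, a minimal sketch is shown below. The endpoint paths and the request body shape follow the pattern in the documentation linked above; treat them as assumptions and verify them against your SCOM version, and replace yourscomserver with your web console server name.

# Authenticate against the SCOM Web Console REST endpoint using Windows authentication.
# The body is the base64-encoded authentication mode, per the documented samples.
$scomServer = "yourscomserver"    # placeholder: your SCOM web console server
$authBody   = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("Windows")) | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "http://$scomServer/OperationsManager/authenticate" `
    -Body $authBody -ContentType "application/json" `
    -UseDefaultCredentials -SessionVariable scomSession | Out-Null

# Query active critical and warning alerts; the criteria/displayColumns shape
# mirrors what the in-box widgets send (assumed from the documented samples).
$alertQuery = @{
    classId        = $null
    criteria       = "(Severity = '2') OR (Severity = '1')"
    displayColumns = @("severity", "monitoringobjectdisplayname", "name", "age", "repeatcount")
} | ConvertTo-Json

$alerts = Invoke-RestMethod -Method Post `
    -Uri "http://$scomServer/OperationsManager/data/alert" `
    -Body $alertQuery -ContentType "application/json" `
    -UseDefaultCredentials -WebSession $scomSession

# Display the returned rows (property name assumed from the documented response shape).
$alerts.rows | Format-Table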

References

For the sample app discussed above, the following third-party libraries were used:

  1. jQuery: <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
  2. jQuery UI: <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
  3. MetisMenu: <script src="https://cdnjs.cloudflare.com/ajax/libs/metisMenu/2.7.1/metisMenu.min.js"></script>
  4. Bootstrap: <script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js"></script>
  5. jQuery DataTables: <script src="https://cdn.datatables.net/1.10.16/js/jquery.dataTables.min.js"></script>
  6. Bootstrap DataTables: <script src="https://cdn.datatables.net/1.10.16/js/dataTables.bootstrap.min.js"></script>
  7. Responsive DataTables: <script src="https://cdn.datatables.net/responsive/2.2.0/js/dataTables.responsive.min.js"></script>

[Assessment] ShadowIT


Aus der Serie "Microsoft Security Assessments" diesmal das Thema ShadowIT Assessment:

Ziel:

Mit dem ShadowIT Assessment soll die aktuelle Nutzung von allen (!) Cloud Apps aufgezeigt werden, insb. geht es dabei darum dem Kunden ein umfängliches Bild vorzulegen, welches neben den bekannten Apps (z.B. Exchange Online) auch unbekannte oder ungewollte Apps (z.B. Cloud Storage Apps wie Dropbox oder Google Drive u.a.) und den Umfang der tatsächlichen Nutzung (Quantität der Daten) darstellt.

Daraus abgeleitet eine (Implementierungs-) Roadmap für die nächsten 0-9 Monate.

Für den Partner sollten sich daraus die Chanc für ein Umsetzungsprojekt mit dem Ziel der Eindämmung der ungewollten Nutzung von Cloud Apps ergeben.

Content:

  1. Shadow IT Assessment-Close-out Presentation-v1.0.pptx
    Summary and discussion of the next steps.
  2. Shadow IT Assessment-Engagement Delivery Guide-v1.0.docx
    The detailed guide for carrying out the assessment.
  3. Shadow IT Assessment-Introduction to CAS-v1.0.pptx
    Introduction to "Cloud App Security" (CAS), i.e. the tool used for the analysis.
  4. Shadow IT Assessment-Kick-off Meeting-v1.0.pptx
    PPT for the kick-off meeting.
  5. Shadow IT Assessment-On-site Engagement Overview-v1.0.pptx
    Overview of what will happen in the on-site workshop/engagement and when, plus the definition of "success" and the opportunity for questions & answers.
  6. Shadow IT Assessment-Questionnaire-v1.0.docx
    Questionnaire for capturing the customer's overall cloud usage and cloud connectivity.
  7. Shadow IT Assessment-SoW Example-v1.0.docx
    Example of a Statement of Work (SoW).

Approach

FYI: I briefly demonstrated the approach in the Modern Workplace ChampCall in February (from about 26:40), so feel free to watch it, because there I also give one or two tips that are not covered in the documentation! 😉

The ShadowIT Assessment is about revealing (unwanted) cloud app usage. Besides an understanding of how the customer handles the overall topic of cloud on the one hand, and what the infrastructure for (cloud) app usage looks like on the other (=> Questionnaire), this also requires data that makes the analysis of cloud usage possible.

For this, "real data" from, for example, a proxy or firewall is needed for the analysis. (See the list of supported logs/appliances.)

IMPORTANT: since this data typically contains PII, which under certain circumstances could even lead to consequences for individual employees, it is essential that the customer's works council is involved from the very beginning.
We also recommend enabling data anonymization for the assessment!

In other words, for the assessment to succeed the customer must provide logs for a meaningful period of time (e.g. 24 Dec-6 Jan is not that meaningful 😉).

During the course of the assessment these logs must be uploaded to a Cloud App Security (CAS) instance and analyzed. This can either be the customer's existing CAS environment or, if the customer does not use CAS yet and no additional license costs should be incurred for the assessment, a CAS trial instance can be used.

A new snapshot report is then created in CAS, into which the available data is uploaded.

And here, once again, the urgent reminder to take the topic of "anonymization" seriously, to make sure the checkbox is set correctly when uploading, and to involve the works council/HR in the procedure!

Once the data has been uploaded, CAS analyzes it and displays the results accordingly in its UI (individual snapshots can be selected separately, but this is usually not necessary for the assessment).

This result is then consolidated by the partner carrying out the assessment and embedded in the close-out presentation, and from it (also with reference to the questionnaire) a roadmap is created for the customer and presented accordingly.

 

Result/Outcome:

In the close-out presentation the customer is shown the current state of their shadow IT and, derived directly from it, a recommendation for action and implementation. The goal is that this leads to an engagement to implement the recommendations, split into short-, medium- and long-term activities.

An important goal of the whole undertaking is that this analysis is not carried out just once but continuously, so that a kind of "ShadowIT analysis as a service", or beyond that a "cloud data management as a service", is achieved, ideally delivered by the partner who performed the assessment.

Call to action:

If, as a Microsoft partner, you would like to carry out this (or any of the other) assessments and need (technical) support, please contact your respective Partner Development Manager and Partner Technical Strategist.

This assessment in particular offers the chance to enrich it with corresponding "on-premises" services, i.e. using additional tools (in-house developments, MAP, etc.) to uncover internal apps/data sources and incorporate them into the recommendations accordingly. A comprehensive master data management initiative could also be a corresponding outcome. There are no limits to the imagination here.

This approach can and should also feed into a GDPR/DSGVO review, because if I do not know "where" I have (which) data, I will never be in a position to achieve GDPR compliance.

Last but not least, here is the link to download the assessment materials (currently in English only - if there is a *real* need for and impact of a localization, please let me know!)

Microsoft Security Assessments


Note: I will complete the descriptions of the assessments over the next few days and gradually add the matching links above.

The question often comes up: "How do I even get started?" - especially when it comes to winning over customers on the sensitive topic of "security".

To make it easier for you, dear Microsoft partners, to start security projects with your customers, we have designed 4 assessments that hand you a toolkit of ready-made (!) content and tools:

  • Rapid Cyberattack Assessment 
    Determine customer's ability to prevent, detect, and respond to ransomware with Windows
  • GDPR Detailed Assessment
    Assess a customer's GDPR readiness and maturity across technology, people and processes with Microsoft 365

You can use these assessments to go from 0 to 100 in a very short time, creating assessments aimed at the subsequent implementation of the findings and repeating them as often as possible on a "standardized" basis.

If you have questions about the assessments or need support with technical readiness, feel free to contact your respective PDM/PTS.

 

 

Microsoft Office 365 x Superhub x The Madison Group


Microsoft Office 365 x Superhub x The Madison Group

Breaking down geographical boundaries to enhance collaboration!

The Madison Group was founded in Hong Kong in 1996 and represents a number of international high-end lifestyle brands, so real-time communication is critical to its business operations. Microsoft Hong Kong partnered with Superhub to deliver O365+, enabling The Madison Group to collaborate in a secure, convenient and easy-to-use way, increasing team communication and boosting productivity.

For The Madison Group CEO Carsten Nittke, the key elements of an ideal email solution are convenience, security, efficiency and ease of use. He wants all offices to be able to communicate freely through different channels and to access and share knowledge and information at the touch of a finger, so the communication tools have to be simple.

In fact, the situation a year ago was quite different. At the time they consulted their IT department, which suggested working with Superhub, a Microsoft cloud solutions direct partner, to store company and customer data in a secure and protected cloud space, freeing up manpower, speeding up the flow of information and making the systems more secure. Today they can access their data even when away from the office.

Retire Those Old Legacy Protocols


Hello, Paul Bergson back again, and I wanted to bring up another security topic. Enterprises have put a lot of work into protecting their infrastructure with patching and server hardening, but one area that is often overlooked when it comes to credential theft is legacy protocol retirement. These legacy protocols were built before the security requirements of today's enterprises were well understood.

To better understand my point, consider American football: it is very fast and violent. Professional teams spend a lot of money on their quarterbacks. Quarterbacks are often the highest-paid player on the team and the one who guides the offense. There are many legendary offensive linemen who dominated the opposing defensive linemen during their playing days. Over time, though, these legends begin to get injured and slow down due to natural aging. Imagine a quarterback at the peak of his career, making over $10 million in salary, being protected by a legendary offensive line that is 10 years beyond its prime. If you think of these older protocols like offensive linemen protecting the operating system and its data, they need patches (injured) and they get old and slow (weak encryption, etc.). Unfortunately, I see all too often enterprises running old protocols that have been compromised, with exploits in the wild built to attack these weak protocols. No general manager would ever risk the safety/security of his investment in his key offensive player(s); neither should the teams responsible for protecting the safety and security of their IT enterprise.

Attack Surface Reduction can be achieved by disabling support for insecure legacy protocols.

  • TLS 1.0 & 1.1 (As well as all versions of SSL)
  • Server Message Block v1 (SMBv1)
  • LanMan (LM) / NTLMv1
  • Digest Authentication

The SSL protocol is broken and can no longer be fixed; threats such as POODLE (see CVE-2014-3566) still exist, so the SSL protocol should be retired. TLS 1.0 is no longer considered secure either, and the PCI Security Standards Council has set June 30, 2018 as the deadline for disabling all SSL and TLS 1.0, with the recommendation to move to TLS 1.2. *1

The WannaCrypt ransomware attack worked by first infecting an internal endpoint. The initial attack could have started from phishing, a drive-by download, etc. Once a device was compromised, it used an SMBv1 vulnerability in a worm-like attack to spread laterally across the internal network. *2

A second round of attacks, named Petya, occurred about a month later and also started by infecting an internal endpoint. Once it had compromised a device, it expanded its capabilities: in addition to moving laterally via the SMB vulnerability, it used automated credential theft and impersonation to increase the number of devices it could compromise. *3 *4

WannaCrypt and Petya are just two of many assaults that leverage SMBv1. For LanMan and NTLMv1, open-source tools are readily available to capture and crack credentials. This is why it is becoming so important for enterprises to retire old, outdated equipment, even if it still works!

The rest of this document covers details of the protocols and how they can be removed from the enterprise's environment.

Ned Pyle wrote a great blog on the retirement of SMB1 that I have borrowed from. This is a great article that you will want to read if you haven't already. The link to his article can be found in the "References" below.

Server Message Block v1 (SMBv1)

With SMB1 you don't have access to modern security features that SMB 3 provides. *5

Updated security features are found below

  • Pre-authentication Integrity (SMB 3.1.1+)
    • Protects against security downgrade attacks
  • Secure Dialect Negotiation (SMB 3.0, 3.02)
    • Protects against security downgrade attacks
  • Encryption (SMB 3.0+). Prevents inspection of data on the wire, MiTM attacks
    • In SMB 3.1.1 encryption performance is even better than signing
  • Insecure guest auth blocking (SMB 3.0+ on Windows 10+)
    • Protects against MiTM attacks
  • Better message signing (SMB 2.02+)
    • HMAC SHA-256 replaces MD5 as the hashing algorithm in SMB 2.02, SMB 2.1 and AES-CMAC replaces that in SMB 3.0+
    • Signing performance increases in SMB2 and 3

SMB1 is Very Inefficient When Compared to SMB 3.0 *5

  • Larger reads and writes (2.02+)
    • More efficient use of faster networks or higher latency WANs
    • Large MTU support.
  • Peer caching of folder and file properties (2.02+)
    • Clients keep local copies of folders and files via BranchCache
  • Durable handles (2.02, 2.1)
    • Allow for connection to transparently reconnect to the server if there is a temporary disconnection
  • Client oplock leasing model (2.02+)
    • Limits the data transferred between the client and server, improving performance on high-latency networks and increasing SMB server scalability
  • Multichannel & SMB Direct (3.0+)
    • Aggregation of network bandwidth and fault tolerance if multiple paths are available between client and server, plus usage of modern ultra-high throughout RDMA infrastructure
  • Directory Leasing (3.0+)
    • Improves application response times in branch offices through caching

Use Cases Where SMB1 is Still Required

  • Still running XP or WS2003 (Or older)
    • Out of support (Unless there is a custom support agreement in place)
  • Old management software that demands admins browse the 'network neighborhood' master browser list
  • Old network storage device
  • Old multi-function printers with old firmware in order to "scan to share"

The above listed services should all be scheduled for retirement since they risk the security integrity of the enterprise. The cost to recover from a malware attack can easily exceed the costs of replacement of old equipment or services.

Retirement

The SMB1 protocol can be removed via Group Policy, PowerShell or Server Manager. *6
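For reference, here is a short sketch of the common PowerShell options; which one applies depends on the OS version, and any removal should be tested in a lab before being deployed broadly.

# Windows 8.1 / Windows 10: remove the SMB1 optional feature
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol -NoRestart

# Windows Server 2012 R2 / 2016: uninstall the SMB1 feature
Remove-WindowsFeature -Name FS-SMB1

# Older SMB stacks (Windows Server 2012 and earlier): disable the SMB1 server component
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force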

LanMan (LM) / NTLM v1

"We are aware of detailed information and tools that might be used for attacks against NT LAN Manager version 1 (NTLMv1) and LAN Manager (LM) network authentication. Improvements in computer hardware and software algorithms have made this protocol vulnerable to published attacks for obtaining user credentials." *7

LMHash was developed pre-Windows NT. It is now considered extremely insecure and we STRONGLY encourage our customers to disable its use. Although NTLMv1 is a newer protocol, it too is considered insecure, and we STRONGLY encourage its retirement as well.

Utilizing a Group Policy applied against clients and/or servers, these legacy protocols can be eliminated from use.

Possible values

  • Send LM & NTLM responses
  • Send LM & NTLM - use NTLMv2 session security if negotiated
  • Send NTLM responses only
  • Send NTLMv2 responses only
  • Send NTLMv2 responses only. Refuse LM
  • Send NTLMv2 responses only. Refuse LM & NTLM
  • Not Defined

Retirement

  • The recommended setting would be "Send NTLMv2 responses only. Refuse LM & NTLM". If NTLMv1 is in use, at a minimum "Send NTLMv2 responses only. Refuse LM" should be configured for your domain environment.
  • Administrators are strongly encouraged to prevent the LM hash from being stored in the local SAM database and in Directory Services. NoLMHash can be implemented by setting "Network security: Do not store LAN Manager hash value on next password change" to Enabled. By default, this should already be set (see the registry sketch below).
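A minimal registry sketch of those two recommendations; the Group Policy settings above write the same LSA values, so this is only an illustration to test before rolling out broadly.

$lsa = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa'

# 5 = "Send NTLMv2 responses only. Refuse LM & NTLM"
New-ItemProperty -Path $lsa -Name LmCompatibilityLevel -PropertyType DWord -Value 5 -Force | Out-Null

# 1 = do not store the LM hash at the next password change (NoLMHash)
New-ItemProperty -Path $lsa -Name NoLMHash -PropertyType DWord -Value 1 -Force | Out-Null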

Potential Impact

As with any changes to your environment, it is recommended to test this prior to pushing into production. If there are legacy protocols in use, an enterprise does run the risk of services becoming unavailable. It would be in the best security interests, if insecure legacy protocols are in use, to chart out a plan to retire/migrate the devices that still require these protocols.

TLS/SSL

"Many operating systems have outdated TLS version defaults or support ceilings that need to be accounted for. Usage of Windows 8/Server 2012 or later means that TLS 1.2 will be the default security protocol version." *8

Security Protocol Support by OS Version:

Windows OS              | SSLv2         | SSLv3    | TLS 1.0 | TLS 1.1       | TLS 1.2
Windows Vista           | Enabled       | Enabled  | Default | Not Supported | Not Supported
Windows Server 2008     | Enabled       | Enabled  | Default | Disabled      | Disabled
Windows 7 (WS2008 R2)   | Enabled       | Enabled  | Default | Disabled      | Disabled
Windows 8 (WS2012)      | Disabled      | Enabled  | Enabled | Enabled       | Default
Windows 8.1 (WS2012 R2) | Disabled      | Enabled  | Enabled | Enabled       | Default
Windows 10              | Disabled      | Enabled  | Enabled | Enabled       | Default
Windows Server 2016     | Not Supported | Disabled | Enabled | Enabled       | Default

To disable the use of security protocols on a device, changes need to be made within the registry. Once the changes have been made a reboot is necessary for the changes to take effect.
https://technet.microsoft.com/en-us/library/dn786418.aspx

The default TLS/SSL settings are all enabled, with the exception of SSL 2.0 Client, which is disabled. The registry keys below are the protocols that can be configured. If you want to disable a protocol, just create a new "Enabled" entry set to 0 under the specific sub-key you want to disable.

In the settings below, both TLS 1.0 and TLS 1.1 are disabled.

Open up the registry (RegEdit) and browse to:
Computer > HKLM > System > CurrentControlSet > Control > SecurityProviders > SCHANNEL > Protocols

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]

"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]

"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]

"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]

"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
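If you prefer to script these changes rather than edit the registry by hand, here is a minimal sketch that writes the same keys and values as above; a reboot is still required afterwards.

$protocols = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'

foreach ($protocol in 'TLS 1.0','TLS 1.1') {
    foreach ($role in 'Client','Server') {
        # Create the sub-key and set Enabled = 0 to disable the protocol for this role
        $key = Join-Path $protocols "$protocol\$role"
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name Enabled -PropertyType DWord -Value 0 -Force | Out-Null
    }
}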

Building a migration plan to move to TLS 1.2. *8

Note: Disabling TLS 1.0 could prevent clients from connecting to Windows Server 2008 R2 (2008 SP2 is not covered) and Windows 7, unless KB3080079 has been applied on the device you are connecting to, and you are using the latest release of the RDC client. *10

Windows 8 and Server 2012 and later already have this capability built-in.

You will also need to ensure that the destination device has been configured to "Negotiate" its RD session. *11

Digest/WDigest

Digest/WDigest was introduced back with Windows XP/Server 2003, and it has long since been found to be insecure. Microsoft highly recommends that this protocol be disabled. Even if you have installed KB2871997, Digest/WDigest still needs to be disabled on the device; KB2871997 provides the ability to disable its use, but by itself does not prevent its use. *13

Prior to disabling Digest/WDigest, you will want to ensure it isn't in use. This can be accomplished by inspecting the event logs and/or ensuring that reversible encryption is not enabled in Active Directory Domain Services. For complete details, see below.
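Once you have confirmed it is not in use (see the checks later in this post), a minimal sketch of disabling WDigest through the registry, using the UseLogonCredential value documented alongside KB2871997:

$wdigest = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest'

# 0 = do not keep WDigest (plain-text) credentials in memory
New-ItemProperty -Path $wdigest -Name UseLogonCredential -PropertyType DWord -Value 0 -Force | Out-Null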

Checking for the Use of These Legacy Protocols

SMBv1

From an elevated PowerShell prompt:
Get-WindowsFeature FS-SMB1

The PowerShell command above will tell you whether or not the protocol is installed on a device. Ralph Kyttle has written a nice blog on how to detect, at scale, devices that have SMBv1 enabled. *9

Once you have found devices with the SMBv1 protocol installed, they should be monitored to see if the protocol is even being used. There is a PowerShell command to audit the use of SMBv1: *5
Set-SmbServerConfiguration -AuditSmb1Access $true

Open up Event Viewer and review any events that might be listed.
Applications and Services Logs > Microsoft > Windows > SMB Server > Audit
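A quick way to pull those audit entries with PowerShell (a sketch; the log is only populated once SMB1 auditing has been enabled as above):

Get-WinEvent -LogName 'Microsoft-Windows-SMBServer/Audit' -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message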

LM/NTLMv1

To find the use of LM there are three choices: NetLogon logging, network sniffing, or, if you are on Windows Vista/Server 2008 or above, the Event Viewer. Rather than touch on everything here, it may be easier to take a look at https://blogs.technet.microsoft.com/ken_brumfield/2008/08/08/ntlmv2-or-not-ntlmv2-that-is-the-question/ and https://blogs.msdn.microsoft.com/openspecification/2010/05/03/ntlm-v1-no-excuse-me-ntlm-v2-oh-no-you-were-right-its-v1/ for a little more information on how to do this.

If you are on an operating system OLDER than Windows Server 2008/Vista, or for some reason you cannot enable the necessary security logging, then a network sniffing tool will be required to determine whether NTLMv1 is in use. Unfortunately, in this case you have to look at the NTLM conversation itself to find which version of NTLM is in use. Ned Pyle wrote a great article on how to capture and differentiate between v1 and v2, which can be found at https://blogs.technet.microsoft.com/askds/2012/02/02/purging-old-nt-security-protocols/.

*13

TLS 1.0

To help determine a specific client's TLS use, Qualys SSL Labs has a nice tool (if the device has internet access). The tool provides both client and web server testing. *14

From an enterprise perspective you will have to look at the enabled protocols on each device via the registry, as shown above (see the sketch below).
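A sketch of enumerating those SCHANNEL settings with PowerShell rather than browsing the registry manually; keys without an Enabled value fall back to the OS default.

$protocols = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'

Get-ChildItem -Path $protocols -Recurse | ForEach-Object {
    [pscustomobject]@{
        Key     = ($_.Name -split 'Protocols\\')[-1]    # e.g. "TLS 1.0\Client"
        Enabled = (Get-ItemProperty -Path $_.PSPath -Name Enabled -ErrorAction SilentlyContinue).Enabled
    }
}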

Digest/WDigest

Digest authentication requires a reversibly encrypted copy of the user's password to be stored in Active Directory Domain Services (AD DS). To check whether this is enabled in AD DS, review your user accounts to see whether the "Store password using reversible encryption" box is checked. *15

Get-ADUser -Filter 'userAccountControl -band 128' -Properties userAccountControl

If it is found to be enabled, inspect the event logs prior to disabling it so that current applications are not impacted. Event ID 4776 will appear in the Security event log for any use of Digest/WDigest. To ensure that you are capturing authentication events, make sure "Audit Credential Validation" is set to Enabled on all of the enterprise's DCs. *16
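For example, a sketch of pulling those credential validation events from a domain controller's Security log:

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4776 } -MaxEvents 100 |
    Select-Object TimeCreated, Message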

I think this is a topic many of you hadn't thought about, and hopefully it makes your to-do list: research your environment and find out what insecure protocols you might have running. Best of luck in your research and, oh by the way, "SKOL" Minnesota Vikings.

References

  1. https://www.ssllabs.com/
  2. https://docs.microsoft.com/en-us/msrc/customer-guidance-for-wannacrypt-attacks
  3. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Ransom:Win32/Petya
  4. https://blogs.technet.microsoft.com/mmpc/2017/06/27/new-ransomware-old-techniques-petya-adds-worm-capabilities/
  5. https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
  6. https://support.microsoft.com/en-us/help/2696547/how-to-detect-enable-and-disable-smbv1-smbv2-and-smbv3-in-windows-and
  7. https://support.microsoft.com/en-us/help/2793313/security-guidance-for-ntlmv1-and-lm-network-authentication
  8. https://www.microsoft.com/en-us/download/details.aspx?id=55266
  9. https://blogs.technet.microsoft.com/ralphkyttle/2017/04/07/discover-smb1-in-your-environment-with-dscea/
  10. https://support.microsoft.com/en-us/help/3080079/update-to-add-rds-support-for-tls-1-1-and-tls-1-2-in-windows-7-or-wind
  11. https://technet.microsoft.com/en-us/library/ff458357.aspx
  12. https://blogs.technet.microsoft.com/askpfeplat/2016/04/18/the-importance-of-kb2871997-and-kb2928120-for-credential-protection/
  13. https://blogs.technet.microsoft.com/askds/2012/02/02/purging-old-nt-security-protocols/
  14. https://blog.pcisecuritystandards.org/are-you-ready-for-30-june-2018-sayin-goodbye-to-ssl-early-tls
  15. https://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/22e38464-acb3-48cd-87e5-c554ef6e3ccd.mspx?mfr=true
  16. https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772679(v=ws.10)

List of Azure Active Directory Audit Activities


Hi all,

Audit logs in Azure Active Directory help customers gain visibility into user and group management, managed applications, and directory activities in their cloud-based directory.

Using the logs you can detect and investigate security incidents, and review important configuration changes.

By using the Graph API, which provides programmatic access to Azure AD, you can get a detailed list of all auditing activities. Because access to the Graph API is based on REST calls, you can use PowerShell scripts.

I wrote a quick script, based on Paulo Marques's post

Script code is here, just remember to change YOUR_Domain_Name
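In case you want to roll your own, here is a minimal, untested sketch of the same idea. The app registration, client ID/secret placeholders, endpoint and api-version are assumptions based on the Azure AD Graph reports API of that era; adjust them to match the linked script and your tenant.

$tenant       = 'YOUR_Domain_Name'                # e.g. contoso.onmicrosoft.com
$clientId     = '<application id>'                # hypothetical app registration with audit read permissions
$clientSecret = '<application key>'

# 1. Acquire a token for the Azure AD Graph API (client credentials flow)
$tokenBody = @{
    grant_type    = 'client_credentials'
    resource      = 'https://graph.windows.net'
    client_id     = $clientId
    client_secret = $clientSecret
}
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenant/oauth2/token" -Body $tokenBody).access_token

# 2. Page through the audit events and list the category/resource type/activity columns
$uri = "https://graph.windows.net/$tenant/activities/audit?api-version=beta"
do {
    $page = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }
    $page.value | Select-Object category, activityResourceType, activity
    $uri = $page.'@odata.nextLink'
} while ($uri)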

The full list is here (updated on 12/2/2018 and probably subject to change)

category activityResourceType activity
Account Provisioning Application process escrow
Account Provisioning Application administration
Account Provisioning Application directory operation
Account Provisioning Application synchronization rule action
Account Provisioning Application import
Account Provisioning Application export
Account Provisioning Application other
Application Proxy Application update application
Application Proxy Application delete application
Application Proxy Application add application
Application Proxy Application update application single sign-on mode
Application Proxy Directory enable desktop sso for a specific domain
Application Proxy Directory enable application proxy
Application Proxy Directory disable desktop sso
Application Proxy Directory disable passthrough authentication
Application Proxy Directory enable desktop sso
Application Proxy Directory disable desktop sso for a specific domain
Application Proxy Directory disable application proxy
Application Proxy Directory enable passthrough authentication
Application Proxy Resource register connector
Application Proxy Resource add application ssl certificate
Application Proxy Resource delete ssl binding
Automated Password Rollover Application automated password rollover
B2C Application get v1 and v2 applications
B2C Application retrieve v2 application permissions grants
B2C Application get v2 applications
B2C Application add v2 application permissions
B2C Application delete v2 application permission grant
B2C Application update v2 application permission grant
B2C Application delete v1 application
B2C Application create v2 application
B2C Application retrieve v2 application service principals
B2C Application update v2 application
B2C Application get v1 application
B2C Application update v1 application
B2C Application retrieve v2 application service principals in the current tenant
B2C Application get v2 application
B2C Application delete v2 application
B2C Application get v1 applications
B2C Application create v1 application
B2C Authorization get all certificates
B2C Authorization user authorization: access is denied
B2C Authorization gettenantprovisioninginfo
B2C Authorization create certificate
B2C Authorization retrieve v2 application service principals
B2C Authorization create admin policy
B2C Authorization adminpolicydatas-removeresources
B2C Authorization gettenantinfo
B2C Authorization getkeysets
B2C Authorization get list of tags for all admin flows for all users
B2C Authorization user authorization: user granted 'cpimservice admins' access rights
B2C Authorization adminuserjourneys-removeresources
B2C Authorization get tenant policy list
B2C Authorization get the details of an admin flow
B2C Authorization get tenant defined idp list
B2C Authorization delete a b2c directory resource
B2C Authorization get tenant defined local idp list
B2C Authorization get allowed application claims for user journey
B2C Authorization get the set of available supported cultures for cpim
B2C Authorization create new idp
B2C Authorization user authorization: user was granted 'authenticated users' access rights
B2C Authorization get trustframework policy as xml
B2C Authorization gets a cpim key container in jwk format
B2C Authorization add v2 application permissions
B2C Authorization get b2c directory resources in a resource group
B2C Authorization validate move resources
B2C Authorization create a custom domains in the tenant
B2C Authorization get user journey list
B2C Authorization create trustframework policy with configurable prefix
B2C Authorization deleteidentityprovider
B2C Authorization deleteoutputclaim
B2C Authorization gets cpim key as a certificate
B2C Authorization linkidentityprovider
B2C Authorization deleteinputclaim
B2C Authorization getinputclaims
B2C Authorization create trustframework policy to store
B2C Authorization gettrustframeworkwithouttenantobjectid
B2C Authorization updatetrustframeworkswithtenantobjectid
B2C Authorization recoverarchivedtenantwithtenantobjectid
B2C Authorization put ief policy
B2C Authorization get admin flows list
B2C Authorization delete trustframework policy from store
B2C Authorization adminpolicydatas-getresources
B2C Authorization get custom idp
B2C Authorization getb2cuserattributes
B2C Authorization create identityprovider
B2C Authorization getb2cpolicies
B2C Authorization getiefpolicies
B2C Authorization set ssl operation status for the custom domains operations in the tenant
B2C Authorization get resource properties of a tenant
B2C Authorization get policy
B2C Authorization get supported idp list of the user journey
B2C Authorization get user attribute
B2C Authorization delete idp
B2C Authorization create policy
B2C Authorization get tenant details for a user for resource creation
B2C Authorization get localized resource json
B2C Authorization update local idp
B2C Authorization get v1 application
B2C Authorization adminuserjourneys-getresources
B2C Authorization adminuserjourneys-setresources
B2C Authorization get trustframework policy
B2C Authorization verify if b2c feature is enabled
B2C Authorization gets the type of tenant
B2C Authorization get certificates
B2C Authorization getiefpolicy
B2C Authorization user authorization: user granted access as 'tenant admin'
B2C Authorization delete identityprovider
B2C Authorization update custom idp
B2C Authorization delete policy
B2C Authorization getkeyset
B2C Authorization create a new adminuserjourney
B2C Authorization enable b2c feature
B2C Authorization retrieve v2 application service principals in the current tenant
B2C Authorization get tenant allowed features
B2C Authorization get idp
B2C Authorization get v2 applications
B2C Authorization get the default supported culture for cpim
B2C Authorization get allowed self-asserted claims of policy
B2C Authorization create user attribute
B2C Authorization update idp
B2C Authorization update v2 application
B2C Authorization get list of tenants for a user
B2C Authorization create v2 application
B2C Authorization delete a cpim key container
B2C Authorization add a key based on ascii secret to a cpim key container
B2C Authorization move resources
B2C Authorization get the list of userjourneys for this tenant
B2C Authorization get user attributes
B2C Authorization get list of all admin flows
B2C Authorization getidentityproviders
B2C Authorization restore a cpim key container backup
B2C Authorization create v1 application
B2C Authorization creates or update an new adminuserjourney
B2C Authorization get and download certificate
B2C Authorization link inputclaim
B2C Authorization gettenants
B2C Authorization patch identityprovider
B2C Authorization get list of policies
B2C Authorization user authorization: api is disabled for tenant featureset
B2C Authorization get trustframework ids from store
B2C Authorization createtrustframeworkpolicy
B2C Authorization get idps for a specific admin flow
B2C Authorization delete v1 application
B2C Authorization authorization: the action is not allowed to make changes to config tenant
B2C Authorization migratetenantmetadata
B2C Authorization user authorization: user login tenant is different from target tenant
B2C Authorization get a b2c drectory resource
B2C Authorization get available output claims list
B2C Authorization verify if feature is enalbed
B2C Authorization get policies
B2C Authorization get tenant list
B2C Authorization get tenant info
B2C Authorization retrieve v2 application permissions grants
B2C Authorization get content definitions for user journey
B2C Authorization get b2c directory resources in a subscription
B2C Authorization get local accounts' self-asserted claims
B2C Authorization get supported idp list
B2C Authorization create trustframework policy
B2C Authorization update policy
B2C Authorization delete trustframework policy
B2C Authorization delete user attribute
B2C Authorization update subscription status
B2C Authorization delete v2 application permission grant
B2C Authorization update v2 application permission grant
B2C Authorization upload a cpim encrypted key
B2C Authorization add a key to a cpim key container
B2C Authorization get v1 applications
B2C Authorization get a user journey
B2C Authorization update v1 application
B2C Authorization user authorization: tenantid parameter is missing in request
B2C Authorization delete certificate
B2C Authorization create b2cuserattribute
B2C Authorization link outputclaim
B2C Authorization create ief policy
B2C Authorization getb2cpolicy
B2C Authorization get certificate
B2C Authorization get trustframework policy as xml from store
B2C Authorization get a specific admin flow
B2C Authorization adminpolicydatas-setresources
B2C Authorization get admin policy
B2C Authorization puttrustframeworkpolicy
B2C Authorization getidentityprovider
B2C Authorization gettrustframeworkpolicy
B2C Authorization create new custom idp
B2C Authorization get tenantdomains
B2C Authorization remove a user journey
B2C Authorization create or update a b2c directory resource
B2C Authorization get v1 and v2 applications
B2C Authorization get operations of microsoft.azureactivedirectory resource provider
B2C Authorization get v2 application
B2C Authorization get allowed self-asserted claims for user journey
B2C Authorization update user attribute
B2C Authorization gets list of key containers in the tenant
B2C Authorization delete v2 application
B2C Authorization get key container active key metadata in jwk
B2C Authorization create localized resource json
B2C Authorization get a list of custom domains in the tenant
B2C Authorization update a b2c directory resource
B2C Authorization get tenant defined custom idp list
B2C Directory enable b2c feature
B2C Directory get a list of custom domains in the tenant
B2C Directory get resource properties of a tenant
B2C Directory create a custom domains in the tenant
B2C Directory gettenantprovisioninginfo
B2C Directory set ssl operation status for the custom domains operations in the tenant
B2C Directory gets the type of tenant
B2C Directory get tenant list
B2C Directory verify if feature is enalbed
B2C Directory get tenant info
B2C Directory get tenant allowed features
B2C Directory verify if b2c feature is enabled
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid idtokensigningkeycontainer
B2C Key list all keys
B2C Key gets a cpim key container in jwk format
B2C Key add a key based on ascii secret to a cpim key container
B2C Key maintenance key container. revoke first true, revoke last false, cleanup true, operation 'undefined', kid undefined
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid twaj4qpb-l30fa0kc3nuaesy_z6ukvptiwvvyine-cw
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid j-yzdgvppiwfgjsgdmsucbcisdegkllfksiz51ulejs
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid x7kahnrq5gnu4eujwqqot_1jhlchwcetleimhdkdywg
B2C Key write new generated key container
B2C Key maintenance key container. revoke first false, revoke last false, cleanup false, operation 'rollback', kid undefined
B2C Key gets list of key containers in the tenant
B2C Key get key container active key metadata in jwk
B2C Key upload a cpim encrypted key
B2C Key gets cpim key as a certificate
B2C Key get certificates
B2C Key delete key container
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid key0
B2C Key add a key to a cpim key container
B2C Key get and download certificate
B2C Key create certificate
B2C Key save key container
B2C Key restore a cpim key container backup
B2C Key delete certificate
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid t8zpabofkcj9b-nfjzzyiikjgsjaka2p08ykwry_1ao
B2C Key maintenance key container. revoke first false, revoke last false, cleanup true, operation 'revoke', kid idtokensigningkeycontainer.v2
B2C Key get key container metadata
B2C Key get certificate
B2C Key change protection scheme
B2C Key delete a cpim key container
B2C Other issue an authorization code to the application
B2C Other issue an id_token to the application
B2C Resource recoverarchivedtenantwithtenantobjectid
B2C Resource gettenants
B2C Resource linkidentityprovider
B2C Resource link outputclaim
B2C Resource link inputclaim
B2C Resource getb2cpolicies
B2C Resource put ief policy
B2C Resource patch identityprovider
B2C Resource get admin flows list
B2C Resource get admin policy
B2C Resource delete trustframework policy from store
B2C Resource createtrustframeworkpolicy
B2C Resource adminuserjourneys-removeresources
B2C Resource getiefpolicies
B2C Resource get tenant defined idp list
B2C Resource get tenant defined local idp list
B2C Resource get supported idp list
B2C Resource create new idp
B2C Resource get the default supported culture for cpim
B2C Resource create trustframework policy
B2C Resource delete trustframework policy
B2C Resource create policy
B2C Resource get tenant details for a user for resource creation
B2C Resource get the list of userjourneys for this tenant
B2C Resource getidentityprovider
B2C Resource update custom idp
B2C Resource gettenantinfo
B2C Resource getkeyset
B2C Resource create identityprovider
B2C Resource get the details of an admin flow
B2C Resource create or update a b2c directory resource
B2C Resource get idp
B2C Resource get allowed application claims for user journey
B2C Resource get allowed self-asserted claims of policy
B2C Resource get allowed self-asserted claims for user journey
B2C Resource create user attribute
B2C Resource update idp
B2C Resource update user attribute
B2C Resource update subscription status
B2C Resource get b2c directory resources in a resource group
B2C Resource create localized resource json
B2C Resource validate move resources
B2C Resource get localized resource json
B2C Resource update a b2c directory resource
B2C Resource adminuserjourneys-getresources
B2C Resource get user attributes
B2C Resource create trustframework policy to store
B2C Resource create b2cuserattribute
B2C Resource deleteinputclaim
B2C Resource deleteoutputclaim
B2C Resource deleteidentityprovider
B2C Resource gettrustframeworkwithouttenantobjectid
B2C Resource get trustframework policy as xml from store
B2C Resource get list of policies
B2C Resource get a specific admin flow
B2C Resource get list of tags for all admin flows for all users
B2C Resource gettrustframeworkpolicy
B2C Resource create new custom idp
B2C Resource migratetenantmetadata
B2C Resource creates or update an new adminuserjourney
B2C Resource get a b2c drectory resource
B2C Resource get operations of microsoft.azureactivedirectory resource provider
B2C Resource get b2c directory resources in a subscription
B2C Resource get policy
B2C Resource get local accounts' self-asserted claims
B2C Resource get supported idp list of the user journey
B2C Resource update policy
B2C Resource get trustframework policy as xml
B2C Resource get tenant defined custom idp list
B2C Resource update local idp
B2C Resource create trustframework policy with configurable prefix
B2C Resource get tenant policy list
B2C Resource getb2cpolicy
B2C Resource delete identityprovider
B2C Resource create admin policy
B2C Resource adminpolicydatas-setresources
B2C Resource get trustframework ids from store
B2C Resource adminpolicydatas-getresources
B2C Resource delete policy
B2C Resource getkeysets
B2C Resource getidentityproviders
B2C Resource get idps for a specific admin flow
B2C Resource get list of all admin flows
B2C Resource remove a user journey
B2C Resource delete a b2c directory resource
B2C Resource delete idp
B2C Resource get list of tenants for a user
B2C Resource get trustframework policy
B2C Resource getinputclaims
B2C Resource getb2cuserattributes
B2C Resource updatetrustframeworkswithtenantobjectid
B2C Resource create ief policy
B2C Resource getiefpolicy
B2C Resource adminpolicydatas-removeresources
B2C Resource get custom idp
B2C Resource puttrustframeworkpolicy
B2C Resource create a new adminuserjourney
B2C Resource get available output claims list
B2C Resource get policies
B2C Resource get content definitions for user journey
B2C Resource get the set of available supported cultures for cpim
B2C Resource get user attribute
B2C Resource delete user attribute
B2C Resource move resources
B2C Resource adminuserjourneys-setresources
B2C Resource get a user journey
B2C Resource get user journey list
Core Directory Application add service principal
Core Directory Application update service principal
Core Directory Application update application
Core Directory Application remove service principal
Core Directory Application delete application
Core Directory Application add service principal credentials
Core Directory Application remove app role assignment from service principal
Core Directory Application remove owner from application
Core Directory Application consent to application
Core Directory Application add application
Core Directory Application add owner to service principal
Core Directory Application remove oauth2permissiongrant
Core Directory Application add oauth2permissiongrant
Core Directory Application add app role assignment to service principal
Core Directory Application remove service principal credentials
Core Directory Application remove owner from service principal
Core Directory Application add owner to application
Core Directory Application revoke consent
Core Directory Device add registered owner to device
Core Directory Device add registered users to device
Core Directory Device update device configuration
Core Directory Device remove registered owner from device
Core Directory Device delete device configuration
Core Directory Device update device
Core Directory Device add device
Core Directory Device add device configuration
Core Directory Device remove registered users from device
Core Directory Device delete device
Core Directory Directory update domain
Core Directory Directory remove partner from company
Core Directory Directory remove verified domain
Core Directory Directory add unverified domain
Core Directory Directory add verified domain
Core Directory Directory set dirsyncenabled flag
Core Directory Directory set directory feature on tenant
Core Directory Directory create company settings
Core Directory Directory update company settings
Core Directory Directory set company allowed data location
Core Directory Directory delete company settings
Core Directory Directory set company multinational feature enabled
Core Directory Directory update external secrets
Core Directory Directory set rights management properties
Core Directory Directory update company
Core Directory Directory verify domain
Core Directory Directory remove unverified domain
Core Directory Directory set domain authentication
Core Directory Directory set password policy
Core Directory Directory add partner to company
Core Directory Directory promote company to partner
Core Directory Directory set partnership
Core Directory Directory set accidental deletion threshold
Core Directory Directory demote partner
Core Directory Directory set company information
Core Directory Directory set federation settings on domain
Core Directory Directory create company
Core Directory Directory verify email verified domain
Core Directory Directory set dirsync feature
Core Directory Directory purge rights management properties
Core Directory Group add app role assignment to group
Core Directory Group start applying group based license to users
Core Directory Group delete group settings
Core Directory Group remove member from group
Core Directory Group set group license
Core Directory Group create group settings
Core Directory Group add member to group
Core Directory Group add group
Core Directory Group update group
Core Directory Group add owner to group
Core Directory Group finish applying group based license to users
Core Directory Group remove app role assignment from group
Core Directory Group set group to be managed by user
Core Directory Group delete group
Core Directory Group remove owner from group
Core Directory Group update group settings
Core Directory Policy update policy
Core Directory Policy add policy to service principal
Core Directory Policy delete policy
Core Directory Policy remove policy credentials
Core Directory Policy remove policy from service principal
Core Directory Policy add policy
Core Directory User update role
Core Directory User add role from template
Core Directory User update user
Core Directory User delete user
Core Directory User add user
Core Directory User convert federated user to managed
Core Directory User create application password for user
Core Directory User set license properties
Core Directory User restore user
Core Directory User remove member from role
Core Directory User remove app role assignment from user
Core Directory User remove scoped member from role
Core Directory User change user license
Core Directory User change user password
Core Directory User reset user password
Core Directory User add app role assignment grant to user
Core Directory User add member to role
Core Directory User set user manager
Core Directory User delete application password for user
Core Directory User update user credentials
Core Directory User add scoped member to role
Identity Protection Directory update alert settings
Identity Protection Directory update weekly digest settings
Identity Protection Directory onboarding
Identity Protection Other set user risk policy
Identity Protection Other download a single risk event type
Identity Protection Other set mfa registration policy
Identity Protection Other download all risk event types
Identity Protection Other download users flagged for risk
Identity Protection Other download free user risk events
Identity Protection Other admin dismisses/resolves/reactivates risk event
Identity Protection Other set sign-in risk policy
Identity Protection Other download admins and status of weekly digest opt-in
Identity Protection Policy set mfa registration policy
Identity Protection Policy set sign-in risk policy
Identity Protection Policy set user risk policy
Identity Protection User admin generates a temporary password
Identity Protection User admins requires the user to reset their password
Invited Users Other batch invites processed
Invited Users Other batch invites uploaded
Invited Users User viral tenant creation
Invited Users User invite external user
Invited Users User email not sent, user unsubscribed
Invited Users User assign external user to application
Invited Users User redeem external user invite
Invited Users User viral user creation
MIM Service Group create group
MIM Service Group remove member
MIM Service Group add member
MIM Service Group delete group
MIM Service Group update group
MIM Service User user password registration
MIM Service User user password reset
Self-service Group Management Group delete a pending request to join a group
Self-service Group Management Group set dynamic group properties
Self-service Group Management Group update lifecycle management policy
Self-service Group Management Group approve a pending request to join a group
Self-service Group Management Group request to join a group
Self-service Group Management Group create lifecycle management policy
Self-service Group Management Group reject a pending request to join a group
Self-service Group Management Group cancel a pending request to join a group
Self-service Group Management Group renew group
Self-service Password Management User reset password (self-service)
Self-service Password Management User unlock user account (self-service)
Self-service Password Management User reset password (by admin)
Self-service Password Management User self-serve password reset flow activity progress
Self-service Password Management User change password (self-service)
Self-service Password Management User user registered for self-service password reset
Self-service Password Management User blocked from self-service password reset
Terms Of Use Policy decline terms of use
Terms Of Use Policy accept terms of use
Terms Of Use Policy edit terms of use
Terms Of Use Policy unpublish terms of use
Terms Of Use Policy create terms of use
Terms Of Use Policy publish terms of use
Terms Of Use Policy delete terms of use

Expanded Data and Analytics technical journey – 5 new technical services added!


Leverage the new one-to-one consultations and technical webinar now available, focused on data platform modernization, Power BI and SQL. These technical engagements are designed to help you build your Data and AI technical skillset, so you can better sell and deploy customer solutions around these technologies.

Data Platform Modernization Starter Kit Consultation (5 partner advisory hours; L100-200)
  • Enable Data Platform Modernization with specific planning guidance for common data scenarios. Receive personalized technical guidance on modern data strategies, choosing a path to modernization, building an agile data analytics solution and transforming insights into action. Engage with Microsoft experts to help build your data platform strategy using Azure Data Services and SQL Server 2017. You’ll walk away with an understanding of cost estimation, reference architecture documentation and sample architectures.
Data Platform Modernization Presales Consultation (unlimited access at no cost; L200-300)
  • Receive technical guidance as you take your customers through the features, best practices and sample scenarios when moving their database to the cloud. During this one-on-one consultation with a Microsoft expert, you’ll learn about data migration to SQL deployment, tools to check compatibility and using SQL Server integration services. You’ll walk away with a better understanding of how to propose the right solutions and mitigate potential issues for data platform modernization deals.
Data Platform Modernization Deployment Consultation (5 partner advisory hours; L300-400)
  • Understand the technical requirements and steps required to deploy Data Platform Modernization solutions. Let our Partner Technical Consultants help ensure a smooth deployment process during this one-to-one consultation. We’ll guide you through scenarios that you may encounter along with an overview of the various deployment options available.
Power BI Presales Consultation (unlimited access at no cost; L200-300)
  • Grow and build your Power BI (Business Intelligence) practice, enabling you to offer new solutions and services to customers. Transform your customer business with predictive analytics, data visualization, and real-time intelligence. Prepare your customers for their Power BI deployments, by understanding scenarios and data source options. With Power BI, you can transform your customers’ data into rich visuals they can collect and organize, helping them focus on what matters most to their businesses.
Technical Deep Dive on SQL Server (Unlimited access at no cost; L200-300 - Events coming later this month)
  • Prepare your customers for their SQL deployments, by understanding common scenarios and guidelines on migrating to on-premises databases. You’ll receive technical guidance on migrating to SQL Server, cloud services (IaaS or PaaS) and begin to understand SQL Server on Linux. You’ll learn how to increase the database security and keep your data safe with business continuity solutions. With the help of Partner Technical Consultations, you’ll receive a guided walk through of the features and functionality within SQL on both on-premises and cloud side, helping you to gain more customers.

 

Next steps: Access the full suite of technical webinars and one-on-one consultations available for the Data Platform and Analytics technical journey by visiting https://aka.ms/DataAITechJourney.

ICYMI: Recent Microsoft AI Updates, Including in Custom Speech Recognition, Voice Output, and Video Indexing


A quick recap of three recent posts about Microsoft AI platform developments – just in case you missed it.

1. Start Building Your Own Code-Free, Custom Speech Recognition Models

Microsoft is at the forefront of speech recognition, having reached human parity on the Switchboard research benchmark. This technology is truly capable of transforming our daily lives, as it indeed already has started to, be it through digital assistants, or our ability to dictate emails and documents, or via transcriptions of lectures and meetings. These scenarios are possible thanks to years of research and recent technological jumps enabled by neural networks. As part of our mission to empower developers with our latest AI advances, we now offer a spectrum of Cognitive Services APIs, addressing a range of developer scenarios. For scenarios that require the use of domain specific vocabularies or the need to navigate complex acoustic conditions, we offer the Custom Speech Service which lets developers automatically tune speech recognition models to their needs.

As an example, a university may be interested to accurately transcribe and digitize all their lectures. A given lecture in biology, to cite one example, may include a term such as "Nerodia erythrogaster". Although extremely domain-specific, terms like these are nevertheless critically important to detect accurately in order to transcribe these sessions right. It is also important to customize acoustic models to ensure that the speech recognition system remains accurate in the specific environment where it will be deployed. For instance, a voice-enabled app that will be used on a factory floor must be able to work accurately despite persistent background noise.

The Custom Speech Service enables acoustic and language model adaptation with zero coding. Our user interface guides you through the entire process, including data importation, model adaptation, evaluation and optimization by measuring word error rates, and tracking improvements. It also guides you through model deployment at scale, so models can be accessed by your apps running on any number of devices. Creating, updating, and deploying models takes only minutes, making it easy to build and iteratively improve your app. To start building your own speech recognition models, start from the original blog post here – see why so many Microsoft customers are previewing these services across a wide range of scenarios and environments today.


2. Use Microsoft Text-to-Speech for Voice Output in 34 Languages

With voice becoming increasingly prevalent as a mode of interaction, the ability to provide voice output (or Text-to-Speech, aka TTS), is becoming a critical technology in support of AI scenarios. The Speech API, a Microsoft Cognitive Service, now offers six additional TTS languages to all developers, namely Bulgarian, Croatian, Malay, Slovenian, Tamil and Vietnamese. This brings the total number of available languages to 34. Powered by the latest AI technology, these 34 languages are available across 48 locales and 78 voice fonts.

Developers can access the latest generation of speech recognition and TTS models through a single API. Developers can use the Text-to-Speech API across a broad swathe of use cases, either on its own – for accessibility or hands-free communication and media consumption, for instance – or in combination with other Cognitive Services APIs such as Speech to Text or Language Understanding to create comprehensive voice driven solutions. Learn more about this exciting development at the original blog post here.

3. Gain Richer Insights on Your Video Recordings

There's a misconception that AI for video is simply extracting frames from a video and running computer vision algorithms on each video frame. While one could certainly take that approach, it generally does not help us get to deeper, richer insights. Video presents a new set of challenges and opportunities for optimization and insights that makes this space quite different from processing a sequence of images. Microsoft's solution, the Video Indexer, implements several such video-specific algorithms.

To take one such example, consider the situation of detecting the people present in a video. Such people will present their heads and faces in different poses, and they are likely to appear under varying lighting conditions as well. In a frame-based approach, we would end up with a list of possible matches from a face database but with different confidence values. Some of these matches may not be the same across a sequence of frames even when it was the very same person in the video the entire time. There is a need for an additional logic layer to track a person across frames, evaluate the variations, and determine the true matching face. There is also an opportunity for optimization whereby we can reduce the number of queries we make by selecting an appropriate subset of frames to query against the face recognition system. These are capabilities provided with the Video Indexer, resulting in higher quality face detection.

Next, take the example of tracking multiple people present in a video and displaying their presence (or absence) in a timeline view. Simple face detection on each video frame will not get us to a timeline view of who was present during which part of a given video – that requires tracking faces across frames, including accounting for side views of faces and other variations. Video Indexer does this sort of sophisticated face tracking, and as a result you can see full timeline views on a video.


Similarly, videos offer an opportunity for extracting potentially relevant topics or keywords via optical character recognition – for instance, from signage and brands that appear in the backdrop of a given video. However, if we process a video as a sequence of stills, we will often end up with many partial words, since such signage may be partially obscured in specific frames. Extracting the right keywords across the sequence of frames requires applying algorithms on top of those partial words. This, again, is something the Video Indexer does, thus yielding better insights. Learn more about the Video Indexer from the original blog post here.
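
To illustrate the idea (this is not the Video Indexer implementation, just a sketch with made-up signage text): collect the OCR tokens from the sampled frames, fold partial reads into the longest variant that contains them, and keep only the words with enough support across frames.

from collections import Counter

# OCR tokens per sampled frame; partially obscured signage yields fragments.
frames = [
    ["CONT", "OSO", "COFFEE"],
    ["CONTOSO", "COFFEE"],
    ["CONTOS", "COFFEE"],
    ["COFFEE"],
]

def merge_keywords(frames, min_frames=2):
    """Fold fragments into the longest variant that contains them, count the
    combined support across frames, and keep words seen often enough."""
    counts = Counter(tok for frame in frames for tok in frame)
    tokens = sorted(counts, key=len, reverse=True)
    merged = {}
    for tok in tokens:
        # Attach a fragment to an already-accepted longer word it appears in.
        parent = next((w for w in merged if tok != w and tok in w), None)
        if parent:
            merged[parent] += counts[tok]
        else:
            merged[tok] = counts[tok]
    return [w for w, support in merged.items() if support >= min_frames]

print(merge_keywords(frames))  # -> ['CONTOSO', 'COFFEE']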


ML Blog Team

Microsoft Next Up Exam Prep for Managing Office 365 Identities & Requirements and Enabling Office 365 Services



Tim Tetrick

 

Microsoft Next Up is a structured step-by-step exam prep course spanning a 5-week period that will remove roadblocks and set you up for success in achieving certification for one of these exams:

70-346: Managing Office 365 Identities and Requirements

  • Course starts February 26, 2018
  • Exam Day March 26, 2018

70-347: Enabling Office 365 Services

  • Course starts February 26, 2018
  • Exam Day March 26, 2018

Cost to register is $299 USD and includes a practice test, a 100% off exam voucher, and a complimentary retake. Registration will close when all seats have been committed.

Ready to get started?

Choose the exam that aligns with your goals and register today! Learn more and register

Once registered, you will receive a welcome email from US Partner Readiness Support that guides you through next steps. Courses begin February 26, 2018 with weekly checkpoints to help keep you on track. Exam Day begins at 9:00AM ET with a virtual Exam Cram before participants head into the exam in the afternoon.

Microsoft Next Up has been built for IT professionals looking to extend their skills and build on existing technical knowledge. Next Up is not recommended for anyone new to cloud technologies.

We look forward to your participation at this training event. If you have questions regarding the event, please email pldssup@microsoft.com.

 

What’s included?

Guided study paths

Full access to an online Learning Portal for five (5) weeks leading up to Exam Day. Each week we’ll guide you through video content and labs in a structured learning path -- all available online anytime.

Online community to ask questions and connect

Collaborate with peers to deepen your learning. Reinvent the way you study with Yammer – connect in a virtual classroom to share your journey, receive support from like-minded students, ask questions, and get answers from Microsoft certified experts.

Exam Day

Get in the zone with a virtual exam cram in the morning. Delivered over Skype and facilitated by Microsoft Certified Trainers, you'll review the exam topics and receive invaluable test-taking techniques in a fast-paced 4-hour session. Then head out to a test center to sit for the exam in the afternoon, or grab a quick break before settling into an online proctored exam from the comfort of your own desk.

The registration fee includes a practice test, a 100% off exam voucher, and a complimentary retake.

Recognition

Wear your achievement with pride. Pass the exam and you'll have verifiable proof – a Microsoft Badge. Share it with your professional network. Bolster your credentials. Discover your salary potential. Elevate your prospects with new employers.

….a redo

Need a redo? No problem. Take advantage of an additional week of access on the learning portal to brush up on the material that stumped you, then schedule a retake – this course has you covered.

SAP on Azure: General Update – January 2018


Author: Cameron – MSFT SAP Program Manager

This post is a translation of SAP on Azure: General Update – January 2018, originally published on January 4, 2018.

 

SAP and Microsoft are continually releasing new features for the Azure cloud platform. This post summarizes the updates, fixes, enhancements, and recommended best practices released over the past few months.

1. M, Dv3 and Ev3 Series VMs Certified for NetWeaver

SAP has certified and now supports three additional VM series for NetWeaver AnyDB workloads. "AnyDB" refers to NetWeaver applications running on SQL Server, Oracle, DB2, Sybase or MaxDB.

Some of these VM series are also in the process of being certified for Hana.

The Dv3 series offers 4 GB of RAM per CPU and is suitable for SAP application servers and small DBMS servers.

The Ev3 series offers 8 GB of RAM per CPU (E2v3 through E32v3), or 432 GB of RAM on the E64v3, and is aimed at larger DBMS workloads.

The M series scales up to 3.8 TB of RAM and 128 CPUs and is suited to very large DBMS workloads.

The three new VM series bring many new capabilities and significantly improved network performance. For more details on Dv3 and Ev3, see this blog post.

The Azure services site shows the rollout status of each VM series per datacenter. Ev3 and Dv3 are available in almost all regions.

New VM types and their SAPS ratings

VM type    CPUs and RAM         SAPS
D2s_v3     2 CPUs, 8 GB         2,178
D4s_v3     4 CPUs, 16 GB        4,355
D8s_v3     8 CPUs, 32 GB        8,710
D16s_v3    16 CPUs, 64 GB       17,420
D32s_v3    32 CPUs, 128 GB      34,840
D64s_v3    64 CPUs, 256 GB      69,680
E2s_v3     2 CPUs, 16 GB        2,178
E4s_v3     4 CPUs, 32 GB        4,355
E8s_v3     8 CPUs, 64 GB        8,710
E16s_v3    16 CPUs, 128 GB      17,420
E32s_v3    32 CPUs, 256 GB      34,840
E64s_v3    64 CPUs, 432 GB      70,050
M64s       64 CPUs, 1,000 GB    67,315
M64ms      64 CPUs, 1,792 GB    68,930
M128s      128 CPUs, 2,000 GB   134,630

The official list of VMs certified for SAP NetWeaver applications is maintained in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types.

The list of SAP cloud benchmark results is here (in English)

The E64v3 benchmark results are here (in English)

The D64v3 benchmark results are here (in English)

The M128 benchmark results are here (in English)

The M128 BW Hana benchmark results are here (in English)

2. SAP Business One (B1) on Hana & SQL Server Certified on Azure

SAP Business One is a popular ERP solution for small and midsize businesses. Many customers currently run SAP B1 on SQL Server, and SAP B1 on SQL Server is now generally available on Azure VMs.

SAP B1 has also been ported to Hana, and the Azure DS14v2 has been certified for approximately 40 users (in English).

Customers running SAP B1 on Azure may be able to reduce costs by using the Browser Access feature of newer SAP B1 versions, because Browser Access removes the need to install the B1 client on a terminal server VM in Azure.

For more information, see the following SAP Notes:
2442627 – Troubleshooting Browser Access in SAP Business One

2194215 – Limitations in SAP Business One Browser Access

2194233 – Behavior changes in Browser Access mode of SAP Business One as compared to Windows desktop mode

For SAP on Azure certifications, see this page.

SAP on Azure documentation is available from this overview page.

3. Managed Disks Recommended for SAP on Azure

In general, Managed Disks are recommended for all new deployments.

Managed Disks automatically distribute the storage of the VMs in an availability set across multiple storage nodes, improving availability while reducing complexity. This avoids the situation where the failure of a single storage node takes down two or more VMs in the same availability set.

Notes:

1. Standard tier Managed Disks are not supported for SAP NetWeaver application servers or DBMS servers, and the Azure host monitoring agent does not support Standard tier Managed Disks.

2. In general, it is recommended not to add data disks when deploying SAP application servers and to install /usr/sap/<SID> on the boot disk. The boot disk can be up to 1 TB, and SAP application servers typically do not need that much capacity or high IOPS.

3. Managed Disks and Unmanaged Disks cannot both be attached to the VMs within a single availability set.

4. Managed Disks cannot be used on SQL Server VMs that place data files directly on blob storage.

5. In general, Premium Managed Disks are recommended for SAP application servers, because a single VM is then covered by the financially backed SLA guaranteeing 99.9% availability (note: the value actually achieved is almost always higher than 99.9%).

6. As documented in SAP Note 2367194 – Use of Azure Premium SSD Storage for SAP DBMS Instance, Premium tier Managed Disks are generally recommended for SAP DBMS servers as well.

For an overview of Managed Disks, see this documentation.

For more details on Managed Disks, see this blog post (in English).

For Azure disk pricing and performance, see this page.

Frequently asked questions are answered on this page.

4. Sybase ASE 16.3 PL2 Always-On Functionality for Azure

As documented in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types, Azure supports Sybase ASE 16 SP2 and higher on both Windows and Linux.

Sybase ASE includes an HA/DR solution called "Always-On". It is very different from the SQL Server AlwaysOn feature, and this HA solution does not require shared disks.

For the Sybase ASE release schedule, see this page (in English).

For overview documentation on the Sybase HA solution, see this page (in English).

According to SAP Note 2410733 – Always-On (HADR) support for 1-to-many replication – SAP ASE 16.0 SP02 PL05, SAP supports multiple replica databases.

Sybase Always-On does not require an Azure Internal Load Balancer and is considerably simpler to install on Azure; in a typical configuration no internal load balancer needs to be set up. Sybase 16 SP3 PL2 also introduces new features for SAP customers.

If you have questions about setting up Sybase on Azure or find inconsistencies in the documentation, please open an OSS message on BC-DB-SYB.

5. Resource Groups, Tags, Role-Based Access Control, Billing, VNets, NSGs, UDRs and Resource Locks

Before deploying to Azure, you first need to design the core "foundation services" and configuration that will support the IaaS and PaaS resources you run on Azure.

The full set of design and configuration recommendations is far more than can be covered in one post, so here is an overview plus answers to common questions for customers planning Azure deployments for SAP landscapes.

1. Resource groups allow monitoring, access control, provisioning and billing management of the various Azure objects. SAP customers often deploy one resource group per environment, such as sandbox, development, quality assurance and production, which makes it easy to separate billing by environment. If a clone of the production environment is needed to test a new business process, built-in Azure capabilities can create the clone and copy it into a new resource group named, for example, "Project"; monthly charges are tracked per consuming department and only actual usage is billed.

2. Azure tags attach detailed attributes to individual VMs and other Azure objects. Tags applied to a VM might include "ECC 6.0" or "NetWeaver Application Server". Tags enable fine-grained billing management and security management through role-based access control. You can also query on tags to distinguish SAP VMs from other VMs, or to identify which VMs are application servers and which are DBMS servers.

3. Role-based access control makes it possible to separate duties, restrict the administrative rights of teams such as the SAP Basis team, and build a fine-grained security model. The Basis team is typically delegated broad rights over Azure IaaS, because they need to create and modify VMs and many other Azure resources, but they are usually not allowed to create or modify VNets or other network-level resources.

4. Billing offers greater cost transparency than on-premises solutions. Azure resource groups and tags break the monthly Azure charges down per SAP system and environment, which enables chargeback for additional project systems or systems requested by individual departments.

5. Azure VNet, NSG and UDR design is usually handled by network specialists rather than the SAP Basis team. The following points should be considered in the design:

a. Traffic between SAP application servers and DBMS servers must not be routed through or inspected by virtual appliances, because SAP is highly sensitive to latency between the application and DBMS tiers. A hub-and-spoke network topology, for example, allows client traffic to be secured and inspected without inspecting SAP traffic.

b. It is not uncommon for the DBMS servers and application servers to be placed in different subnets of the same VNet, with a different NSG applied to each.

c. UDRs should not route traffic back to on-premises proxy servers unnecessarily. Common misconfigurations include SQL Server data files on blob storage being accessed via an on-premises proxy server (which severely degrades performance), and HTTP(S) connections between SAP applications being routed back through an on-premises proxy server.

A hub-and-spoke network topology is now the common pattern.

https://docs.microsoft.com/ja-jp/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/ja-jp/azure/virtual-network/virtual-networks-nsg

The following blog is also useful:
https://blogs.msdn.microsoft.com/igorpag/2016/05/14/azure-network-security-groups-nsg-best-practices-and-lessons-learned/ (in English)

https://docs.microsoft.com/ja-jp/azure/virtual-network/virtual-networks-udr-overview

6. Azure resource locks prevent Azure objects such as VMs and storage from being deleted accidentally. It is recommended to create the required Azure resources at the start of a project. Once all additions, moves and changes are complete and the Azure deployment is stable, lock all resources; from then on, only a super administrator can unlock a resource and delete a VM or other object.

https://blogs.msdn.microsoft.com/cloud_solution_architect/2015/06/18/lock-down-your-azure-resources/ (in English)

These best practices are far easier to implement before a system goes live. As shown in the figure below, Azure objects such as VMs can be moved between subscriptions and between resource groups (full support for Managed Disks is expected early in 2018; until then, the VHD files of a Managed Disks VM can be downloaded using the [Export] button in the Azure portal).

https://docs.microsoft.com/ja-jp/azure/virtual-machines/windows/move-vm

https://docs.microsoft.com/ja-jp/azure/azure-resource-manager/resource-group-move-resources (see the "Virtual Machines limitations" section)


6. AzCopy Released for Linux

AzCopy is a popular utility for copying blob objects within Azure and for uploading or downloading objects between on-premises systems and Azure. It is used, for example, to upload R3load dump files from on-premises to Azure during UNIX/Oracle to Win/SQL on Azure migrations.

AzCopy has now been released for the Linux platform. The utility requires .NET Core 2.0 for Linux to be installed.

AzCopy for Windows is available from this page. To improve AzCopy throughput, specify the /NC:<xx> parameter; depending on bandwidth and connection latency, a value of 16 to 32 greatly increases throughput. Values above 32 may saturate the Internet connection.
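
For reference, here is a hypothetical invocation of the Windows AzCopy build with the /NC parameter, wrapped in Python only so it can be scripted. The local path, storage account, container and key are placeholders, and /NC:24 is simply a mid-range starting point per the guidance above.

import subprocess

# Placeholders - substitute your own values.
source_dir = r"C:\export\R3load_dumps"
dest_url = "https://<storageaccount>.blob.core.windows.net/<container>"
account_key = "<storage-account-key>"

# /NC controls the number of concurrent operations; 16-32 is the range
# suggested above, so start around 24 and adjust for bandwidth and latency.
subprocess.run([
    "AzCopy.exe",
    f"/Source:{source_dir}",
    f"/Dest:{dest_url}",
    f"/DestKey:{account_key}",
    "/S",       # copy the directory recursively
    "/NC:24",
], check=True)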

Blobxfer (in English) can be used as an alternative to AzCopy.

7. Read-Only Domain Controllers (RODC): How Secure Are RODCs and DCs on Azure?

Read-only domain controllers are a long-established feature. For details, see this documentation (in English).

For the differences between read-only and writable domain controllers, see this documentation (in English).

Recently, more customers have been considering RODCs on Azure on the assumption that they are more secure.

The security profile of an RODC is very similar to that of a writable domain controller running on Azure and connected to on-premises domain controllers via ExpressRoute. The only difference is the "filtered attribute set": some AD attributes may not be replicated to the RODC (although almost all attributes are replicated).

When securing domain controllers on Azure, and in general, the following recommendations apply:

1. Because an RODC can be queried just like a writable DC, intruders perform so-called "reconnaissance" against Active Directory, hunting for vulnerabilities and poorly protected user accounts. To detect this reconnaissance, it is recommended to deploy IDS and IPS solutions both on Azure and on-premises.

2. Multi-factor authentication can significantly strengthen security. Azure includes a Multi-Factor Authentication (in English) service as standard.

3. Azure Disk Encryption is recommended for the boot disk and for the disks containing the DS database, logs and SYSVOL. This prevents an attacker from cloning the entire VM, downloading the VHD files and starting the RODC or writable DC elsewhere; debugging tools could otherwise be used to steal information from the AD database.

Summary: Deploying an RODC instead of a writable DC does not significantly change the security profile of an Active Directory solution that is connected to the on-premises AD infrastructure via ExpressRoute. Instead, combine security capabilities such as IDS, multi-factor authentication and Azure Disk Encryption to build a secure AD environment. Even with a read-only domain controller, it is essential to combine it with other security mechanisms.

8. Azure Site Recovery: Latest Support Status

Azure Site Recovery is a powerful platform capability that delivers best-in-class disaster recovery at a lower cost than competing solutions.

For how to deploy Azure Site Recovery for SAP applications, read the blog post and whitepaper:

https://docs.microsoft.com/ja-jp/azure/site-recovery/site-recovery-sap

http://aka.ms/asr-sap (in English)

New and existing Azure Site Recovery capabilities include:

1. Azure Disk Encryption encrypts the contents of Azure boot and data disks. Preview support for this capability is starting soon; customers interested in it should contact Microsoft.

2. Support for Storage Spaces and SIOS is now generally available.

3. Support for VMs using Managed Disks will be released shortly.

4. Cross-subscription replication will become available early in 2018.

5. Support for Suse 12.x will arrive in 2018.

For more on ASR and ADE, see the following resources:

https://azure.microsoft.com/ja-jp/blog/tag/azure-site-recovery/ (in English)

https://azure.microsoft.com/ja-jp/services/site-recovery/

https://docs.microsoft.com/ja-jp/azure/security/azure-security-disk-encryption-faq

9. Hana Systems in Non-Production Environments

Hana DBMS servers can be run on hardware and cloud platforms that are not Hana-certified. For details, see SAP Note 2271345 – Cost-Optimized SAP HANA Hardware for Non-Production Usage.

The PowerPoint and Word documents referenced in that SAP Note explain that the "white box" servers commonly used in hyperscale clouds can be used for non-production systems and virtualization solutions.

Non-production Hana systems can therefore be run on Azure VMs.

Note that disaster recovery systems are generally considered production systems, because they may need to run the production workload.

10. Oracle Linux 7.x Certified on Azure for Oracle 11g/12c

The Azure platform supports a wide range of operating systems and databases. SAP has now added support for the Oracle DBMS running on Linux VMs.

The complete list of operating system and database combinations supported on Azure is maintained in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types.

Notes:

1. The combination of SAP, Oracle, Linux and Azure is fully supported and generally available.

2. The Oracle DBMS must be installed on Oracle Linux 7.x.

3. Standalone SAP application servers and engines can run on Oracle Linux 7.x or Windows (check the PAM for details).

4. It is strongly recommended to install the latest Oracle Linux release before starting SWPM.

5. As stated in SAP Note 2015553 – SAP on Microsoft Azure: Support prerequisites, the Linux host monitoring agent must be installed.

6. Customers considering Accelerated Networking on Oracle Linux should contact Microsoft.

7. Running the Oracle DBMS on Suse or RHEL is not supported.

Important SAP Notes and other resources:

https://wiki.scn.sap.com/wiki/display/ORA/Oracle (in English)

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions

2069760 – Oracle Linux 7.x SAP Installation and Upgrade

405827 – Linux: Recommended file systems

2171857 – Oracle Database 12c – file system support on Linux

2369910 – SAP Software on Linux: General information

1565179 – This note concerns SAP software and Oracle Linux

Note: The combination of SAP, Oracle, Windows and Azure is fully supported and generally available (it has been supported for many years, and many customers run multi-terabyte databases on Azure).

11. Oracle 12c Release 2: SAP Certification, Release on Windows Server 2016, and Planned ASM Support on Azure

As announced in SAP Note 2133079 – Oracle Database 12c: Integration in SAP environment, SAP has certified Oracle 12c Release 2 for SAP NetWeaver applications.

In addition to the Linux certification, support for Oracle 12c Release 2 is also starting on Windows Server 2016.

As of December 18, 2017, Oracle Database version 12.2.0.1 (including RDBMS 12.2.0.1, Grid Infrastructure 12.2.0.1 and Oracle RAC 12.2.0.1) is certified for SAP products based on SAP NetWeaver. The minimum initial SAP Bundle Patch (SBP) for RDBMS 12.2.0.1 is SAP12201P_1711 (for Unix) or PATCHBUNDLE12201_1711 (for Windows).

Oracle 12.2.0.1 requires SAP Kernel version 7.21_EXT or higher.

SAP Note 2470660 provides important technical information on using Oracle 12.2.0.1 in SAP environments, covering database installation and upgrade, software downloads, patching, feature support and OS prerequisites.

Oracle features supported with Oracle 12.1 (such as Oracle In-Memory, Oracle Multitenant, Oracle Database Vault and Oracle ILM/ADO) remain supported with 12.2.0.1.

2470660 – Oracle Database Central Technical Note for 12c Release 2 (12.2)

2133079 – Oracle Database 12c: Integration in SAP environment

Microsoft is working on certification of Oracle ASM on Azure, initially targeting the combination of Oracle Linux 7.4 with Oracle 12c R1/R2. A future blog post will provide more details.

998004 – Update of the Oracle Instant Client on Windows

12. Accelerated Networking Recommended for Medium and Large SAP Systems

Accelerated Networking dramatically reduces the latency between two Azure VMs and greatly increases bandwidth.

Accelerated Networking is generally available for both Windows and Linux VMs.

In general, new medium and large SAP projects should deploy Accelerated Networking.

Keep the following points in mind when using Accelerated Networking:

1. Accelerated Networking cannot be enabled on an existing VM; it must be enabled when the VM is created. Note that after a VM is deleted (the boot and data disks are retained by default), a new VM can be created from the same disks.

2. Accelerated Networking is available on most of the newer VM series with 4 or more physical CPUs, such as Ev3, Dv3, M and Dv2 (as of December 2017 – an E8v3 has 4 physical CPUs and 8 hyperthreads).

3. Accelerated Networking is not available on G-series VMs.

4. It provides a significant benefit for SQL Server configurations that store data files directly on blob storage.

5. Suse 12 Service Pack 3 (Suse 12.3) is strongly recommended (Hana certification is in progress as of December 2017). RHEL 7.4 is also recommended. For Oracle Linux, contact Microsoft.

6. One or more Accelerated Networking NICs can be combined with conventional, non-accelerated NICs on the same VM.

7. Placing Azure VNet UDRs or other security and inspection devices between the SAP application servers and the database server is not recommended; this connection must deliver the best possible performance.

8. To test the communication latency from an SAP application server to the database server, run "ABAPMeter" in ABAP report /SSA/CAT.

9. Accelerated Networking is particularly beneficial for inefficient ABAP code and for heavy operations such as large Payroll or IS-Utilities Billing jobs.

For more information on Azure networking, see the following resources:

https://docs.microsoft.com/ja-jp/azure/virtual-network/virtual-network-create-vm-accelerated-networking

https://docs.microsoft.com/ja-jp/azure/virtual-network/virtual-network-optimize-network-bandwidth

https://docs.microsoft.com/ja-jp/azure/virtual-network/virtual-network-bandwidth-testing

https://blogs.msdn.microsoft.com/igorpag/2017/04/06/my-personal-azure-faq-on-azure-networking-v3/ (in English)

https://blogs.technet.microsoft.com/jpitpro/2017/12/27/moving-from-sap-2-tier-to-3-tier-configuration-and-performance-seems-worse/

The following is an example of inefficient ABAP code that has a large network impact. Placing a SELECT statement inside a LOOP construct is poor coding practice. Accelerated Networking can improve the performance of such inefficient ABAP code, but as a rule SELECT statements should not be used inside a LOOP: code like this does not scale, and performance degrades dramatically as the number of iterations grows.
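
The same anti-pattern can be sketched in a language-neutral way. The snippet below uses Python and an in-memory SQLite table purely as an illustration (the table and column names are made up): issuing one SELECT per loop iteration means one network round trip per row against a remote DBMS, whereas a single set-based SELECT fetches everything in one round trip.

import sqlite3

# In-memory table standing in for a remote DBMS; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(500)])
order_ids = list(range(500))

# Anti-pattern: one SELECT per loop iteration. Against a remote database this
# is one network round trip per row, so latency dominates as iterations grow.
total_slow = 0.0
for oid in order_ids:
    row = conn.execute(
        "SELECT amount FROM orders WHERE order_id = ?", (oid,)).fetchone()
    total_slow += row[0]

# Set-based alternative: a single SELECT returns all rows in one round trip.
placeholders = ",".join("?" * len(order_ids))
total_fast = sum(amount for (amount,) in conn.execute(
    "SELECT amount FROM orders WHERE order_id IN (%s)" % placeholders,
    order_ids))

assert total_slow == total_fast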


13. New Azure Features

Many new features and improvements are continually released on the Azure platform.

New features are summarized in this article.

The following new capabilities are particularly interesting for SAP customers:

1. Two different datacenters can communicate over VNet-to-VNet gateway connections. Global VNet peering (in English), currently in preview, can also be used.

2. SoftNAS (in English), which supports the NFS and SMB protocols, can be used for file storage, DIR_TRANS, interfaces and similar purposes.

3. Azure Data Box is useful in datacenter migration scenarios.

4. CIS images (in English) – hardened Windows images are available. They have not been fully tested with all SAP applications, but they can be used for components such as SAP Web Dispatcher and SAP Router.

5. SAP LaMa now has a connector for Azure (SAP Note 2343511 – Microsoft Azure connector for SAP Landscape Management (LaMa)).

6. A future blog post will cover Hana Large Instance networking. In the meantime, this whitepaper (in English) provides more detail on hub-and-spoke networks (in English).

7. Azure service endpoints (in English) remove certain public endpoints and move them into the Azure VNet.

Related links

VMs: Performance (in English) | Troubleshooting boot issues (in English) | Agent | Linux support

Networking: ExpressRoute (in English) | VNet topology (in English) | LB configuration in ARM | VNet-to-VNet VPN | VPN devices | Site-to-site VPN | ILPIP | Reserved IP | Network security

Tools: Installing PowerShell | VPN diagnostics (in English) | Cross-platform CLI (in English) | Azure Resource Explorer | ARM JSON templates (in English) | iPerf (in English) | Azure Diagnostics

Azure security: Overview (in English) | Best practices (in English) | Trust Center | Preview features | Preview support

Other topics

Presentation on Oracle features for Windows for SAP on Windows/Oracle (in English)

Oracle cross-platform transportable tablespaces (in English): an attractive new option for UNIX/Oracle customers considering a move from UNIX platforms to commodity Intel servers.

The figure below illustrates the process of taking a backup on a UNIX Big Endian system and restoring it on an Intel Little Endian system (Windows or Linux).

For details, see SAP Note 552464 – What is Big Endian / Little Endian? What Endian do I have?.
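
To make the byte-order difference concrete, here is a small, self-contained Python sketch (no SAP or Oracle dependency) showing how the same 32-bit value is laid out in Big Endian and Little Endian byte order:

import struct

value = 0x01020304

big = struct.pack(">I", value)     # "Big Endian": most significant byte first
little = struct.pack("<I", value)  # "Little Endian": least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Reading bytes with the wrong byte order yields a different number, which is
# why a cross-endian migration needs an explicit conversion step.
assert struct.unpack("<I", big)[0] == 0x04030201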


Customers who have installed SAP Hana on IBM Power servers sometimes ask a similar question, usually because they are considering migrating away from Hana on Power (a relatively uncommon solution) or running DR in a public cloud. SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery explains how a Hana 2.0 backup taken on IBM Power (Little Endian) can be restored on an Intel-based system.

Content from third-party websites, SAP and other sources reproduced in accordance with Fair Use: criticism, comment, news reporting, teaching, scholarship, and research.

 
