
How to Disable POP/IMAP protocol for all users “By Default” in Office 365?


Our security requirements have us doing a lot of crazy stuff. But one genuine ask is to disable certain protocols that users can use to connect to their mailbox.

One important requirement for most organizations is to disable IMAP and POP protocol access to mailboxes. In Office 365, POP and IMAP are enabled by default for all users.

Currently, we can do this in multiple ways:

  • Use the portal and disable the protocols for each user once created.
  • Use PowerShell connected to Exchange Online and disable them for an individual user.
    • Combine Get- and Set- commands in a single pipeline to disable them for multiple users (see the sketch after this list).
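
For example, a minimal sketch of the bulk approach, assuming an Exchange Online PowerShell session is already open:

# Disable POP and IMAP for every existing mailbox
Get-CASMailbox -ResultSize Unlimited | Set-CASMailbox -ImapEnabled $false -PopEnabled $false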

OK, that is fine. But what if we create new users?

Well, in that case, we need to redo the above steps. The problems with this approach are:

  • All admins need to be made aware of this procedure.
    • New admins must be specifically told to disable the protocols after creating a new mailbox.
  • It is error prone, as the step can be missed (human error).

Wouldn’t it be nice if we could just create a new mailbox and have things set the way we want?

To do this, we need to follow the steps below. First, list the available mailbox plans:

Get-CASMailboxPlan | fl Name

Name : ExchangeOnline-449cbb2645664646

Name : ExchangeOnlineDeskless-3a08d145664564

Name : ExchangeOnlineEnterprise-a0ce8d44545

Name : ExchangeOnlineEssentials-912457c782-7

Now we need to change the mailbox plan so that the IMAP and POP protocols are disabled by default. I am doing this for the Enterprise plan.

Set-CASMailboxPlan ExchangeOnlineEnterprise-a0ce8d44545 -ImapEnabled $false -PopEnabled $false

Now when we create a new mailbox, by default the POP and IMAP protocol for the user will be disabled.
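
To verify, check a newly created mailbox (the user name here is a hypothetical example):

# Confirm both protocols report False for the new mailbox
Get-CASMailbox -Identity newuser@contoso.com | Format-List ImapEnabled, PopEnabled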

–   Praveen Kumar E     

www.Modern365.co.in  

 


Moving to an offering model with more repeatable IP


 

By Kajal Mukherjee, Cloud Solutions Architect and Chris Kahrs, Technology Solutions Professional

System Integrators make up a large part of the Microsoft partner community. These companies focus on various services that involve Microsoft products and technologies to build solutions that meet customer needs. System Integrators with technical expertise in the Microsoft product suite have primarily focused on technology related services. These companies have excelled with their strong technical team capabilities – their technical specialists are experts in many Microsoft and other technology solutions.

Traditionally, this has been an excellent business model for partners and has provided exceptional value to customers. The extended technical team supports specific project needs with the value of the partner being determined primarily by the strength of their technical team and project delivery costs. These partners don’t focus on building repeatable IP. Every project is a new delivery opportunity starting from scratch. It is, however, difficult for partners to differentiate their value to customers following this business model. The customer also assumes the entire risk of project failure as the partner is only responsible for technical expertise of the resources.

Drivers for change

Successful cost management through efficient project delivery has helped many partners build a strong business model. But a partner’s business growth through cost reduction alone can’t continue indefinitely. Partners need to provide business value to customers by offering solutions that help customers increase their revenues.

Moving to a solution offering model opens strong growth opportunities. There is a clear difference between a partner solution offering and a software product. A solution offering is not necessarily a fully baked product. The strength of a systems integrator is its ability to build customer-specific solutions. However, that does not always mean starting from scratch.

Often, a partner builds vast knowledge about an industry or specific business process through solution delivery to many customers. An offering is a quasi-built service for a vertical industry or a horizontal business process where the partner has gained knowledge through the delivery of these solutions. An offering consists of various assets including presentation and marketing materials, technical assets (data model, code snippets, ETL mapping, scripts, etc.), high-level design documents, and more. Assets must be copyrighted and owned by the partner and may not contain any material or data that may be considered customer assets. Microsoft does not assume any liability for solution offerings built by the partner community.

An offering can help an IT services focused partner have a conversation with business stakeholders based on the value of the offering. There are many Microsoft partners who have expertise in various technologies, such as Microsoft Dynamics 365, Microsoft Power BI, and the extract, transform, load (ETL) process. It is not easy for a partner to differentiate their value with technical expertise alone. However, it is relatively easy for the same partner to have a conversation with business users regarding call center analytics offerings built using Dynamics 365, ETL, and Power BI. The partner value proposition is then considered in top-line revenue terms instead of IT bottom-line cost reduction alone.

How to get started

Successful partners crossing this divide often combine these strategies:

  1. Continue doing Time and Material work. Solution offerings are a process, and continuing on a high growth path helps fund the transition to a more vertical approach.
  2. Rely on the expertise they have in house to build their first few offerings. If they have done projects that require similar expertise as solution offerings they are looking to build, they gain valuable experience and knowledge of the offering and vertical.

Offerings are also very appealing to customers as they provide the opportunity to engage with partners who have experience in building solutions focused on growth. It significantly reduces the risk associated with project delivery and shortens the overall project delivery timeline. Also, offerings help partners engage Microsoft sales teams to reach customers outside of their own sales territory.

Some of the offerings developed by Microsoft system integration partners have been successful in various industries like Retail/CPG, Manufacturing, Life Science, and Financial Services. Many of these offerings are available through AppSource. If you’re interested in offering centric solutions using Microsoft Azure data and analytics capabilities, consider additional services such as hosting solutions on Azure on behalf of customers through the Cloud Solution Provider program.

Engage your local partner teams and work with them to learn more about how solution offerings can help grow your business for many years to come.

Data Platform, Intelligence, and Analytics Partner Community

Pre-Upgrade tasks for SCSM 2012 R2 to 2016


I thought I would share a few things I do before upgrading a 2012 R2 environment to 2016.

For reference, the upgrade steps for SCSM 2016 are here: https://docs.microsoft.com/system-center/scsm/upgrade-environment. However, before you upgrade to SCSM 2016, I highly recommend you check a few things to try to catch anything that may cause the upgrade to fail. The first is discussed here: https://blogs.technet.microsoft.com/servicemanager/2016/08/03/scsm-2016-upgrade-steps-for-custom-development/ That post outlines what needs to be changed in custom solutions for them to work in SCSM 2016. I did this recently for a solution from the TechNet Gallery and it was pretty simple. Make sure that you plan for each of your custom add-ons to SCSM, as every free tool, portal, and custom solution will need to be upgraded. Check with the developers of your tools (if not yourself) to see if they have a 2016-compatible version (most do).

After verifying that you have the updated tools available, I usually run a few SQL queries to see if anything in the ServiceManager or DWStagingAndConfig database is stuck. If there is a problem now, it will get worse with an upgrade (general rule of thumb). If you get a result from any of these queries, figure out why: post a comment, look it up online, or open a case.

Run these queries against the ServiceManager database and DWStagingAndConfig database:

 

SELECT * FROM DeploySequenceView WHERE DeploymentStatusId != 6

SELECT * FROM DeploySequenceStaging

SELECT * FROM DeployItemStaging

SELECT *
  FROM infra.process p
  LEFT JOIN infra.Batch b ON p.ProcessId = b.ProcessId
  LEFT JOIN infra.WorkItem w ON w.BatchId = b.BatchId
 WHERE w.StatusId NOT IN (3, 6)

 

After checking the DB, check the OperationsManager event log on the workflow server. If you see errors like 33880 or 33333, figure out why they are happening. These and several other errors can indicate there is something corrupt in SCSM (like an SLO that hasn’t been created through the console). Address these errors before upgrading. Lastly, MAKE SURE YOU HAVE YOUR RECOVERY KEYS!!!!!
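
If you prefer PowerShell over Event Viewer, here is a quick sketch that looks for those event IDs on the workflow server (the event log name is "Operations Manager"):

# Pull the most recent occurrences of the event IDs called out above
Get-WinEvent -FilterHashtable @{ LogName = 'Operations Manager'; Id = 33880, 33333 } -MaxEvents 50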

Hopefully this will help a few companies who would have otherwise had to implement a disaster recovery because the upgrade failed.

-Chris Howie

What’s new for US partners the week of June 12, 2017


Find out what’s new for Microsoft partners. We’ll connect you to resources that help you build and sustain a profitable cloud business, connect with customers and prospects, and differentiate your business. Read previous issues of the newsletter and get real-time updates about partner-related news and information on our US Partner Community Twitter channel.

You can subscribe to receive posts from this blog in your email inbox or as an RSS feed.

Looking for partner training courses, community calls, and information about technical certifications? Read our MPN 101 blog post that details your resources, and refer to the Hot Sheet training schedule for a six-week outlook that’s updated regularly as we learn about new offerings. Monthly recaps of the US Partner Community calls and blog posts are also available.

To stay in touch with me and connect with other partners and Microsoft sales, marketing, and product experts, join our US Partner Community on Yammer and see other options to stay informed.

Top stories

Announcing the Microsoft Inspire US Partner Awards

Get secure and stay secure with Microsoft

Save up to 50 percent on select cloud certification exams, through June 15

New virtual workshops series: Build a cloud ready partner business

Build your managed services business with the Cloud Solution Provider program

Our guide to finding and taking the technical training you need

What to know about how Microsoft manages and uses customer data

Simplify your access to Microsoft tools with SOAP

Help your customers maximize their investment in Windows 10 and Office 365

US Partner Community partner call schedule

Community calls and a regularly updated, comprehensive schedule of partner training courses are listed on the Hot Sheet.

CMTrace without CMTrace


A quick blog to give a cheap pop to PowerShell and its rescuing power, yet again. Just a nice, quick and dirty trick to emulate CMTrace when you are in an environment that whitelists the software allowed to run on clients, or that simply disallows tools from being copied to systems without change control.

For those not familiar with CMTrace, it is a tool that comes with Configuration Manager. It is a text-based viewer that scrolls text files in real time as they are written, and it has been a long-time favorite since SMS, when it was called Trace32. It is, by default, located in the Tools folder of the Configuration Manager installation directory: C:\Program Files\Microsoft Configuration Manager\Tools.

So, say I want to look at the locationservices.log file and don’t have CMTrace – well, PowerShell and the old-school Get-Content cmdlet to the rescue. If I want to view that file in real time like CMTrace does, I can use the -Tail and -Wait parameters like this:

Get-Content -Path C:\Windows\CCM\Logs\locationservices.log -Tail 1 -Wait

…then restart the ccmexec (SMS Agent Host) service, or any other text based file (such as log) you know will be written to…  Go ahead, try this, I’ll wait…

This will show the log content the way Notepad would render it, but at least you can monitor the file. Sorry about the lack of reds and yellows that CMTrace is so known for, but it's a nice "second best".
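
If you really miss the colors, here is a rough sketch that approximates them; the 'error' and 'warning' match patterns are my own assumption, not CMTrace's actual parsing rules:

# Tail the log and color lines that mention errors or warnings
Get-Content -Path C:\Windows\CCM\Logs\locationservices.log -Tail 1 -Wait | ForEach-Object {
    if ($_ -match 'error') { Write-Host $_ -ForegroundColor Red }
    elseif ($_ -match 'warning') { Write-Host $_ -ForegroundColor Yellow }
    else { Write-Host $_ }
}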

— If you like my blogs, please share it on social media, rate it, and/or leave a comment. —

PS without BS: Extracting DHCP Reservations to a CSV


Just a quick snippet that shows how to extract existing reservations into a text file using PowerShell and Windows Server. Some have a need to export a list of IP addresses into a CSV file that they can quickly drop into Excel.

For those wondering, yes, I like using active DHCP reservations for all my servers, except domain controllers and Hyper-V hosts. It just makes things so much easier and consistent to configure.

That being said, the following PowerShell should do the trick…

Get-DhcpServerv4Scope | ForEach-Object {
    Get-DhcpServerv4Lease -ScopeId $_.ScopeId | Where-Object { $_.AddressState -like '*Reservation' }
} | Select-Object ScopeId, IPAddress, HostName, ClientID, AddressState | Export-Csv ".\$($env:COMPUTERNAME)-Reservations.csv" -NoTypeInformation

Your output will look like this…
"ScopeId","IPAddress","HostName","ClientID","AddressState"
"10.0.1.0","10.0.1.10","client1.domain.com","00-15-5d-01-81-09","ActiveReservation"
"10.0.1.0","10.0.1.11","client2.domain.com","00-15-5d-01-6e-16","ActiveReservation"

Happy scripting.

— If you like my blogs, please share it on social media, rate it, and/or leave a comment. —

PS without BS: Multiple Replace of a string


It's been a busy blog day, but this is the last one today (promise). There is a feature of PowerShell that makes it better than, say, your old VB/VBScript and really sets it apart.

To replace text in a string multiple times in VBScript, you had to do something like the following:
Dim string1
string1 = "Some text needs to be replaced"
string1 = Replace(string1, "Some", "Not All")
string1 = Replace(string1, "needs", "should be")

In PowerShell, you can combine these into one statement by chaining Replace calls:
$string1 = "Some text needs to be replaced"
$string1 = $string1.Replace("Some", "Not All").Replace("needs", "should be")

Your output from PowerShell: Not All text should be replaced
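
As a side note, the -replace operator chains the same way; here is a minimal sketch (keep in mind -replace treats the search text as a regular expression):

$string1 = "Some text needs to be replaced"
$string1 = $string1 -replace 'Some', 'Not All' -replace 'needs', 'should be'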

Great time saving tip in a pinch. Happy scripting.

— If you like my blogs, please share it on social media, rate it, and/or leave a comment. —

3 Ways Technology Will Save the Environment [Updated 6/10]


(This article is a translation of 3 Ways Technology Will Save the Environment, published on the Microsoft Partner Network blog on April 21, 2017. For the latest information, please see the original page.)

It may be hard to imagine a direct connection between technology and environmental protection. At first glance, the two seem entirely unrelated, even at odds (in English) with each other. Technology is born from human society's desire to reshape the surrounding environment to its own convenience. But technology has the power not only to improve people's productivity, but also to contribute greatly to a richer natural environment, and Microsoft is focused on that positive side.

In this article, I would like to introduce some Microsoft projects and partner examples that are putting technology to work for environmental protection right now.

Smart agriculture

Biotechnology is a remarkable field of research in which humans can use technology to influence the environment in ways not possible before. Introducing biotechnology not only increases crop yields, but can also preserve the quality of the environment as a whole, including water and soil (in English).

As one initiative, Microsoft enhanced a system that monitors pH levels and nutrient distribution in water so that farmers can grow healthy crops and increase their overall yields, with great results. Costa Farms, a family-run group of companies based in Miami, uses an Azure IoT system to improve efficiency and productivity. The solution uses the Feather M0 Wifi (in English) and pHSensor from Microsoft partner Adafruit, together with Microsoft Azure, IoT Hub, Stream Analytics, Event Hub, Azure Functions, and SQL Azure, to monitor the pH meters across the entire irrigation system and confirm in real time whether pH levels are appropriate.

Today, this kind of farm automation (in English) and innovative agricultural technology delivers major gains in efficiency, cuts waste, and keeps the negative impact on the environment to a minimum.

Protecting the environment with big data

As Microsoft and our partners recognize, data is essential for monitoring and managing the planet's resources. Rob Bernard, Microsoft's Chief Environmental Strategist, recently put it this way in a blog post on green technology predictions for 2017 (in English): "There is still so much we don't know about our planet. But many companies are beginning to focus on mapping and understanding the planet's carbon stocks, the gases in its atmosphere, its ecosystems, and the value they generate, and are investing heavily in building the tools and platforms needed to do so." With data, we can take proactive, practical measures on environmental protection and biodiversity management.

Microsoft partner HP is pursuing a remarkable initiative together with the conservation organization Conservation International and the Tropical Ecology Assessment and Monitoring (TEAM) Network (in English), a group of researchers around the world. The project uses new technology to study environmental change in tropical rainforests and ecosystems across the globe. Data such as the species present, plant growth, rainfall, temperature, carbon stocks, humidity, and solar radiation is collected in near real time using weather sensors and camera traps, enabling remote observation of wildlife populations and biodiversity. The project aims to use cutting-edge science and technology to gather data and unbiased information, and to support decision making that puts the environment first.

Intelligent infrastructure

The MPN blog has featured many examples of partner initiatives and technologies that make smart buildings and energy use more efficient. One standout is the initiative (in English) by Microsoft partner Accenture and the City of Seattle. With Accenture's help, Seattle planned a district-wide smart building program with the goal of reducing power consumption in the city center by up to 25%. Using Azure and SQL Server 2012, the program added predictive analytics to existing building management systems and optimized equipment to reduce energy use.

This smart building solution was deployed in five buildings, including a medical research facility, a major office building, industrial facilities, and a hotel. The solution is expected to cut energy and maintenance costs by 10 to 25 percent. Using Microsoft cloud-based software to extract useful, real-time insights from existing building data was genuinely groundbreaking, not only for building owners but for utilities as well. Through the CityNext program, Microsoft continues to support innovative initiatives that advance the digital transformation of cities, enable more efficient government operations, and dramatically reduce cities' environmental impact.

If you have ideas for using new cloud technology to protect the environment, we would love to hear them. We look forward to your feedback.


Moving the ConfigMgr site database to an Always On Availability Group


In a previous post we started talking about how we’re moving our workloads into Azure, this post continues that conversation. One important design decision is (or should be) High Availability. We’ve been running our SQL servers in failover clusters and using a SAN for storage, but this isn’t possible in Azure. It is possible, however, to use Always On Availability Groups (I’ll refer to this as an “Availability Group” or an “AG” from now on) in Azure.

This post will explain how to efficiently move the CM DB to an Availability Group. This process will be the same whether the AG is in Azure or not. However, if you’ve got a small database this method could be overkill and perhaps not worth the effort (depends on how “small” it is), but if you’ve got a large database this should be quite helpful.

Prerequisites

Naturally, the SQL servers should be created and configured already, including the permissions. I highly recommend creating the AG ahead of time with a dummy database so you can ensure everything in the AG is working and set up correctly, without having to do that troubleshooting while trying to move your production database. Thus, I’m going to assume that the AG has already been created. As for permissions, because the real work is performed by site recovery (a database move), check that the proper permissions are in place – just like you’d do for any recovery.

The last item before we get into the steps to move your CM database into an AG, is to make sure you have the server level settings (for SQL) properly configured for CM. Specifically, the server must allow CLRs and use a defined replication text size.* To do that run the following on each of the AG nodes.

USE [master];
GO
EXECUTE sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
GO
EXECUTE sp_configure 'clr enabled', 1;
RECONFIGURE WITH OVERRIDE;
GO
EXECUTE sp_configure 'max text repl size (B)', 2147483647;
RECONFIGURE WITH OVERRIDE;
GO

*If I get my way CM will also accept a value of “-1” for the “max text repl size (B)” since that’s much easier to tell people to use and is how to tell SQL to use the max size of each data type.

When I said we’d move the CM database into an AG “efficiently” what I meant was “with as little downtime as possible”. That means we’re going to do as much as possible before even starting our downtime – this could be days before but I don’t recommend waiting more than a couple days.

Pre-Downtime Activities

The first change we need to make on the CM database is to put it into the FULL recovery model. This is a requirement for being in an Availability Group. Something not for this blog post but worthy of mentioning is the need for SQL log backups for a database in the Full recovery model. If you’re going to use an Availability Group then you need to brush up on SQL backups (full and log backups specifically) and make sure to have a plan in place.

ALTER DATABASE [CM_xxx] SET RECOVERY FULL;

Because we’ve changed the recovery model, we need to take a new backup (even if you had one right before switching to the full recovery model). So, take a full backup of the CM database at this point. If you have a job that does this, you can kick it off or run the backup manually (see sample code below). However, if you do have a backup job, you’ll want to disable it. After we take this backup we don’t want any new full backups taken (at least until we’ve fully moved the CM database to the AG).

BACKUP DATABASE [CM_xxx] TO DISK = N'E:\**YOUR DESIRED BACKUP LOCATION**\CM_xxx_Full.bak'
WITH COMPRESSION, STATS = 1;

Now that we’ve got a new full backup we need to backup the log as well. Take a log backup of the CM database using the wizard, a job, or use something like this:

BACKUP LOG [CM_xxx] TO DISK = N'E:\**YOUR DESIRED BACKUP LOCATION**\CM_xxx_Log1.trn'
WITH COMPRESSION, STATS = 1;

In the next step you’ll be restoring the database using the backups which were just taken. The restore can be performed locally or from a network location. If you’re going to do it from local files, copy the backup files to the nodes. If you’re going to restore the database over the network (using a UNC path rather than a local path) you can skip this copy step.

Restore these backups to both of the AG nodes. In this step you have to restore the full backup first and then the log. And this is very important, you must use the NORECOVERY option! If you don’t use the NORECOVERY option in the restore then all of this is for naught. So make sure this is not forgotten. Trust me…I forgot one time and lost 6 hours of preparation.

RESTORE DATABASE [CM_xxx] FROM DISK = N'**THE LOCATION OF THE BACKUP FILES**\CM_xxx_Full.bak'
WITH NORECOVERY, STATS = 1;
RESTORE LOG [CM_xxx] FROM DISK = N'**THE LOCATION OF THE BACKUP FILES**\CM_xxx_Log1.trn'
WITH NORECOVERY, STATS = 1;

Now, and this is also very important, you must continue to take log backups until you’re ready to make the official move to the AG. So, if you’re going to wait a day or two it would be best to have a job scheduled to take a log backup every couple of hours unless you want to remember to do this yourself.
This is important to do for several reasons. The biggest reason being, if you don’t the log will continue to grow and perhaps fill up your disk, which means SQL stops working (for this DB at least). Another reason is because taking them more often will create smaller log backup files to copy and/or restore.

You will need to name each log backup something different than what was previously used. You’ll notice that the example log backup file has a “1” appended to the filename. That was intentional so that as you take additional backups you increment that number (or do something else to make the name unique).
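
If you script these log backups, here is a minimal sketch of generating unique, timestamped file names; it assumes the SqlServer PowerShell module is installed, and the instance name and backup folder are placeholders:

# Take a log backup with a unique, timestamped file name
$stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
Backup-SqlDatabase -ServerInstance 'CMSQL01' -Database 'CM_xxx' -BackupAction Log -BackupFile "E:\Backups\CM_xxx_Log_$stamp.trn"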

These log backups will need to be restored to each of the AG nodes just like the previous backups were. So, you can either copy the backups locally or perform the restore over the network just as previously done (again, making sure to use the NORECOVERY option). The restores can either wait until right before taking the downtime for the move or can be done throughout the time so there is less to do when it comes time to take the downtime. Oh, and if it isn’t clear, the restores will need to be performed in ‘oldest to newest’ order.

Downtime Activities

If you’re familiar with SQL then you’re realizing that we’ve essentially just created our own log shipping routine for our CM database – from the production server to our new AG nodes. Now that we’ve got our ‘log shipping’ activities happening and have waited until our downtime window we’re ready to begin the real move. If you have a job running to perform the log backups, it’s time to disable that job. That’s because we want to make sure to control exactly when the last log backup is taken.
Turn off the SMS services and wait 10-15 minutes. We’ll let things stop processing and wind down before calling things ready.

Once you’re comfortable with where things are at, take one last log backup and restore it to both nodes (just like previously done). At this point both AG nodes will have the latest data from the CM database. Both nodes should show the database in the “RESTORING” state. If not…you’re not ready and should turn the services back on and start over…and don’t forget to use NORECOVERY in the restore statements next time :). You can check the status in Object Explorer or with the following query.

SELECT  name
       ,state_desc
  FROM sys.databases
 WHERE name LIKE N'CM[_]___';

If things look correct at this point, then you’re ready to make one of the nodes the primary server for the CM database. To do this, on one node AND ONE NODE ONLY, run the following:

RESTORE DATABASE [CM_xxx] WITH RECOVERY;

Now this database should be in the “ONLINE” state and you can join the database to the Availability Group which you’ve already created (with the dummy database). Use the “Add Database…” wizard under “AlwaysOn High Availability –> Availability Groups –> [Your AG Name] –> Availability Databases”. In the screenshot below showing the location of this wizard you can see the dummy database I used to create the AG is named “CM_AAG” (for Azure Availability Group). I’ll be adding the CM database “CM_EA1” (for future screenshot reference).

When going through the “Add Database” wizard it is very important to choose “Join only” on the “Select Initial Data Synchronization” screen (should be the third screen to show up). If this option isn’t chosen then all the work we’ve done so far to minimize downtime is useless since SQL will have to take a backup and restore it to the other node. Since we’ve already done this via our ‘log shipping’, SQL does not need to perform a new backup and restore (a time consuming activity for larger dbs).
Finish going through the wizard and at the end your “Results” screen should look something like this:

At this point the database is part of the Availability Group on the new servers, and almost ready for the site recovery activity (database move). We’re not completely ready to perform this action because CM requires some database settings to be set for CM to work – and these settings aren’t kept in the backup/restore actions.

To ensure the required settings are set on the database, connect to the primary node and run the following SQL statements. We need to update these database settings on each database in the Availability Group because these settings aren’t replicated by SQL. Therefore, you’ll need to failover to each secondary (using SQL Server Management Studio, NOT the Failover Cluster Manager!) and run the same statements while the node is the “Primary” node. After each database has had these statements run against it while it was the “Primary” in the AG you can failover to whatever your preferred node is.
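
If you prefer to script these failovers, here is a hedged sketch using the SqlServer PowerShell module (run it on the secondary you are failing over to; the node, instance, and AG names are placeholders):

# Make the local secondary replica the new primary
Switch-SqlAvailabilityGroup -Path 'SQLSERVER:\Sql\NODE2\DEFAULT\AvailabilityGroups\YourAGName'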

USE [master];
GO
ALTER DATABASE [CM_xxx] SET TRUSTWORTHY ON;
ALTER DATABASE [CM_xxx] SET HONOR_BROKER_PRIORITY ON;
ALTER DATABASE [CM_xxx] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
GO

The last step is to perform the database move via a Configuration Manager Recovery. You’ll specify the listener name for the SQL instance. ConfigMgr will take care of the rest and since we’ve configured all the database and server settings the site recovery process will not try to do something it can’t do to a database in an Availability Group – namely, put it into single user mode to change the configurations.

There you have it, nice and easy right?

One thing to note about running CM in an Availability Group: at the time of writing this blog post the AG must be in the “manual failover” mode when performing a CM upgrade, however, before and after you can (and should) run the AG in “automatic failover” mode.

Checklists

Looking for an easy checklist for your own activities? Well, look no further!

Pre-Downtime Checklist

  • Run the following on each node that will be in the Availability Group
USE [master];
GO
EXECUTE sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
GO
EXECUTE sp_configure 'clr enabled', 1;
RECONFIGURE WITH OVERRIDE;
GO
EXECUTE sp_configure 'max text repl size (B)', 2147483647;
RECONFIGURE WITH OVERRIDE;
GO
  • Set permissions as in any site recovery process
  • Create the Availability Group using a “dummy” database
  • Optional: On the current CM DB server install SQL backup jobs – full and log backup job
  • Change the CM DB to FULL recovery model
  • Take a Full database backup (then disable this job for now)
  • Take a Log backup (and ensure the job is running every couple of hours)
  • Restore the first full and log backups to both of the nodes in the Availability Group (“WITH NORECOVERY”!)
  • Continue to backup/restore logs until ready for downtime

Downtime Checklist

  • Disable the log backup job
  • Turn off SMS services
  • Wait 10-15 minutes
  • Take one last log backup and restore this on both the nodes as previously done
  • Check that the CM database is in a status of “Recovering” on both nodes
  • On one node (and one node only!) run the following statement
RESTORE DATABASE [CM_xxx] WITH RECOVERY;
  • Add the database to the Availability Group via the Availability Group “Add Database” wizard (JOIN ONLY!)
  • Run the following script on the primary node
USE [master];
GO
ALTER DATABASE [CM_xxx] SET TRUSTWORTHY ON;
ALTER DATABASE [CM_xxx] SET HONOR_BROKER_PRIORITY ON;
ALTER DATABASE [CM_xxx] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
GO
  • Failover to the other node (via SSMS) and run the previous script again
  • Perform a DB Move via Configuration Manager Recovery
  • Make sure you have jobs or at least a plan for SQL backups including log backups!


Fixing issue in making cross domain Ajax call to SharePoint REST service in Chrome


This post is a contribution from Jing Wang, an engineer with the SharePoint Developer Support team.
Symptom:
A remote Ajax application is configured with Windows authentication. It makes an XMLHttpRequest to the SharePoint 2013 web service listdata.svc.
Sample code:

<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.cdnjs.com/ajax/libs/json2/20110223/json2.js" type="text/javascript"></script>
<script src="http://code.jquery.com/jquery-1.9.1.js" type="text/javascript"></script>
</head>
<body>
<h1>test page</h1>
<script type="text/javascript">
    // Ajax call to use listdata.svc
    var restUrl = "http://SharePointSiteUrl/_vti_bin/listdata.svc/List1";

    $.ajax({
        url: restUrl,
        type: "GET",
        dataType: 'JSON',
        headers: {
            "Content-Type": "'application/json;odata=verbose'",
            "Accept": "application/json;odata=verbose",
            "crossDomain": "true",
            "credentials": "include"
        },
        xhrFields: { withCredentials: true },
        success: function(response) {
            alert("Success");
        },
        error: function(response) {
            alert("Error");
        }
    });
</script>
</body>
</html>

When you use Chrome to browse to the above page, you will see the following error:
Failed to load resource: the server responded with a status of 401 (Unauthorized) dev.contoso.com/_vti_bin/listdata.svc/EMSPropertyLibrary()?$filter……
Below is a screenshot of the error in the browser developer tools console window:

Cause:
The XMLHttpRequest was sent with added custom headers, such as:
headers.append('Content-Type', 'application/json;odata=verbose');
headers.append('credentials', 'include');
These custom headers make the request NOT a "simple request"; see the reference at https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
Because the request here carries the header Content-Type: application/json;odata=verbose, it is not a simple request, so the following happens:

  1. The browser (Chrome) first sends a preflight OPTIONS request, without credentials, to the SharePoint WFE server that hosts listdata.svc.
  2. The server returns an HTTP/1.1 401 Unauthorized response to the preflight request.
  3. Because of the 401 Unauthorized response from the server, the actual web service request is dropped automatically.

Fiddler trace shows:

Solution:

Step I,

Force the SharePoint WFE server to return an HTTP status code of 200 for the preflight requests by using IIS’s URL Rewrite module:

  1. Install the Web Platform Installer from https://www.microsoft.com/web/downloads/platform.aspx
  2. Go to the “Applications” tab, search for “URL Rewrite”, and download it
  3. Open the IIS configuration tool (inetmgr) and select the root node with the machine name. Double-click “URL Rewrite” in the features view on the right-hand side.
  4. Add a new blank rule by clicking Add Rule –> New Blank Rule from the menu on the right
  5. Give it any name
  6. In “Match URL”, specify this pattern: .*
  7. In “Conditions”, click Add and specify this condition entry: {REQUEST_METHOD} with this pattern: ^OPTIONS$
  8. In “Action”, select the action type “Custom Response”, with status code 200, reason Preflight, and description Preflight
  9. Click Apply

Now, the server should reply with a 200 status code response to the preflight request, regardless of the authentication.
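
To sanity-check the preflight response, a quick PowerShell sketch (the URL is the hypothetical one from the error message above):

# Send an OPTIONS request the way the browser preflight does and inspect the result
Invoke-WebRequest -Uri 'http://dev.contoso.com/_vti_bin/listdata.svc' -Method Options -UseDefaultCredentials | Select-Object StatusCode, Headers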

Step II,
Since this is a CORS request, the above change is not enough to make the XMLHttpRequest call go through.
With the changes in Step I, the Chrome browser console shows a different error:
(index):1 XMLHttpRequest cannot load http://***/_vti_bin/listdata.svc…
Request header field crossDomain is not allowed by Access-Control-Allow-Headers in preflight response.

Make the following changes to the web.config for the SharePoint web application to allow the additional custom headers required for CORS.

Sample code block in web.config. You will need to update the value of Access-Control-Allow-Origin to point to your remote Ajax application.

<httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="http://AngularjsSiteUrl" />
        <add name="Access-Control-Allow-Headers" value="Content-Type,Accept,X-FORMS_BASED_AUTH_ACCEPTED,crossDomain,credentials " />
        <add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
        <add name="Access-Control-Allow-Credentials" value="true" />
      </customHeaders>
</httpProtocol>

The above changes will help to fix the issue and the Ajax request will now execute successfully.

Modern Management of Internet Clients


The release of ConfigMgr Technical Preview 1705 introduced new cloud-based client management capabilities, such as onboarding Azure AD users and deploying the ConfigMgr client over the Internet.

Common scenarios like BYOD or unmanaged/workgroup devices in the field can now join Azure AD, get enrolled into Intune, and automatically receive the ConfigMgr agent for full management.

The coolest part is that Azure AD-joined devices won’t even require a client authentication certificate for HTTPS communication.

Here’s a step-by-step walkthrough:

  1. Configure Azure Services in ConfigMgr Console
  2. Prepare Azure for Device registration
  3. Cloud Management Gateway
  4. ConfigMgr Client Package

    1.  Configure Azure Services in ConfigMgr Console

The first step is to associate Azure AD with ConfigMgr and discover the AAD users. This piece is critical because the information will be cross-verified when clients on the Internet try to register.

Run the wizard to create a Server App & Client App

The Application Name, HomePage & Identifier URLs can be anything.

Follow the same steps to create Client Application. You can re-use the same URL used above.

Enable Azure AD Discovery.

Click OK and finish the wizard.

You can verify the Server and Client Apps created in Azure and listed in ConfigMgr console.

From the Azure console, select the Server App [CM-ServerApp in my case], click Grant Permissions, and click Yes. Although the app is already configured for Read Directory Data, this step is still necessary to activate it.

Repeat the above steps for the Client App [CM-ClientApp in my case]. It’s important to follow this order, or the discovery will fail.

In case you are wondering where to look for this information, there is a new log file for this component named SMS_AZUREAD_DISCOVERY_AGENT.log:

    ERROR: Error occurred. StatusCode = Forbidden, reason = Forbidden    SMS_AZUREAD_DISCOVERY_AGENT    [Failed]

    Total AAD Users Found: 9. Total AAD User Record Created: 9    SMS_AZUREAD_DISCOVERY_AGENT        [Success]

    Full sync completed successfully at X:XX:XX    SMS_AZUREAD_DISCOVERY_AGENT

    Successfully published UDX for Azure Active Directory users.    SMS_AZUREAD_DISCOVERY_AGENT


2.  Prepare Azure for Device registration

Allow users to join their devices to Azure AD.

Make sure the MDM authority is set to Intune.

This will allow the machine to join to Azure AD and enroll to Intune.

 

3. Cloud Management Gateway

If you haven’t already configured one, follow the step-by-step blog post.

Since it’s possible to host the CMG on an HTTP MP, an important requirement in this scenario is to ensure the MP communicating with the CMG is in HTTPS mode.

Additionally, install ASP.NET 4.5.

 

4. ConfigMgr Client Package

Finally, it’s time to deploy the client over the Internet. You can leverage Intune to do this job.

Intune supports deploying .msi files, so we will use ccmsetup.msi with command-line parameters to install the ConfigMgr agent.

From the Azure console, open Intune > Mobile Apps, click Add app, choose Line-of-business app, and browse to CCMSetup.msi.

The app information has the name auto-populated; you can modify it and add additional details.

In the Command-line box, enter the parameters per the reference table below, for example: CCMSETUPCMD="/NoCrlCheck /Source:C:\CLIENT CCMHOSTNAME=CONTOSOCMG.CLOUDAPP.NET/CCM_Proxy_MutualAuth/720575XXXX SMSSiteCode=TP1 AADTENANTID=a2950cba-b6a5-4273-93b8-98e4994f33bb AADCLIENTAPPID=3f5d8103-4dc6-4c84-8b1c-b842XXX AADRESOURCEURI=https://contsoapps AADTENANTNAME=Contoso"

Table reference for the command line switches –

Once the app is ready, go to the Assignments tab to deploy it to a group. The app can be Available or Required.

Testing & Validation

Log in to an Azure AD-joined device with your Azure AD credentials. Based on the above configuration, the ConfigMgr client will either install automatically or be available for install.

In my case I left it as available and logged on to https://portal.manage.microsoft.com to install.


You can review CCMSetup.log for troubleshooting. If you don’t see any log and are wondering whether Intune even kicked off the install, you can review the MDM event log: Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin
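
A quick sketch to pull recent entries from that log with PowerShell:

# Show the 20 most recent MDM enrollment/management events
Get-WinEvent -LogName 'Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin' -MaxEvents 20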

Here’s a screenshot after a successful client registration. The device will be listed in the ConfigMgr console.


Troubleshooting Tips –

  • Ensure you are logged in with an AAD user ID.
  • Check ADALOperationProvider.log to validate successful association of the existing AAD user onboarded in Step 1 with ConfigMgr.
  • Check CCMMessaging.log to validate a successful connection to the gateway.
  • If you didn’t use a public CA for the CMG, you need to ensure the root certificate is added to the Trusted Root CA store on the client machine.

With the client installed, the possibilities are endless, with more improvements coming in future releases.

 

Thanks,

Arnab Mitra

Blockchain: A Solution That’s Changing How People Do Business [Updated 6/11]


(This article is a translation of Blockchain: A Solution That’s Changing How People Do Business, published on the Microsoft Partner Network blog on April 24, 2017. For the latest information, please see the original page.)

If there is one new technology poised to make a major impact on partners’ businesses, their industries, and partners themselves, blockchain fits the bill.

A blockchain is a distributed, shared ledger secured by cryptography. Its fundamental processes and model differ from those of traditional centralized systems, and it brings a range of benefits.

By adding blockchain support to the Azure Marketplace, Microsoft aims to put this new technology in partners’ hands and help transform their businesses and their approach to customer service.

Blockchain basics

Blockchain is a new way for businesses, industries, and public organizations to conduct and verify transactions in near real time. It streamlines business processes, saves costs, and reduces the risk of fraud. At its core, it is a data structure for creating a digital transaction ledger that, rather than entrusting ledger management to a single provider, is shared across a distributed network of computers.

With blockchain, you can build systems that are more open, transparent, and publicly verifiable. This will fundamentally change how we think about exchanging value and assets across industries, enforcing contracts, and sharing data.

Blockchain as a Service on Azure

Blockchain as a Service (BaaS) provides a fast, low-cost, low-risk, fail-fast platform on which companies can collaborate and experiment with new business processes. It is backed by a cloud platform with the largest compliance portfolio in the industry.

Microsoft recently announced (in English) expanded blockchain support in Azure, making Azure the first public cloud where blockchain networks can be built across multiple companies. The goal of this expansion is to support large enterprise scenarios that need to deploy private networks across multiple Azure regions, subscriptions, and Azure Active Directory (Azure AD) tenants. Building and configuring infrastructure and networks across organizations takes time, so, as with our first blockchain solutions, Microsoft automates this tedious work so that partners can focus on building consortiums and production pilots.

What’s next for blockchain

With blockchain, you can build new solutions to hard business problems. In financial services it can be used for collateral management and crowdfunding; in healthcare, for sharing prescriptions and sequencing DNA; and there are many other possible uses. Indeed, according to a World Economic Forum report (in English), by 2025 an estimated 10% of global GDP will be stored on blockchain or blockchain-related technology.

Here are some ways blockchain can be put to use, by industry.

Financial services: Rethink costly legacy workflows to increase liquidity and free up capital. Reduce infrastructure costs, increase transparency, lower the risk of fraud, and speed up transactions and settlement.

Healthcare: Connect patient records directly to hospitals and billing departments, eliminating third-party verification of medical information exchanges. Provide fast, securely authenticated access to personal medical records across providers and regions.

Government: Increase the transparency and traceability of spending. Track the registration of vehicles and other assets. Reduce the risk of fraud and lower operating costs.

Retail and manufacturing: Strengthen supply chain management, smart contract platforms, and digital currencies, and enable stronger cybersecurity.

If you are interested in putting blockchain to work, you can try it out with the following steps:

  1. Sign up for an Azure account.
  2. Set up blockchain on Azure (in English) and try out the templates.
  3. Join the blockchain advisory Yammer group (in English).
  4. Once you have decided it looks promising, build a lab.

How have you used blockchain so far, and how would you like to use it going forward? Please share your thoughts.

 

Update Rollup 3 for System Center 2016 Operations Manager #4016126


Update Rollup 3 for Operations Manager 2016 has been released. You can find information about this update on the Microsoft Support page, and you can download the updates from the Update Catalog.
Update Rollup 3 for Microsoft System Center 2016 – Operations Manager WebConsole (KB4016126) (ENU)
Update Rollup 3 for Microsoft System Center 2016 – Operations Manager Server (KB4016126)
Update Rollup 3 for Microsoft System Center 2016 – Operations Manager Gateway (KB4016126)
Update Rollup 3 for Microsoft System Center 2016 – Operations Manager Console (KB4016126) (ENU-AMD64)
Update Rollup 3 for Microsoft System Center 2016 – Operations Manager Agent (KB4016126) (AMD64)
Issues that are fixed in Operations Manager:

  • When you run the Agents module version 1.6.2-337, you may receive the following alert:
    Module was unable to convert parameter to a double value
  • When you run System Center 2016 Operations Manager in an all-French locale (FRA) environment, the Date column in the Custom Event report appears blank.
  • The Enable deep monitoring using HTTP task in the System Center Operations Manager console doesn’t enable WebSphere deep monitoring on Linux systems.
  • When overriding multiple properties on rules that are created by the Azure Management Pack, duplicate override names are created. This causes overrides to be lost.
  • When the heartbeat failure monitor is triggered, a “Computer Not Reachable” message is displayed even when the computer is not down.
  • The Get-SCOMOverrideResult PowerShell cmdlet doesn’t return the correct list of effective overrides.
  • When creating a management pack (MP) on a client that contains a Service Level (SLA) dashboard and Service Level Objectives (SLO), the localized names of objects aren’t displayed properly if the client’s CurrentCulture settings don’t match the CurrentUICulture settings. In cases where the localized setting is UK English (ENG) or Australian English (ENA), there’s an issue when the objects are renamed.
  • The Event ID 26373 error, which may cause high memory consumption and affect server performance, has been changed from a “Critical” message to an “Informational” message.
  • The Application Performance Monitoring (APM) feature in the System Center 2016 Operations Manager agent causes a crash for IIS application pools that run under the .NET Framework 2.0 runtime. Microsoft Monitoring Agent should be updated on all servers that use .NET Framework 2.0 application pools for the APM binaries update to take effect. A restart of the server might be required if APM libraries were in use at the time of the update.
  • The UseMIAPI registry subkey prevents collection of processor performance data for Red Hat Linux systems. Custom performance collection rules are also affected by the UseMIAPI setting.
  • Organizational Unit (OU) properties for Active Directory systems are not discovered or populated.
  • The Microsoft.SystemCenter.Agent.RestartHealthService.HealthServicePerfCounterThreshold recovery task fails to restart the agent, and you receive the following error message:
    LaunchRestartHealthService.ps1 cannot be loaded because the execution of scripts is disabled on this system
  • The DiscoverAgentPatches.ps1 script in Microsoft.SystemCenter.Internal.xml fails and you experience an exception.
  • An Unrestricted execution policy has been added to PowerShell scripts in inbox management packs (see the sketch after this list).
  • SQL Server Agent jobs for the maintenance schedule use the default database. If the database name is not the default, the job fails.
  • This update adds support for OpenSSL 1.0.x on AIX computers. With this change, System Center Operations Manager uses OpenSSL 1.0.x as the default minimum supported version on AIX, and OpenSSL 0.9.x is no longer supported.
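Two of the fixes above involve PowerShell execution policy on monitored systems (the LaunchRestartHealthService.ps1 failure and the Unrestricted policy added to inbox management pack scripts). Before or after applying the rollup, a quick read-only way to see which policy wins on an agent is sketched below; this is a general PowerShell check, not a command from the KB article.

# List the effective execution policy at every scope; the first defined scope
# in precedence order (MachinePolicy, UserPolicy, Process, CurrentUser, LocalMachine) wins.
Get-ExecutionPolicy -List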

 

System Center Management Pack for UNIX and Linux Operating Systems


Top Contributors Awards! June’2017 Week 2

$
0
0

Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot…

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 Richard Mueller with 123 revisions.

 

#2 Nourdine MHOUMADI with 82 revisions.

 

#3 M.Qassas with 51 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 RajeeshMenoth with 25 revisions.

 

#5 Peter Geelen with 20 revisions.

 

#6 S.Sengupta with 17 revisions.

 

#7 .paul. _ with 14 revisions.

 

#8 JunaidJan with 11 revisions.

 

#9 Ken Cenerelli with 11 revisions.

 

#10 pituach with 8 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Richard Mueller with 71 articles.

 

#2 M.Qassas with 43 articles.

 

#3 RajeeshMenoth with 22 articles.

 

Just behind the winners but also worth a mention are:

 

#4 Nourdine MHOUMADI with 19 articles.

 

#5 Peter Geelen with 11 articles.

 

#6 Ken Cenerelli with 6 articles.

 

#7 Carsten Siemens with 3 articles.

 

#8 S.Sengupta with 3 articles.

 

#9 .paul. _ with 2 articles.

 

#10 pituach with 2 articles.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was C# language best practices, by HR.Rony

This week’s revisers were Peter Geelen, HR.Rony & Nourdine MHOUMADI

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week’s largest document to get some attention is Master Data Services Capacity Guidelines 2016, by Smartysanthosh

This week’s revisers were Richard Mueller, Nourdine MHOUMADI, Burak Ugur & Smartysanthosh

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week’s most fiddled with article is Universal Windows application: Play a YouTube video, by Nourdine MHOUMADI. It was revised 28 times last week.

This week’s revisers were pituach, Nourdine MHOUMADI, M.Qassas & Peter Geelen

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is TechNet Guru Competitions – June 2017, by Peter Geelen

This week’s revisers were RajeeshMenoth, .paul. _, Mohsin_A_Khan, Nourdine MHOUMADI & Richard Mueller

 

Also updated by many people this week is Xamarin Troubleshooting: offline sync issues against Azure App Service using WireShark, by ahmed.rabie

This week’s revisers were M.Qassas, Nourdine MHOUMADI, ahmed.rabie, Burak Ugur, Peter Geelen & Richard Mueller

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week’s fastest ninja edits. That’s an edit made to an article only a short time after another person’s edit.

 

Ninja Award Winner Summary
Let’s celebrate our winners!

 

Below are a few statistics on this week’s award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

Richard Mueller

Richard Mueller has been interviewed on TechNet Wiki!

Richard Mueller has featured articles on TechNet Wiki!

Richard Mueller has won 183 previous Top Contributor Awards. Most recent five shown below:

Richard Mueller has TechNet Guru medals, for the following articles:

Richard Mueller’s profile page

Most Articles Award Winner
The reviser is the winner of this category.

Richard Mueller

Richard Mueller is mentioned above.

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

HR.Rony

This is the first Top Contributors award for HR.Rony on TechNet Wiki! Congratulations HR.Rony!

HR.Rony has not yet had any interviews, featured articles or TechNet Guru medals (see below)

HR.Rony’s profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

Smartysanthosh

This is the first Top Contributors award for Smartysanthosh on TechNet Wiki! Congratulations Smartysanthosh!

Smartysanthosh has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Smartysanthosh’s profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most.

Nourdine MHOUMADI

Nourdine MHOUMADI has won 3 previous Top Contributor Awards:

Nourdine MHOUMADI has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Nourdine MHOUMADI’s profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

Peter Geelen

Peter Geelen has been interviewed on TechNet Wiki!

Peter Geelen has featured articles on TechNet Wiki!

Peter Geelen has won 182 previous Top Contributor Awards. Most recent five shown below:

Peter Geelen has TechNet Guru medals, for the following articles:

Peter Geelen’s profile page

 

ahmed.rabie

This is the first Top Contributors award for ahmed.rabie on TechNet Wiki! Congratulations ahmed.rabie!

ahmed.rabie has not yet had any interviews, featured articles or TechNet Guru medals (see below)

ahmed.rabie’s profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

Nourdine MHOUMADI

Nourdine MHOUMADI is mentioned above.

 

Another great week from all in our community! Thank you all for so much great literature for us to read this week!
Please keep reading and contributing!

 

Best regards,
— Ninja [Kamlesh Kumar]

 

A roundup of business-model insights for succeeding with the Microsoft cloud [updated 6/12]


Observations on the profitability of cloud business models

 

When a company sets out to run a profitable business in the cloud world, the simple "resale" model that worked in the traditional IT market becomes harder to differentiate and its margins erode, so companies need to deliver some form of ongoing added value: "managed services" or "intellectual property delivered as a service" (a cloud practice).

Microsoft has studied many partner companies and published the findings as whitepapers: which kinds of cloud practices are being built in which industries and customer segments, how those practices are implemented, and how long it takes them to turn a profit.

This article gathers in one place the cloud business model documents and research published in various locations. We hope it serves as a reference when you implement a cloud practice.

Please also see the blog posts in the "Business Model" tag series and the "Cloud Partner Initiatives" campaign materials in the Partner Marketing Center.

 

Cloud business in general

 

Building a cloud practice with Microsoft Azure

 

Research

 

 

 

 

About the Windows 10 Start menu


Hello,

This is Kanda from Windows Platform Support.

In this post I'd like to describe the Windows 10 Start menu: the inquiries we receive at the support desk and the remedies we recommend. If you're curious about the new Start menu, or if you're struggling with a menu that won't appear, I hope you'll find this worth a read.

  • About the Windows 10 Start menu

Windows 8.1 and Windows Server 2012 R2 displayed a tile-based Start screen, but Windows 10 returns to a menu that expands from the taskbar, like the traditional Start menu.

In terms of behavior it feels like a return to Windows 7 and earlier, but because Windows 10 supports UWP apps (Universal Windows Platform, that is, Store apps) and must display them in the Start menu, internally the Start menu runs as a UWP process, launched separately from the Explorer.exe process.

The process that gets launched is ShellExperienceHost.exe, which, like other Store apps, is installed separately for each user who logs on.
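If you want to see this separation for yourself, a minimal check from PowerShell is sketched below.

# The Start menu runs in its own UWP process, separate from Explorer.exe.
Get-Process -Name ShellExperienceHost | Format-List Name, Id, StartTime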

  • About the service that renders Start menu tiles

The tiles displayed in the Start menu are built from a dedicated database file. This database is managed by the tiledatamodelsvc service (display name: Tile Data model server).
In Windows 8.1 and Windows Server 2012 R2 the tile display was likewise managed in a database, but the only managing process was Explorer.exe, so when the same user logged on in another session through Remote Desktop Services, a sharing violation could occur on the database file.
Windows 10 resolves this, in part, by having a service separate from Explorer.exe manage the database.
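A quick way to confirm that this service exists and is running on the machine you are troubleshooting:

# Display name on Windows 10 is "Tile Data model server".
Get-Service -Name tiledatamodelsvc | Format-List Name, DisplayName, Status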

  Reference
  About Explorer.exe crashes in Windows Server 2012 and Windows Server 2012 R2 environments
  https://blogs.technet.microsoft.com/askcorejp/2016/03/07/windows-server-2012-windows-server-2012-r2-explorer-exe-1238/

  • About Start menu troubles

The support desk receives many inquiries that the Start menu won't launch, and we often hear that the problem began after the machine joined a domain. In such cases, Group Policy applied at domain join is the likely cause.
In cases reported to us, the following policy settings turned out to be the cause:

  • Registry or folder permissions have been modified
  • Program execution is restricted by policy (Software Restriction Policies, AppLocker, and so on)
  • The Firewall service is disabled

For problems caused by these policies, we recommend checking whether any of them were rolled out for earlier OS versions (that is, without being validated on Windows 10). A quick way to review what is applied is shown below.
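One convenient way to review which policies are actually in effect is the built-in gpresult tool; the output path below is just an example.

# Generate an HTML report of the computer and user policies in effect.
gpresult /h C:\Temp\GPReport.html /f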
Apart from policy, the Start menu can also fail to appear because of problems such as the following:

  • The license activation data for ShellExperienceHost.exe is corrupted
  • The package deployment data for ShellExperienceHost.exe is corrupted
  • The Start menu tile database is corrupted

These can often be repaired with the tools and updates we provide, PowerShell cmdlets, or Windows commands.
The methods are introduced below; if you run into trouble, please give them a try.

  • The Start menu and Cortana troubleshooter

We provide a troubleshooting tool for the Start menu and Cortana.

Troubleshoot problems opening the Start menu or Cortana
https://support.microsoft.com/ja-jp/help/12385/windows-10-troubleshoot-problems-opening-start-menu-cortana

Running this tool may resolve the following problems:

  • Applications required by the Start menu are not installed correctly
  • Registry key permissions are incorrect
  • The tile database is corrupted
  • The application manifest is corrupted

Click [Run the troubleshooter] in the article, download Startmenu.diagcab, run it, and let it attempt a repair.

 

  • Install the update rollup

If the problem recurs after running the tool, or improves but keeps coming back at intervals, install the latest update rollup and check whether the symptom improves.
The Start menu process ShellExperienceHost.exe and the tiledatamodelsvc service described above are updated regularly through the cumulative update rollups, so bug fixes and performance improvements can be expected.
Windows 10 has moved from its initial version through versions 1511, 1607, and 1703, and rollups are provided for each version; applying the latest rollup for your version also includes all previously released fixes. (If you are unsure which version a machine runs, see the version check after the links below.)

Windows 10 update history – updates for Windows 10 (the initial version released in July 2015).
https://support.microsoft.com/ja-jp/help/4000823
Windows 10 update history – updates for Windows 10 Version 1511.
https://support.microsoft.com/ja-jp/help/4000824
Windows 10 and Windows Server 2016 update history – updates for Windows 10 Version 1607 and Windows Server 2016.
https://support.microsoft.com/ja-jp/help/4000825
Windows 10 update history – updates for Windows 10 Version 1703.
https://support.microsoft.com/ja-jp/help/4018124
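To tell which of the pages above applies to a given machine, the version can be read from the registry. A minimal sketch (the ReleaseId value exists on Windows 10 version 1511 and later):

# Read the Windows 10 version (1511, 1607, 1703, ...) and build number.
$cv = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
"Version $($cv.ReleaseId), build $($cv.CurrentBuild).$($cv.UBR)"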

  • Reinstall the Start menu app

The Start menu app is a UWP app, so it can be reinstalled with a PowerShell cmdlet. If the application manifest or the package deployment data is corrupted, running the reinstall cmdlet as the affected user may resolve the problem.

  1. Log on as the affected user and open the following folder:
         C:\Windows\System32\WindowsPowerShell\v1.0
  2. Double-click PowerShell.exe to start it. There is no need to run it as administrator.
  3. Run the following cmdlet to reinstall the Start menu app for the currently logged-on user:
    Get-AppXPackage | Where-Object {$_.InstallLocation -like "*shellexperience*"} | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}
  4. Confirm that the reinstall progress indicator in PowerShell (a green progress bar shown at the top of the window) reaches the right edge and completes without errors.
  5. Confirm that the Start menu displays correctly.
  • Rebuild the tile database

Windows 10 version 1511 and later provides a command that resets the tile database configuration when it is corrupted. Running it resets any Start menu customization, but if the tile database is corrupted, this reset command may resolve the problem of the Start menu not appearing.

  1. Press the [Windows] and [X] keys together and click Command Prompt in the menu that appears. There is no need to run it as administrator.
  2. Run the following command at the command prompt: tdlrecover.exe -reregister -resetlayout -resetcache
  3. Wait for the command to finish. Completion can take around 20 seconds, and no message is displayed even on success.
  4. When the command prompt accepts input again, tdlrecover.exe has finished; confirm that the Start menu displays correctly.

Applying the latest rollup or resetting the tile database resolves the problem for many customers. If the problem persists after these steps, or if you would like to confirm anything about them, please contact support and we will work with you toward a resolution.

Kanda, Windows Platform Support

 

A long tale and a short answer


Hello Y’all,

This is an interesting problem that I worked a couple of weeks ago, and it was a thought-provoking mystery until we figured out what was happening, as always, with a little help from my friends.

The mystery and the problem:

Here is the deal: no new computer objects that were being discovered by SCCM were showing up under

Assets and Compliance\Overview\Devices, but why?

We started looking for clues under the Data Discovery Manager logs, DDM.log:

*** Exec spGetNextIDInARange N’NextIds’, N’NextMachineID’, N’System_DISC’, 16777216, 25165823, N’ItemKey’ SMS_DISCOVERY_DATA_MANAGER
*** [42000][229][Microsoft][SQL Server Native Client 11.0][SQL Server]The SELECT permission was denied on the object ‘NextIds’, database ‘CM_PRI’, schema ‘dbo’. SMS_DISCOVERY_DATA_MANAGER
WARNING – GetNextIDInARange() failed to execute SMS_DISCOVERY_DATA_MANAGER CDiscoveryDataManager::ProcessDDR – could not get next available item key. SMS_DISCOVERY_DATA_MANAGER
CDiscoverDataManager::ProcessDDRs_PS – SQL problem detected. Will retry later. SMS_DISCOVERY_DATA_MANAGER CDiscoverDataManager::THREAD_ProcessNonUserDDRs – Failed to manage files in inbox. Will retry in at least 60 seconds SMS_DISCOVERY_DATA_MANAGER
Refreshing site settings….. SMS_DISCOVERY_DATA_MANAGER
Processing file adum3ayb.DDR SMS_DISCOVERY_DATA_MANAGER *** Exec spGetNextIDInARange N’NextIds_G’, N’NextUserID’, N’User_DISC’, 2063597568, 2080374783, N’ItemKey’ SMS_DISCOVERY_DATA_MANAGER
*** [42000][229][Microsoft][SQL Server Native Client 11.0][SQL Server]The SELECT permission was denied on the object ‘NextIds_G’, database ‘CM_PRI, schema ‘dbo’. SMS_DISCOVERY_DATA_MANAGER
WARNING – GetNextIDInARange() failed to execute SMS_DISCOVERY_DATA_MANAGER CDiscoveryDataManager::ProcessDDR – could not get next available item key. SMS_DISCOVERY_DATA_MANAGER
CDiscoverDataManager::ProcessUserDDRs_PS – SQL problem detected. Will retry later. SMS_DISCOVERY_DATA_MANAGER CDiscoverDataManager::THREAD_ProcessUserDDRs – Failed to manage files in inbox. Will retry in at least 60 seconds SMS_DISCOVERY_DATA_MANAGER

What we were seeing was an access denied for the ‘NextIds‘ and ‘NextIds_G’. OK, we get that, without a NextId, no new computers will be added… but who was trying to select those objects? Important: the SCCM environment was a Primary Site and a Remote SQL Database, so maybe the Computer account did not have the necessary access to the database, or something corrupted the access of the Primary server computer account in the SQL database. That was the theory, and theories are nothing if you can’t test them, right?

So, let’s test the theory by removing the computer account from the SQL database and re-adding it. Well, THAT was something I hadn’t done before, and it was a great learning experience.

(SPOILER ALERT! The steps below to remove and re-add the account did not solve the problem, but they were part of the process I went through. And of course, that’s the whole idea of these posts: even if some of the troubleshooting shown here didn’t solve my problem, it may help you resolve yours, because while the problems and error messages may be similar, understanding why you took certain troubleshooting steps can be as important as solving the problem itself.)

To remove and re-add the account:

  1. You need to be logged in via an account with SA (aka sysadmin) permissions for the database.
  2. PLAN! Find a maintenance window, because we’ll have to drop the connections between the Primary server and the database, and that means downtime! Because discovery was the only functionality we could see that wasn’t working (package and application deployment seemed OK, and software updates too), we didn’t want to cause any new problems while fixing the current one.
  3. From the Primary Site Server we manually stopped all SMS services; as a good reminder, use the command preinst /stopsite, because this will trigger a site reset once you start the server again. (You don’t have to believe me, you can read this article that explains it.)
  4. From the SQL box, we restarted the SQL Server service and monitored the connections using sp_who2, a very useful built-in stored procedure that shows all current connections. Once you no longer see the Primary server’s connections, we are ready to go.

Here is an example from my lab on how to remove, and re-add the computer account:

Once all the Services are stopped, we will delete the computer account from the User’s Security Node for the SCCM database: in my example, the CMCBTX-PRI-01.

Now we will remove the account from the Security Node

Next we will re-add the account: right-click under Logins and, under Login name, use domain\computername$, e.g. CMCBTX-PRI-01$

Once added, right click and under properties > Server Roles, make sure to add the account as SYSADMIN

Under User Mapping, select the SCCM Database and make sure to use the default schema as dbo, and as a role, add it with db_owner.
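For completeness, the same re-add can be scripted instead of clicked through in SSMS. Below is a minimal sketch under stated assumptions: the instance and domain names (CMCBTX-SQL-01, CONTOSO) are hypothetical placeholders, Invoke-Sqlcmd comes with the SQL Server PowerShell tools, and ALTER SERVER ROLE assumes SQL Server 2012 or later.

# Hypothetical instance and domain names; run as an account with sysadmin rights.
Invoke-Sqlcmd -ServerInstance 'CMCBTX-SQL-01' -Query @'
CREATE LOGIN [CONTOSO\CMCBTX-PRI-01$] FROM WINDOWS;
ALTER SERVER ROLE sysadmin ADD MEMBER [CONTOSO\CMCBTX-PRI-01$];
USE CM_PRI;
CREATE USER [CONTOSO\CMCBTX-PRI-01$] FOR LOGIN [CONTOSO\CMCBTX-PRI-01$] WITH DEFAULT_SCHEMA = dbo;
ALTER ROLE db_owner ADD MEMBER [CONTOSO\CMCBTX-PRI-01$];
'@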

After that, you should be able to start the SCCM services at the Primary site server.

I warned you, right? Yes, it did not work…

And we don’t give up just because one action plan didn’t work. Time to bring in the big guns: SQL Profiler. That dude will tell us what is happening, right? Partially… darn spoiler alerts again!

SQL Profiler is a snitch, but this time it was hiding something from us. This is where the mystery gets very interesting, at least for me.

So we ran a SQL Profiler trace, no big deal: select the SQL Server name, connect, go to Event Selection, check Show all events and Show all columns, right-click Errors and Warnings and select all, right-click Security Audit and Sessions and deselect all, then right-click TSQL and Stored Procedures and select all. Easy, right? It captures more than we need, but with SQL Profiler it’s sometimes good to have that extra.

So we ran SQL Profiler for about 5 minutes, just to be sure, and stopped the trace. I know, I know, I should have used SQL Profiler from the beginning, and I did; here is what I saw…

User Error Message        The SELECT permission was denied on the object 'NextIds_G', database 'CM_PRI', schema 'dbo'.     SMS_DISCOVERY_DATA_MANAGER        S-1-9-3-1264295285-1190618126-1402209955-3677970905        S-1-9-3-1264295285-1190618126-1402209955-3677970905

User Error Message        The INSERT permission was denied on the object 'NextIds_G', database 'CM_PRI', schema 'dbo'.     SMS_DISCOVERY_DATA_MANAGER        S-1-9-3-1264295285-1190618126-1402209955-3677970905        S-1-9-3-1264295285-1190618126-1402209955-3677970905

Interesting, right? So tell me: who is S-1-9-3-1264295285-1190618126-1402209955-3677970905?

That is the big question. He’s the one to blame for those access denied errors and the root cause of me not being able to sleep at night, but, what, who, how, where, when?

I was expecting to see an account, like the computer account and/or a user account. No, it was not that easy: that SID was nowhere to be found, and it was not in Active Directory (we checked), so what on earth was happening?

Researching the interwebs, we found this very interesting function that converts a binary SID into its readable string form. Now we’re on the right track, so let’s keep going. Here is the function, in case you don’t already have it in your database:

CREATE FUNCTION fn_SIDToString
(
@BinSID AS VARBINARY(100)
)
RETURNS VARCHAR(100)
AS BEGIN

-- A binary SID is 8 header bytes plus 4 bytes per subauthority,
-- so a valid one always has a length that is a multiple of 4
IF LEN(@BinSID) % 4 <> 0 RETURN(NULL)

DECLARE @StringSID VARCHAR(100)
DECLARE @i AS INT
DECLARE @j AS INT

-- Byte 1 is the revision level (the "1" in S-1-...)
SELECT @StringSID = 'S-'
+ CONVERT(VARCHAR, CONVERT(INT, CONVERT(VARBINARY, SUBSTRING(@BinSID, 1, 1))))
-- Bytes 3 through 8 are the big-endian identifier authority
SELECT @StringSID = @StringSID + '-'
+ CONVERT(VARCHAR, CONVERT(INT, CONVERT(VARBINARY, SUBSTRING(@BinSID, 3, 6))))

-- The remaining bytes are 4-byte little-endian subauthorities;
-- REVERSE each chunk before converting it to a number
SET @j = 9
SET @i = LEN(@BinSID)

WHILE @j < @i
BEGIN
DECLARE @val BINARY(4)
SELECT @val = SUBSTRING(@BinSID, @j, 4)
SELECT @StringSID = @StringSID + '-'
+ CONVERT(VARCHAR, CONVERT(BIGINT, CONVERT(VARBINARY, REVERSE(CONVERT(VARBINARY, @val)))))
SET @j = @j + 4
END
RETURN ( @StringSID )
END
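To sanity-check the function, you can run it over the SIDs of all database principals. A minimal usage sketch via PowerShell (the instance name is a hypothetical placeholder; Invoke-Sqlcmd comes with the SQL Server PowerShell tools):

# Hypothetical instance name; lists each database principal with its string SID.
Invoke-Sqlcmd -ServerInstance 'CMCBTX-SQL-01' -Database 'CM_PRI' `
    -Query 'SELECT name, dbo.fn_SIDToString(sid) AS StringSID FROM sys.database_principals'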

After creating the function, we tried to look inside the server principals, but nothing was found there with that SID.

Any more ideas? Yes! Always! What about EXEC sp_helprolemember?

And what if we could use the function we created above to convert each MemberSID into a readable SID string?

Yes! SQL Again!

--Step 1 - Create a TEMP TABLE to hold the output of the sp_helprolemember sproc
 CREATE TABLE #temp_table(DBROLE sysname, MemberName sysname, MemberSID varbinary(85))

-- Step 2 - Insert the output of sp_helprolemember into the TEMP TABLE
 INSERT INTO #temp_table
 EXEC sp_helprolemember

-- Step 3 - Use the function we previously created to convert the binary SID into the readable form

select *, dbo.fn_SIDToString(MemberSID) from #temp_table

-- Last step - once you are done with step 3, drop the table
 drop table #temp_table

With that part of the mystery solved, we could now see which account was using that SID. But why was the account ‘smsdbuser_ReadWrite’ being used, and why wasn’t it showing up by that name, appearing instead as S-1-9-3-1264295285-1190618126-1402209955-3677970905? We needed answers, so with the assistance of a SQL guy, the great PFE George Manson, we solved the problem by opening one of the stored procedures used in discovery data processing. The answer was in the sproc!

spGetNextIDInARange_internal ALTER PROCEDURE [dbo].[spGetNextIDInARange_internal]
@NextIDsTableName sysname,
@IDName nvarchar(30),
@ArchTableName sysname,
@RangeStart int,
@RangeEnd int,
@ColumnName sysname = N'ItemKey'
-- This uses dynamic SQL so default permission chaining doesn't work.
WITH EXECUTE AS 'smsdbuser_ReadWrite' -- needed to provide permissions for dynamic SQL statements

And we can see that the stored procedure is calling the account with the code: WITH EXECUTE AS ‘smsdbuser_ReadWrite’

If you want to read more about the EXECUTE AS argument here is the documentation.

So that answered a lot of my questions, because another thing I saw in the SQL Profiler traces was that the computer account was being used right up to the point of the error; when the error started happening, the account in the trace changed to that SID. We just needed to add the permissions back, but which ones? Looking at my lab, which had not been altered: db_datareader and db_datawriter. Under the Security node for the CM_PRI database > Security > Users, right-click smsdbuser_ReadWrite and, under Membership, add those roles (or script it, as shown below).
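If you would rather restore those memberships from a script than from the SSMS GUI, a minimal sketch follows: the instance name is a hypothetical placeholder, the database and user names are the ones from this story, and ALTER ROLE ... ADD MEMBER assumes SQL Server 2012 or later (older versions would use sp_addrolemember).

# Hypothetical instance name; restores the roles the EXECUTE AS user needs.
Invoke-Sqlcmd -ServerInstance 'CMCBTX-SQL-01' -Database 'CM_PRI' -Query @'
ALTER ROLE db_datareader ADD MEMBER [smsdbuser_ReadWrite];
ALTER ROLE db_datawriter ADD MEMBER [smsdbuser_ReadWrite];
'@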

Told ya, the answer was boring, but not the journey! Root cause? Someone looked at that guy and said, you should not have access to anything, I never heard of you, I will fix you, bye bye accesses, and…from what we could see…bye bye discovery.

Thanks to Umair Khan and Arjun Mohan for their time and help to tackle this problem and the wife for correcting the text.

See you next time!

Renato Pacheco | Support Engineer | Microsoft Configuration Manager

Disclaimer: This posting is provided “AS IS” with no warranties and confers no rights.

Announcing the preview of Azure's largest disk sizes


By Yuemin Lu (Program Manager, Azure Storage)

This post is a translation of Announcing the preview of Azure's Largest Disk sizes, published on May 30.

 

At the recent Build conference, Microsoft announced new disk sizes of up to 4 TB for Azure. The new disk sizes support up to 250 MBps of storage throughput and 7,500 IOPS. For details on the announcement, see the Build session video.

The new disk sizes are P40 (2 TB) and P50 (4 TB) for Managed/Unmanaged Premium Disks, and S40 (2 TB) and S50 (4 TB) for Standard Managed Disks. For Standard Unmanaged Disks, you can create disks of up to 4,095 GB. At present, these sizes are available only in the West Central US region, through ARM using Azure PowerShell or the CLI. Availability will expand gradually over the coming months to many regions worldwide, including through the Azure portal. Azure tools are also being updated so that VHDs larger than 1 TB can be uploaded.

The new disk sizes in detail

The table below summarizes the details of the new disk sizes.

                  P40         P50         S40             S50
Disk size         2,048 GB    4,095 GB    2,048 GB        4,095 GB
Disk IOPS         7,500 IOPS  7,500 IOPS  Up to 500 IOPS  Up to 500 IOPS
Disk bandwidth    250 MBps    250 MBps    Up to 60 MBps   Up to 60 MBps
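Because the preview is exposed through ARM via Azure PowerShell and the CLI, a new 4 TB managed disk can be created as sketched below. This is a minimal example under stated assumptions: the resource group and disk names are hypothetical, and the -AccountType parameter is from the AzureRM.Compute module of this era (later releases renamed it -SkuName).

# Hypothetical names; assumes Login-AzureRmAccount has already been run.
$diskConfig = New-AzureRmDiskConfig -Location 'WestCentralUS' -AccountType PremiumLRS `
    -DiskSizeGB 4095 -CreateOption Empty
New-AzureRmDisk -ResourceGroupName 'myRG' -DiskName 'myP50Disk' -Disk $diskConfig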

 
