
Key takeaways from Microsoft Inspire 2018


Microsoft Inspire 2018 provided an unprecedented opportunity for partners to make meaningful connections with even more of the Microsoft community. As we kicked off our fiscal year together, it was more apparent than ever that partnership is the key to staking a claim on $1.7 trillion of digital transformation opportunity in the U.S.

Speakers included partners, business leaders, and Microsoft executives like Gavriella Schuster, Brad Smith, Ron Markezich, and CEO Satya Nadella. Our partner ecosystem is strong and growing, having built more than 28,000 applications, services, and solutions with partners in the last year. With that success and momentum in mind, here are some of the key takeaways from Microsoft Inspire 2018:

Leaning into marketplaces and growing capabilities

We were proud to announce efforts to enhance AppSource by making even more resources available to partners, customers, and our sales team. Azure Marketplace is your one place for everything from finished apps and partner-to-partner bundles to managed services. This online store can be used to increase discoverability and app usage, drive more leads, and promote your solutions. MPN Partners have access to a Microsoft Marketplaces consultation at no cost from now through December 21, 2018.

Enhancing Go-to-Market benefits

Go-to-Market connects what we build with how we sell—together—and we’re not only expanding core benefits but offering more flexible benefits packages. New benefits will guide partners through building their marketing practice and support them in generating leads, improving lead velocity, and boosting close rates. Partners can also choose the benefits package that aligns to their business focus, whether that’s Modern Workplace, Biz Apps, Data & AI, or Apps & Infrastructure. Stay tuned for more details at partner.microsoft.com or through our upcoming MPN 101 calls, where we’ll review benefits in depth.

Supporting partner development

Differentiating your offerings is key to partner success, which is why we’re launching the Microsoft Azure Expert MSP initiative as well as offering new areas of specialization and competencies. The Microsoft Azure Expert MSP qualification gives customers confidence when selecting a partner to help them on their digital transformation journey by validating your highest degree of capability. In FY19, we’ll be launching advanced areas of specialization beyond the Microsoft Gold competency, including: Modern Workplace (GDPR, Teams), Biz Apps with Dynamics 365, Azure Stack, Cloud Migration, and Data & AI (machine learning, cognitive services). To improve the resources available to our partners, four new Digital Transformation eBooks and two Cloud Practice Playbooks are also on the way.

For more in-depth info and highlights from Microsoft Inspire, watch our recap on-demand. Then, head over to our MPN page to get plugged in, and follow @msuspartner for the latest updates.

For more info and ways to stay connected, check out our additional resources below:


Breaking Into Windows Server 2019: Network Features: Software Defined Networking (SDN)


Happy Wednesday to all of our great readers! Brandon Wilson here once again to give yet another pointer to some more outstanding content/information from the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. This time around, they are covering some of the new Software Defined Networking (SDN) capabilities in Windows Server 2019, and it's an excellent read in my humble opinion, but don't take my word for it! Here is some initial information straight from the product group:

"This week, the Windows Core Networking team continues their Top 10 Networking features in Windows Server 2019 blog series with: #7 - SDN Goes Mainstream

Each blog contains a "Try it out" section so be sure to grab the latest Insider's build and give them some feedback!

Here's an Excerpt:

If you've ever deployed Software Defined Networking (SDN), you know it provides great power but is historically difficult to deploy. Now, with Windows Server 2019, it's easy to deploy and manage through a new deployment UI and Windows Admin Center extension that will enable anyone to harness the power of SDN.  Even better, these improvements also apply to SDN in Windows Server 2016!

"

As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we'll see you again soon!

Brandon Wilson

Android passwords may not be enforced when selecting “device default” password type.


Not all Android devices are guaranteed to prompt for password creation if you have left the required password type as “device default”.  Also, if this is set inside a device compliance policy, these devices may still show as compliant.  In the future, Intune will remove this value to prevent the creation of policies that may not act as expected.  To prevent inconsistent behavior in policies created prior to that removal, edit your profiles and policies to select a different password type. Hybrid (Intune with Configuration Manager) customers do not need to take any action.

Note that if you select “any” or “required” as the password type, a biometric password will be acceptable.  To enforce additional password settings such as minimum password length, choose one of the numeric or alphanumeric password types.

These password settings are currently found in the following locations:

Device configuration > Profiles > Android device restrictions profile > Password tab


 

Device configuration > Profiles > Android enterprise, Work profile only device restriction profile > Work profile settings tab


 

Device configuration > Profiles > Android enterprise, Work profile only device restriction profile > Device password tab


 

Device configuration > Profiles > Android enterprise, Device Owner only device restriction profile > Device password tab


 

Device compliance > Policies > Android policy > System Security tab


 

Device compliance > Policies > Android enterprise policy > System Security tab


 

In addition to removing the “device default” value this fall, we will be making slight changes to align the controls between the different areas in the portal. Customers who currently have profiles or policies with “device default” will see communications in the Office Message Center about the need to modify this value and more definitive timelines on enforcement. Let us know if you have any questions!
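If you want to inventory which of your existing Android profiles still use “device default” ahead of that change, the sketch below is one way to do it with PowerShell against the Microsoft Graph beta endpoint. It assumes you already have an access token with permission to read device configurations, and the property names (passwordRequiredType, workProfilePasswordRequiredType) reflect the beta Intune schema at the time of writing; treat them as assumptions and verify against your tenant before relying on the output.

# Sketch: list Android profiles whose required password type is still "deviceDefault".
# $token is assumed to be a valid Microsoft Graph access token.
$headers = @{ Authorization = "Bearer $token" }
$url = "https://graph.microsoft.com/beta/deviceManagement/deviceConfigurations"
do {
    $page = Invoke-RestMethod -Method Get -Uri $url -Headers $headers
    $page.value |
        Where-Object { $_.'@odata.type' -like "*android*" -and
                       ($_.passwordRequiredType -eq "deviceDefault" -or
                        $_.workProfilePasswordRequiredType -eq "deviceDefault") } |
        Select-Object displayName, '@odata.type'
    $url = $page.'@odata.nextLink'   # follow paging until there are no more results
} while ($url)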

BlogMS Microsoft Team Blogs – July 2018 Roll-up

Microsoft 365 Partner Launchpad



Tim Tetrick

 


If you serve small business customers and you don’t have a department of savvy marketers, consider Microsoft 365 Partner Launchpad—a new selling tool that can help expand your Modern Workplace selling and marketing capabilities. Launchpad takes a practical approach to preparing for and closing sales faster, with improved profitability.

Launchpad can help you expand your solutions knowledge across practice areas—including Security and Compliance and Teamwork. You can combine cloud products, services, and hardware to create compelling customer offers, guide customer conversations, and generate customer demand.

For more information, see Announcing Microsoft 365 Partner Launchpad, then take a minute to explore and become familiar with Launchpad.

 

Datasheet builder: NEW Microsoft 365 Partner Launchpad tool

The Launchpad datasheet builder tool makes it simple to generate customized datasheets for your small business customers.

Datasheets can help small business customers understand Microsoft 365 benefits and your offerings in the context of their specific needs.

Compelling datasheets in four easy steps:

  1. Select either the Security and Compliance or Teamwork practice area.
  2. Add your offering’s unique value propositions.
  3. Upload your company logo and contact information.
  4. Deliver your newly created, two-page datasheet digitally as a PDF or print to use as a leave-behind following a customer presentation or as a handout at events.

Try the new Launchpad datasheet tool.  Take advantage of all the Microsoft 365 Partner Launchpad tools.

Moving Windows Server and SQL Server licenses from SPLA to CSP


As you may have heard, CSP partners can now buy Windows Server and SQL Server licenses on behalf of their customers directly in Partner Center. This is possible because of new functionality in Partner Center, described here.

This new functionality allows partners who are migrating SPLA-licensed VMs running Windows Server and SQL Server to Azure to switch from SPLA licenses to CSP with minimal effort. It can also be easily combined with Azure Reserved VM Instances for better pricing.

The current list of products includes:

  1. Windows Server Standard 8 Core License Pack + CALs
  2. SQL Server Standard 2 Core License Pack
  3. SQL Server Enterprise 2 Core License Pack
  4. Windows Server RMS CAL.

The list will be extended in the future with other on-prem Microsoft products.

SPLA licensing doesn't allow you to use Windows Server and SQL Server per-core licenses in the public cloud. The ability to use SPLA licenses in the public cloud for different Microsoft products is granted by the DCP Eligible property in the Service Provider Use Rights (SPUR) document, and for Windows Server and SQL Server it is specified as "SAL editions only". This means that non-SAL SKUs (per-CPU or per-core) don't provide DCP Eligible rights and can't be used in a public cloud such as Azure.

Windows Server DCP rights

 

SQL Server DCP rights

 

DCP definition from SPUR

So if you are moving your managed customers' VMs from your datacenter to Azure, you can't keep using per-core Windows Server and SQL Server licenses from SPLA after the migration. Possible options are:

  1. Use Azure Marketplace images with pay-as-you-go (PAYG) licenses for Windows Server and SQL Server included and pay for Windows Server and SQL Server based on the number of cores per hour.
  2. Use this new capability of Partner Center to buy Windows Server and SQL Server licenses for 1 year or 3 years on behalf of your customers.

While the first option provides more flexibility because of per-hour billing, the upfront software subscription purchase is more cost-efficient and more predictable. Both options differ from the monthly billing in SPLA, so you should decide what is better in your scenario: per-hour billing or a one- or three-year upfront purchase.

Keep in mind that Windows Server licenses purchased in Partner Center require Client Access Licenses (CALs) for every user or device accessing the (physical) server if it runs outside Azure. If the VMs run in Azure, then Azure Hybrid Benefit for Windows Server applies, which doesn't require CALs.

Prices for software subscriptions are available in a separate price list in Partner Center.

Remember: the billing currency is determined by the customer's location, not by the partner's location as it is for regular Azure services. For example, if you are selling Windows Server licenses using a Partner Center account with a Netherlands partner location and the customer is in Norway, you will be invoiced in Norwegian kroner for those licenses, not in euros.

Azure Hybrid Benefit for Windows Server

After procuring Windows Server licenses on behalf of your customer in Partner Center, you can use them for Windows Server VMs in Azure by utilizing Azure Hybrid Benefit (AHB) for Windows Server. A Windows Server VM with Azure Hybrid Benefit for Windows Server is priced the same as a Linux VM of the same size.

The regular Azure Hybrid Benefit for Windows Server rules apply to Windows Server licenses purchased through Partner Center:

  1. The number of required core licenses is based on the number of VM cores (whether or not they are hyper-threaded).
  2. The minimum number of Windows Server core licenses per customer in order to use Azure Hybrid Benefit is 16 (at least two 8-core packs).
  3. 16 core licenses allow you to run two VMs with up to 8 cores each, or a single 16-core VM.
  4. VMs with fewer than 8 cores still require 8 core licenses to be assigned.
  5. The price of a VM with Azure Hybrid Benefit for Windows Server is equal to the Linux VM price of the same size.
  6. You can create VMs with Azure Hybrid Benefit activated using the Azure portal, PowerShell, or even an ARM template. You can also enable Azure Hybrid Benefit on an existing Azure VM, converting it from a PAYG Windows Server license to AHB, and you can filter the VM list to show all VMs that are using AHB. Those topics are covered on this page; a short sketch follows this list.
  7. If you are migrating an existing VM to Azure, don't forget about Windows Server activation: switch to Azure KMS servers after the migration as described in this article.
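For example, here is a minimal PowerShell sketch of converting an existing VM to Azure Hybrid Benefit and filtering the VM list by license type, using the Az module (the resource group and VM names are hypothetical):

# Convert an existing pay-as-you-go Windows Server VM to Azure Hybrid Benefit
$vm = Get-AzVM -ResourceGroupName "ContosoRG" -Name "ContosoVM01"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "ContosoRG" -VM $vm

# List VMs that are already using Azure Hybrid Benefit
Get-AzVM | Where-Object { $_.LicenseType -eq "Windows_Server" } | Select-Object Name, ResourceGroupName, LicenseType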

Review Azure Hybrid Benefit FAQ for more details.

Azure Hybrid Benefit for SQL Server

After procuring SQL Server licenses on behalf of your customer in Partner Center, you can use them for Azure SQL Database by utilizing Azure Hybrid Benefit (AHB) for SQL Server. It reduces the cost of a single database or elastic pool in the vCore purchasing model, and also of Azure SQL Database Managed Instance (which is available in the vCore purchasing model only). You pay only for compute and don't pay for the SQL Server license twice.

The regular Azure Hybrid Benefit for SQL Server rules apply to SQL Server licenses purchased through Partner Center:

  1. One SQL Server Standard core license grants rights to 1 vCore of Azure SQL Database, General Purpose tier only.
  2. One SQL Server Enterprise core license grants rights to 4 vCores of Azure SQL Database in the General Purpose tier, or 1 vCore in the Business Critical tier.

Keep in mind that Azure Hybrid Benefit for SQL Server applies to Azure SQL Database; a short sketch follows below. If you want to use SQL Server licenses for Azure VMs running SQL Server, you can use the License Mobility benefit to avoid paying for SQL Server twice. The minimum number of SQL Server core licenses per VM is four. You can install SQL Server Standard in an Azure VM if you have enough SQL Server Standard core licenses; the same applies to Enterprise edition.
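As a hedged illustration, the vCore-based cmdlets expose this choice through a license type setting; the sketch below assumes the Az.Sql module and hypothetical resource names:

# Create a vCore-based database that uses Azure Hybrid Benefit ("BasePrice") instead of license-included pricing
New-AzSqlDatabase -ResourceGroupName "ContosoRG" -ServerName "contoso-sqlsrv" -DatabaseName "ContosoDB" -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -VCore 2 -LicenseType "BasePrice"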

Process walkthrough

Let's walk through the process by creating a sample Windows Server VM with Windows Server licenses procured through Partner Center.

  1. Go to Partner Center portal, select customer account from the list, and then select Software -> Add products.
  2. Select Segment: Commercial, Type: Software subscriptions, Terms: 1 Year or 3 Years.
  3. Select the quantity of required licenses and click Add to cart.
  4. Then click Review and Buy.

To use Windows Server licenses that you've just procured for new VM:

  1. Go to Azure portal and create a new VM using any Windows Server image.
  2. Choose Yes for the "Already have a Windows license?" question and confirm that you have enough Windows Server licenses with active Software Assurance. All licenses that you buy in Partner Center include Software Assurance for the duration of the software subscription.
  3. Proceed through the VM creation UI as usual.
  4. Go to the Configuration menu in the VM settings and ensure that Use existing Windows license is set to Yes.

For existing Windows Server VMs:

  1. If you want to activate Azure Hybrid Benefit for a VM that was created without it, just select Yes on the VM's Configuration page.
  2. If you want to activate Azure Hybrid Benefit for VMs that you are migrating to Azure using Azure Site Recovery, check this guide.
  3. If you've migrated a VM to Azure from your environment and it was previously activated using your internal KMS servers, then run the following commands inside the guest OS to point it at the Azure KMS servers for Windows activation instead:
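The commands commonly documented for pointing a migrated VM at the Azure KMS endpoint are the following (run in an elevated prompt inside the guest OS):

# Point Windows activation at the Azure KMS endpoint, then attempt activation
cscript.exe //nologo "$env:windir\system32\slmgr.vbs" /skms kms.core.windows.net:1688
cscript.exe //nologo "$env:windir\system32\slmgr.vbs" /ato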

To use SQL Server licenses that you've just procured for new Azure SQL Database:

  1. Go to Azure portal and start creating new Azure SQL Database.
  2. In the Pricing tier menu select vCore-based purchasing options, then select General Purpose or Business Critical tier.
  3. Click Yes for the Save money option to activate Azure Hybrid Benefit for SQL Server and confirm that you have enough SQL Server licenses with Software Assurance. All licenses that you buy in Partner Center include Software Assurance for the duration of the software subscription.
  4. Proceed through the SQL Database creation UI as usual.
  5. If you want to activate Azure Hybrid Benefit for a SQL database that was created without it, just select Yes on the Configure page in the database settings.

To use SQL Server licenses that you've just procured for new Windows Server VM with SQL Server:

  1. Go to Software menu in customer account page on Partner Center portal.
  2. Select SQL Server version and language.
  3. Click Get keys and downloads.
  4. Download the SQL Server ISO. A product key is not required for SQL Server installation; it is embedded in the installer.
  5. Create a new VM using a regular Windows Server image. Don't use SQL Server Azure Marketplace images, since you would be charged for the SQL Server license through the pay-as-you-go meter.
  6. Activate Azure Hybrid Benefit for Windows Server during the VM creation if you want to utilize existing Windows Server licenses.
  7. Proceed through the new VM creation UI as usual.
  8. Install SQL Server on that VM manually using the SQL Server installation media downloaded from Partner Center.

To use SQL Server licenses that you've just procured for new Linux VM with SQL Server:

  1. Create new VM in Azure using regular Linux template for one of the supported Linux platforms. Don't use SQL Server Azure Marketplace images since you will be charged for SQL Server license using pay-as-you-go meter.
  2. Proceed through new VM creation UI as usual. BYOL images for SQL Server are not available in CSP yet.
  3. Install SQL Server on that Linux VM manually. Installation media and a product key are not required to install SQL Server on Linux.
  4. Select the edition in the last step (Standard or Enterprise) according to the SQL Server license edition that you've purchased through Partner Center.

To migrate an existing SQL Server environment from on-premises to Azure and use SQL Server licenses purchased in Partner Center, you have three options:

  1. Migrate the source VM to Azure using Azure Site Recovery or similar VM lift&shift tool, activate Azure Hybrid Benefit for Windows Server if needed.
  2. Deploy new VM and install SQL Server as described above. Migrate existing SQL Server database using Azure Database Migration service or similar tool.
  3. Create Azure SQL Database or Azure SQL Database Managed Instance and activate Azure Hybrid Benefit for SQL Server. Migrate an existing SQL Server database using Azure Database Migration service or similar tool.

Try this great new functionality in your Partner Center account today. It will help you during the migration from your environment to Azure and will simplify the license conversion from SPLA to CSP. Also, combine it with Azure Reserved VM Instances and the upcoming Azure SQL Database reserved capacity to maximize savings after moving workloads to Azure.

Does SharePoint 2019 still need the SMTP Service?


Summary

Now that SharePoint 2019 public preview has been released, the official deprecated / removed list has been published. After reading this you may have noticed that "Incoming email automatic mode" has been removed from the product.

 

Incoming email automatic mode

As announced by the Windows Server team, Microsoft is deprecating the IIS 6 Management compatibility features in Internet Information Services (IIS). The automatic mode of the SharePoint incoming email feature relies on IIS 6 APIs to manage the IIS SMTP service. Because no alternative APIs exist to manage the IIS SMTP service, we are removing support for automatic mode in the incoming email feature of SharePoint Server 2019 Public Preview. Customers using incoming email are recommended to use advanced mode instead, which allows you to manually manage the IIS SMTP service and drop folder.

 

Also, you may have seen my blog detailing the changes in SharePoint 2019, stating that SMTP authentication is now supported and we no longer need a local SMTP server to support anonymous authentication.

 

Lastly, the SMTP service has been on the Windows Server deprecation list since the release of Windows Server 2012.

 

SMTP

SMTP and the associated management tools are deprecated. Though the functionality is still available in Windows Server 2012, you should begin using System.Net.Smtp. With this API, you will not be able to insert a message into a file for pickup; instead configure Web applications to connect on port 25 to another server using SMTP.
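In practical terms, relaying mail to a remote SMTP host on port 25 instead of dropping files for the local SMTP service looks roughly like the PowerShell sketch below; it is only an illustration of the "connect on port 25 to another server" guidance above, and the relay host name is hypothetical:

# Send a message through a remote SMTP relay on port 25 instead of the local IIS SMTP pickup folder
Send-MailMessage -SmtpServer "smtp.contoso.com" -Port 25 -From "sharepoint@contoso.com" -To "admin@contoso.com" -Subject "Relay test" -Body "Sent without the local SMTP service."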

 

So, what does this all mean?

SharePoint no longer "requires" a local SMTP service. SharePoint can still utilize the service to process incoming e-mail from the drop folder, but it is no longer a requirement.

 

Incoming e-mail in advanced mode gives you a single input: "E-mail drop folder".

 


 

When configuring advanced mode, admins must enter the drop folder location manually. Since the SharePoint farm service account (the account running the timer service) only needs access to e-mail from a drop location, it can be a local or remote file path that contains the incoming e-mail destined for the SharePoint farm.

 


Wrapping it all up

When the time comes and the SMTP service is removed from Windows, customers will need a method to move e-mail destined for SharePoint into a Windows folder for processing.

Business success starts with empowered employees [Published 8/9]


(This article is a translation of Business success starts with empowered employees, posted on the Microsoft Partner Network blog on July 26, 2018. For the latest information, please see the linked source page.)

 

 

Employees are the driving force behind business success and strong profitability. And business transformation is impossible without establishing a company culture that produces innovation.

Partners who have truly achieved business transformation start by transforming their people and their culture. Culture is the soul of a company. As business shifts from time-and-materials (T&M) projects to annuity-based service contracts, the culture needs to foster creativity and innovation, and there is a growing need to empower employees to pursue breakthrough ideas that drive business growth.

 

 

Some partners are also reworking how they hire, train, pay, and motivate employees in order to respond to an ever-evolving digital landscape and improve agility. A recent IDC survey of 600 partners worldwide found that digitally capable partners are modernizing their internal people ecosystems. The chart above breaks down the employee-focused initiatives partners are undertaking to raise their digital maturity.

Throughout this digital transformation research, we saw that companies value agility and reinforce it through their hiring systems.

 

Culture is at the center of everything

On this point, BitTitan CEO Geeman Yip puts it this way: "You can't achieve transformation externally before you transform internally. You need to foster a culture that creates and encourages collaboration, and build an environment that leads to innovation."

A culture that drives successful transformation embraces passion, fun, openness, transparency, accountability, integrity, and diversity. Diversity matters because sharing diverse ideas and goals is essential to transformation.

McKinsey's 2015 report "Diversity Matters" found that companies in the top quartile for gender diversity are 15 percent more likely to have financial returns above their industry median.

To encourage continuous learning and growth among employees, it is important to build a culture that tolerates risk-taking and trial and error. Start with senior leaders modeling the value of listening to ideas and putting the best ones into practice.

In an earlier blog post, "How to foster a culture of transformation," I explained that culture change starts at the top, and that leaders need to welcome ideas from every employee and recognize that employees are essential to innovation and transformation.

 

Hiring talent

The skills companies need are changing along with technology, and your competitors are also rethinking their strategies for hiring and developing talent. People with new skills are in high demand and in short supply. Attracting and retaining great talent is a challenge that directly affects revenue.

Responding to the challenges that come with customer growth requires teams with a broad range of expertise that can move quickly on problems that are often ambiguous. Employees who thrive in fast-changing environments are the right fit for such responsive teams. Check for culture fit and sound business judgment first, then develop the skills you need. To stay competitive, keep new skills in mind while adopting the latest technology.

Keep giving the people you hire opportunities to take on and complete challenges. In a competitive market, a professional development program helps you stand out.

 

Skill development

For digitally mature partners, talent development is an ongoing strategic activity. They use employee data to make placement decisions and drive skill development, and an eagerness to learn is part of their culture.

Developing people who can handle new lines of business requires structured, progressive learning paths. Self-service training and continuous certification programs let employees build skills at their own pace. Because finding people with specific digital skills is difficult and expensive, training employees who have the right temperament, experience, and core skills is often the best approach.

One way to encourage development is to offer career-development incentives. Microsoft partner Dimension Data creates a training profile for each employee and updates its status as training is completed. Employees can see which open roles match their training profile.

Traditional IT skills and experience remain valuable in digital transformation efforts. Microsoft partner Hanu runs a retraining program for long-tenured IT staff. While new fields such as data science rely on new-generation skills, many of their customers' scenarios still involve traditional technologies, so a broad range of skills is required.

To sustain a culture of learning, treat skill development and employee retention as talent-management KPIs.

Employees are the key to sustained growth and digital transformation. As you build your strategy, take the time to consider how you will support your employees for continued success.

The next title in the Microsoft digital transformation eBook series, "Optimizing Operations," will be published soon.

 

 


Finally Remove SMB1 with Project VAST


Many years ago, I read a fascinating article from 1948 in which various travel writers predicted the air travel experience of 1984.* In reading the article, I remember being surprised both by what they got right and what was off base. Namely, the article made no mention of airport security. By today’s standards (like the standards of 1984), this might seem naïve and even reckless. But recall that the article was written for a bygone world – one that tells us more about the world of 1948 than the one of 1984. It was written for a reality that no longer exists, in a time and place in which security was simply not top-of-mind yet.

Sound familiar? We’ve just been looking backwards from 1984 – now look forward. Were the computing realities of 1984 not unlike the physical realities of 1948?  Could the protocols and practices of computing’s bygone age possibly have anticipated today’s security realities? Of course not.

Hey folks, it’s Jon again with this month’s installment about the Visual Auditing Security Tool (Project VAST). Today I’d like to introduce a new VAST capability we have created in direct response to multiple customer and internal requests. Today I want to discuss Project VAST and SMB1.

Quick Review: What is SMB

For our project, we don’t need to spend very much time on how SMB works. My colleague and an owner of SMB at Microsoft, Ned Pyle, has written extensively on the subject (start here). Note that Microsoft has implemented three versions of SMB over the years; SMB2 and SMB3 are much more secure than SMB1.

The good news is that supported operating systems will use SMB2/3 by default. The bad news is that your out-of-support computers (you definitely don’t have any of those, right?  😊), such as Windows XP and Server 2003, support only SMB1. And as is the case with so many other deprecated protocols, your overall security posture may very well be set by these outdated computers and applications. This is not a good situation. (To be clear, there is no way to use SMB1 securely – at least not at the protocol level. To be very clear, if you are running out-of-support operating systems in your production environment, then SMB1 support is but one of the many security pitfalls you face.)

The other places that SMB1 may be coming from are outdated (or not updated) printers and multi-function devices. If you have devices on your network that require SMB1, then it’s time to have a heart-to-heart with the vendor or the team requiring their use. Ned and team publish a clearinghouse of SMB1 products here.

Digging a bit deeper – what’s the specific problem with SMB1?

If you’re convinced and just want to learn how to remove SMB1 without wrecking your environment, skip to the next section. But if you’d like to learn a bit more, read on. The security built into SMB2 and SMB3 is primarily around protecting against security downgrade attacks (Pre-authentication Integrity, Secure Dialect Negotiation) and attacker-in-the-middle attacks (Encryption, Insecure guest auth blocking). SMB1 simply doesn’t know about or contain any of these protections.

Worse yet, if you have SMB1 enabled on your file servers or (worse, worse yet) your domain controllers, then an attacker-in-the-middle can simply block SMB2/3 and force a downgrade attack. The net effect here will be that your computer(s) will gracefully and happily flip over to SMB1, thereby exposing the exchange’s data.

As if this all weren’t scary enough, the world experienced several massive ransomware attacks with SMB1 components last year. Petya/WannaCrypt and their variants marked a sea change in their use of rapid destruction combined with credential theft.** From the Microsoft Secure blog:

[Petya] ransomware drops a credential dumping tool (typically as a .tmp file in the %Temp% folder) that shares code similarities with Mimikatz and comes in 32-bit and 64-bit variants. Because users frequently log in using accounts with local admin privileges and have active sessions opens across multiple machines, stolen credentials are likely to provide the same level of access the user has on other machines.+

In addition to impersonation via stolen credentials, Petya/WannaCrypt also leveraged an EternalBlue exploit to utilize the ticking timebomb of SMB1 to worm through the environment.

One piece of good news is that MS17-010 solved this specific exploit. But we will almost certainly see other SMB1 exploits in the future.

Okay, I’m convinced – what to do about SMB1?

Unless you’re in the rare position of knowing exactly the sources of your SMB1 traffic, we need to start with auditing. Beginning in Windows 10 and Server 2016, and back-ported via an update to Server 2012 R2 and Windows 8.1, Microsoft made SMB1 auditing available. To enable SMB1 server-side auditing, you’ll run the following PowerShell command:

Set-SmbServerConfiguration –AuditSmb1Access $true

To set auditing on a group of servers or across a large swath of your environment, export a list using PowerShell or LDIF. And then run something like this:

$computers = Get-Content "C:\SMB_computers.txt"; foreach ($computer in $computers) {Invoke-Command -ComputerName $computer -ScriptBlock {Set-SmbServerConfiguration -AuditSmb1Access $true -Force}}

(In this example, SMB_computers.txt is the list of exported computers or servers to target.)

This will set auditing for the Applications and Services Logs\Microsoft\Windows\SMBServer\Audit log. Specifically, Windows will write a simple EventID 3000 each and every time a client attempts to access the server using SMB1. As shown below, you will log the IP address or hostname of the client. You will then need to don your best Sherlock Holmes cap and figure out what is using SMB1 on the client. Finding out is typically not very hard.
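If you just want a quick local look on a single server before aggregating the data, a sketch along these lines will summarize the audit events; the log name follows the default audit channel and the message pattern is an assumption, so adjust both as needed:

# Summarize SMB1 audit events (EventID 3000) by client on the local server
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-SMBServer/Audit'; Id = 3000 } |
    ForEach-Object { if ($_.Message -match 'Client (Name|Address):\s*(\S+)') { $Matches[2] } } |
    Group-Object | Sort-Object Count -Descending | Select-Object Count, Name -First 10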

As with other protocols, this logging presents several immediate problems. Natively, you will have to hop between servers to check the event logs. Depending upon the amount of SMB1 traffic you are seeing, you may be able to employ technologies like Windows Event Forwarding (WEF) to aggregate the log data. However, there’s a high probability that you will need more than just aggregation to deal with the sheer number of log events that you’ll be generating. And that’s where Project VAST comes in.

Enter Project VAST

As we’ve discussed in previous blog entries, we need to deal with this data in two ways. First, we clearly need to aggregate it and then we need to make the SMB1 big data set consumable and truly actionable.

We’ll start, as we always do in Project VAST, with Azure Log Analytics. Once we have EventID 3000 aggregated in Azure, we can create Kusto queries to view the data and control the output.

Notice a couple of things here. First, Azure Log Analytics combined with the Kusto query language makes this data highly extensible. With a single line, we can pull the ClientAddress attribute out of the common ParameterXml attribute.

Second, notice on the right-hand side, this query returned almost 7,500 records in just seven days of data. Even with some creative Azure Log Analytics queries, we may need to employ another step to render our data truly actionable. After all, recall that a central guiding principle of Project VAST is to provide the ability to make well-informed, data-centric decisions about security budgets.

A Closer Look

Let’s take a look at Project VAST’s new SMB1 tab. Recall that here we are exporting the Kusto query out of Azure Log Analytics and importing it into Power BI. This configuration allows Power BI to query Azure Log Analytics directly with no need for intermediary data sources. The SMB1 tab in Project VAST allows us to visualize the EventID 3000 data and filter the display in a number of ways.

Let’s start examining this tab in the box in the upper left. The box entitled SMB1 Traffic Flow (Top 5) is a visual representation of the top five largest SMB1 flows in the data set. In other words, this box shows us where to start. Notice that we have four IP addresses and a loopback address responsible for this SMB1 traffic. Because we enabled logging for SMB1 (and not 2 or 3), all of this traffic is part of our problem. We want this data addressed so that SMB1 can be removed, with haste, from our organization’s infrastructure.

Clicking on any one of these data flows will filter the other panes on the screen to only reflect that data flow filter. For example, clicking on the largest of the IP addresses, 192.168.2.57, would filter the data set to only reflect the flow from this IP address against Server2.Contoso.com (the server it is hitting).

We could also filter by ClientAddress or SMB Server in the boxes directly to the right. Or, below those boxes, we could also filter on the SMBv1 traffic flow by Destination. In fact, this is where I tend to start with customers. Without even filtering this sample data set, we see that Server1.Contoso.com is by far the largest offender for fulfilling SMB1 data requests. Let’s click on it and display only the SMB1 data going against this Server, Server1.

Clicking on it greys out the other SMB1 flows in the data set. SMB1 flows hitting Server1 are now the only ones in scope. This is reflected in the box we looked at originally -- the one that represented SMB1 client-server traffic flows:

At this point, we have a specific, actionable set of SMB1 client computers to investigate and remediate; in other words, we now have actionable data.

Our investigation here would quickly lead us to the clients making SMB1 calls against Server1 and then against other servers. It is likely that either the clients or something running on them are responsible for requiring the use of SMB1.

It is also possible that the server itself is demanding SMB1 usage through misconfiguration (I’ve seen that several times). This is all the more reason to remove SMB1 from the environment, and then to disallow its use globally.

If you have read my previous blog entries on Project VAST, the process should be becoming familiar to you. We have now surfaced a major vulnerability with SMB1 and can now take action to mitigate it. This is a central story to any modern and thoughtful security journey. Once we’re satisfied we’ve mitigated SMB1 traffic, then by all means we will disable it in the environment.

That wraps it up for Project VAST and SMB1 auditing for now. Good luck, let us know how we can help and, as always, happy auditing.

*If I get my hands on the article, I will post a link.

**Thanks to my friend and colleague Mark Simos for this insight.

+https://cloudblogs.microsoft.com/microsoftsecure/2017/06/27/new-ransomware-old-techniques-petya-adds-worm-capabilities/

PowerShell script to retrieve the Azure AD sign-in activity report as a CSV file using the Microsoft Graph API


Hello, this is Yo from the Azure & Identity support team.

 

Last time, we introduced a script that retrieves the Azure AD sign-in activity report and audit activity report in CSV format with PowerShell via the Azure AD Graph API.

 

As some of you may already know, Microsoft is encouraging the use of the Microsoft Graph API going forward, rather than the Azure AD Graph API.

 

This time, we introduce how to retrieve the Azure AD sign-in activity report in CSV format from a PowerShell script using the Microsoft Graph API.

Note that, compared with the previously introduced script, this script writes the contents of the sign-in activity report to the CSV file in more detail.

 

 

1. Following the steps described in the document below, register an application and gather the required information as described under "Gather configuration settings".

 

Prerequisites to access the Azure AD reporting API

https://docs.microsoft.com/ja-jp/azure/active-directory/active-directory-reporting-api-prerequisites-azure-portal

 

2. Open a text editor, copy the contents below, and set the placeholder values (shown in red in the original post) to match your environment. Then save it as a .ps1 file and run it.

This retrieves the sign-in activity report as a CSV file.

 

=======================================================================================================

$clientID        = "<application client ID>"      # The client ID of the application created in step 1.
$clientSecret    = "<application client secret>"  # The client secret of the application created in step 1.
$loginURL        = "https://login.windows.net"
$tenantdomain    = "<target tenant domain name>"  # The Azure AD tenant you are using, for example contoso.onmicrosoft.com.
$msgraphEndpoint = "https://graph.microsoft.com"
$outfile         = "<output file name>.csv"
$data = @()

# Get an OAuth 2 access token based on client id, secret and tenant domain
$body  = @{grant_type="client_credentials";resource=$msgraphEndpoint;client_id=$clientID;client_secret=$clientSecret}
$oauth = Invoke-RestMethod -Method Post -Uri "$loginURL/$tenantdomain/oauth2/token?api-version=1.0" -Body $body

if ($oauth.access_token -ne $null) {
    $headerParams = @{'Authorization'="$($oauth.token_type) $($oauth.access_token)"}

    $url = "$msgraphEndpoint/beta/auditLogs/signIns"
    Write-Output "Fetching data using Uri: $url"

    Do {
        $myReport = (Invoke-WebRequest -UseBasicParsing -Headers $headerParams -Uri $url)
        $myReportValue = ($myReport.Content | ConvertFrom-Json).value
        $myReportValueCount = $myReportValue.Count

        for ($j=0; $j -lt $myReportValueCount; $j++)
        {
            $eachEvent = @{}
            $thisEvent = $myReportValue[$j]
            $canumbers = $thisEvent.conditionalAccessPolicies.Count

            # Flatten the top-level and nested (status, deviceDetail, location) properties into columns
            $eachEvent = $thisEvent |
                select id,
                createdDateTime,
                userDisplayName,
                userPrincipalName,
                userId,
                appId,
                appDisplayName,
                ipAddress,
                clientAppUsed,
                mfaDetail,
                correlationId,
                conditionalAccessStatus,
                isRisky,
                riskLevel,
                @{Name='status.errorCode';Expression={$_.status.errorCode}},
                @{Name='status.failureReason';Expression={$_.status.failureReason}},
                @{Name='status.additionalDetails';Expression={$_.status.additionalDetails}},
                @{Name='deviceDetail.deviceId';Expression={$_.deviceDetail.deviceId}},
                @{Name='deviceDetail.displayName';Expression={$_.deviceDetail.displayName}},
                @{Name='deviceDetail.operatingSystem';Expression={$_.deviceDetail.operatingSystem}},
                @{Name='deviceDetail.browser';Expression={$_.deviceDetail.browser}},
                @{Name='location.city';Expression={$_.location.city}},
                @{Name='location.state';Expression={$_.location.state}},
                @{Name='location.countryOrRegion';Expression={$_.location.countryOrRegion}},
                @{Name='location.geoCoordinates.altitude';Expression={$_.location.geoCoordinates.altitude}},
                @{Name='location.geoCoordinates.latitude';Expression={$_.location.geoCoordinates.latitude}},
                @{Name='location.geoCoordinates.longitude';Expression={$_.location.geoCoordinates.longitude}}

            # Add one set of columns per conditional access policy applied to this sign-in
            for ($k=0; $k -lt $canumbers; $k++)
            {
                $temp = $thisEvent.conditionalAccessPolicies[$k].id
                $eachEvent = $eachEvent | Add-Member @{"conditionalAccessPolicies.id$k" = $temp} -PassThru

                $temp = $thisEvent.conditionalAccessPolicies[$k].displayName
                $eachEvent = $eachEvent | Add-Member @{"conditionalAccessPolicies.displayName$k" = $temp} -PassThru

                $temp = $thisEvent.conditionalAccessPolicies[$k].enforcedGrantControls
                $eachEvent = $eachEvent | Add-Member @{"conditionalAccessPolicies.enforcedGrantControls$k" = $temp} -PassThru

                $temp = $thisEvent.conditionalAccessPolicies[$k].enforcedSessionControls
                $eachEvent = $eachEvent | Add-Member @{"conditionalAccessPolicies.enforcedSessionControls$k" = $temp} -PassThru

                $temp = $thisEvent.conditionalAccessPolicies[$k].result
                $eachEvent = $eachEvent | Add-Member @{"conditionalAccessPolicies.result$k" = $temp} -PassThru
            }
            $data += $eachEvent
        }

        # Get url from next link
        $url = ($myReport.Content | ConvertFrom-Json).'@odata.nextLink'
    } while ($url -ne $null)

} else {
    Write-Host "ERROR: No Access Token"
}

$data | Sort -Property createdDateTime | Export-Csv $outfile -Encoding "utf8" -NoTypeInformation

=======================================================================================================

 

 

If you want to filter the sign-in activity report retrieved by this script in more detail, please refer to the following documentation.

 

signIn resource type

https://developer.microsoft.com/en-us/graph/docs/api-reference/beta/resources/signin
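As a quick illustration (a sketch only; check the documentation above for the properties that support filtering), you can narrow the request in the script above by appending an OData $filter to the URL:

# Example: only sign-ins on or after a given date for one user (values are placeholders)
$url = "$msgraphEndpoint/beta/auditLogs/signIns?`$filter=createdDateTime ge 2018-08-01T00:00:00Z and userPrincipalName eq 'user@contoso.onmicrosoft.com'"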

 

 

Next time, we will introduce how to retrieve the audit activity report.

 

 

We hope the information in this blog is helpful for your testing and operations.

 

For an official statement or answer regarding product behavior, our support team will respond after fully understanding your environment, so please make use of our support services.

* The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.

About RBAC scopes


Hello, this is Sakai from the Azure & Identity support team.

 

This time, I would like to introduce role-based access control (RBAC).

Also known as access control (IAM) for the various Azure resources, it helps you manage what can be done with a resource and which areas a user can access.

 

There are three key points when configuring RBAC:

 

Security principal: the object the permission is granted to. This can be a user, a group, or a service principal.

Role (permission) definition: the kind of permission to grant, chosen from the built-in roles or custom roles.

Scope: the range the permission applies to (subscription > resource group > resource).

If you grant a permission on the entire subscription, it is inherited down to the resource groups and resources.

 

From here, let's look at "scope" using concrete examples.

 

 

Scenario 1: Subscription scope

[Configuration]: "Owner" on subscription A; no permissions granted on subscription B

[Result]: The user can perform every operation on subscription A, but cannot even view subscription B.

[Steps]: [Subscriptions] - [Subscription A] - [Access control (IAM)], then "+ Add"

 

 

 

Scenario 2: Resource group scope

[Configuration]: "Owner" on resource group 1; no permissions granted on resource group 2

[Result]: The user can perform every operation under resource group 1, but cannot even view resource group 2. Operations that require subscription scope, such as creating a new resource group, are also not possible.

[Steps]: [Resource groups] - [Resource group 1] - [Access control (IAM)], then "+ Add"

 

 

 

Scenario 3: Resource scope

[Configuration]: "Owner" on resource a; no other permissions granted

[Result]: The user can perform every operation on resource a, but cannot view or operate on any other resource.

[Steps]: [Resource group 1] - [Resource a] - [Access control (IAM)], then "+ Add"

 

 

 

Scenario 4: Different roles at different scopes

[Configuration]: "Owner" on subscription A; "Reader" on resource group 4 in subscription B

[Result]: The user can perform every operation on subscription A. In subscription B, the user can only view (that is, read) resource group 4. This scenario shows that RBAC can assign multiple scopes and different roles to a single user or group object (a PowerShell equivalent is sketched below).

[Steps]: [Subscriptions] - [Subscription A] - [Access control (IAM)], "+ Add", and select the "Owner" role
   [Resource groups] - [Resource group 4] - [Access control (IAM)], "+ Add", and select the "Reader" role
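As a rough PowerShell equivalent of this scenario (cmdlet names are from the current Az module; the subscription IDs, resource group name, and user are hypothetical):

# "Owner" on subscription A
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Owner" -Scope "/subscriptions/11111111-1111-1111-1111-111111111111"

# "Reader" on resource group 4 in subscription B
Set-AzContext -SubscriptionId "22222222-2222-2222-2222-222222222222"
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Reader" -ResourceGroupName "ResourceGroup4"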

 

 

 

Scenario 5: Inheritance from the parent scope

[Configuration]: "Reader" on subscription A; "Owner" on resource group 2

[Result]: The user can view everything in subscription A and can perform every operation under resource group 2.

[Steps]: [Subscriptions] - [Subscription A] - [Access control (IAM)], "+ Add", and select the "Reader" role
    [Resource groups] - [Resource group 2] - [Access control (IAM)], "+ Add", and select the "Owner" role

 

 

- Note

If you swap the permissions given to the subscription and the resource group, granting "Owner" on subscription A and "Reader" on resource group 2, the stronger "Owner" permission is inherited by resource group 2 as well.

 

 

 

When the subscriptions are tied to different Azure AD tenants

All of the scenarios above assume that subscription A and subscription B are tied to the same Azure AD.

If each subscription is tied to a different Azure AD, for example subscription A to Azure AD 1 and subscription B to Azure AD 2, you need to switch directories in order to change settings or operate on resources.

 

 

Also, subscription permissions and Azure AD permissions are different: granting "Owner" on a subscription does not allow administrative operations in Azure AD, such as creating or deleting users or changing passwords, so please keep that in mind.

This is also covered in the linked blog post, so please refer to it as well.

 

 

We hope the above is helpful.

 

 

About the STOP error 0x00000050 message displayed on Windows-based computers


Thank you for using our products.
This is Ishida from Windows Platform Support.

This post explains how to handle STOP error 0x00000050 when it occurs in environments that reference process information, such as environments where Performance Monitor is used to monitor process information.

[Symptom]
On a computer based on Microsoft Windows Server 2012 or Microsoft Windows Server 2012 R2, in environments that reference process information (such as monitoring process information with Performance Monitor), the following STOP error message may very rarely be displayed and an unexpected restart may occur.


STOP:0x00000050 (parameter1, parameter2, parameter3, parameter4)

PAGE_FAULT_IN_NONPAGED_AREA (50)


Notes
The parameters in the STOP error message may differ depending on the computer's configuration.
Not every STOP error 0x00000050 message indicates this problem.

[Cause]
When a query for process information coincides with the termination of the target process, in a very narrow timing window the memory manager accesses an already freed region, and the STOP error is detected.

[Resolution]
This issue has been fixed in Windows Server 2016 and later. No fix has been released for Windows Server 2012 R2 and earlier.

[Workaround]
On Windows Server 2012 R2 and earlier, there is no way to completely avoid this issue. However, if you are collecting and monitoring process-information performance counters, you may be able to reduce how often it occurs by lowering the polling frequency or excluding those counters from monitoring.

The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.

[GPS] Three important news regarding Group Policy Search features

$
0
0

I haven't blogged about the GPS for a while now - there has not been much action around it - besides the continuous updates of the GPOs, currently supporting up to and including Windows 10 1803V2.

But we changed/added three features in the last few days/weeks:

  1. Supporting the trend to encrypt all data in transit, we have now switched to TLS only! As a user, you should not experience any difference besides the "s" in the URL.
    We do not send, receive, or store any PII or other confidential or sensitive data, therefore this step is not "really" necessary, but it doesn't hurt anybody, so there we go with https and a more secure web!
  2. As we hear a lot of requests like "what is new in Windows 10 version wxyz?", we changed some mechanisms in the db so that it is now possible to get a list of the fresh Win10 policies for the most current supported version.
    E.g. at the moment of writing this article, the most current and supported version of Win10 is "1803". Therefore you will get the new policies which came AFTER 1709 and are live in 1803. [Sidenote: we are adding new policies as soon as a new version is generally available (GA)]
    The link to get the latest Win10 policies is: http://aka.ms/gps/latest. Info: there is not, and there will probably never be, a possibility in the GPS to dig deeper into the different Win10 versions, as we expect everybody to stay current, so you will always come from n-1 and want to know what the differences are to n.
    Info 2: In some unlikely cases, new GPOs arrive for "older" or existing versions of Windows. These new policies are NOT covered by this feature; it will only show the newest policies supported starting with the most current version!
    Info 3: The easiest way to view this list is using Internet Explorer, because it comes as an XML (RSS) document (this is a legacy function...).
  3. Some weeks ago we already added a 6th language to the GPS: pt-PT is now a supported language in the GPS. This means that the Windows GPOs are already available in pt-PT; the Office policies are not yet -> we will jump onto the pt-PT Office policies with the next major Office release.

 

That's it for now - thanks for using the Group Policy Search! If you have any feedback or requests, we'd love to hear from you in the comments, on Facebook, or directly in my inbox.

 

Cheers

Stephanus

Script to help with Site Deletion feature


Hello All,

I'm sure many of you know about the Site Deletion feature; if you don't, please read here.  My customer was having some issues: users weren't responding to the emails and then complained when their sites were deleted.  They also realized later on that the feature doesn't look at SPWeb objects (subsites), so I wrote a quick script to help get around some of their concerns.

The script does the following:

  1. Reports to CSV on Url, CertificationDate (last date confirmed), and DeadWebNotificationCount (emails sent)
  2. Runs ConfirmUsage() on all sites in the whitelist
  3. Reviews LastItemModifiedDate for all subsites and runs ConfirmUsage() on all site collections that have active subsites.

The first thing to do is set the variables, and I have four of them:

$WebAppURLs = @("http://sp.weaver.ad","http://ecm.weaver.ad")
$SubWebAge = 39
$WhiteList = "C:\Temp\whitelist.txt"
$OutputFile = "C:\Temp\Output.csv"

The first variable sets which web applications you will interrogate. Next we need to know how long subsites must be inactive before we stop confirming usage for them. Then comes the whitelist, a text file with the site URLs of all sites we want to exclude from the Site Deletion feature, and finally the output file.

# Runs inside a loop over each site collection ($_ is the current SPSite);
# $WhiteListSite holds the whitelist entries (for example, Get-Content $WhiteList)
$SiteUrl = $_.Url
$CertDate = $_.CertificationDate
$NotificationCount = $_.DeadWebNotificationCount
Add-Content -Path $OutputFile -Value "$SiteUrl,$CertDate,$NotificationCount"
ForEach ($Site in $WhiteListSite) {
    if ($_.Url -eq $Site)
    {
        $_.ConfirmUsage() | Out-Null
    }
}

For each site collection we gather the Url, CertificationDate, and DeadWebNotificationCount and save them to a CSV file. We then compare the site URL to the whitelist file, and for each match we run ConfirmUsage() so that the owners never get emails and the site will never be deleted.

$Site | Get-SPWeb -Limit ALL | ForEach-Object {
    $TodaysDate = Get-Date
    $LastModifiedDate = $_.LastItemModifiedDate
    $DateDifference = (New-TimeSpan -Start $LastModifiedDate -End $TodaysDate).Days
    if ($DateDifference -lt $SubWebAge)
    {
        $Site.ConfirmUsage() | Out-Null
    }
}

Lastly, the script goes through all subsites and compares their LastItemModifiedDate to today's date. If the difference is less than the $SubWebAge variable, the subsite is considered active and we run ConfirmUsage() on the parent site collection (a sketch of the outer loop these fragments run inside follows below).
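For context, those fragments sit inside a loop over every site collection in the configured web applications. The full script is linked below, but the outer structure is roughly this sketch:

# Rough sketch of the outer loop the fragments above run inside
$WhiteListSite = Get-Content $WhiteList
foreach ($WebAppURL in $WebAppURLs) {
    Get-SPWebApplication $WebAppURL | Get-SPSite -Limit ALL | ForEach-Object {
        $Site = $_
        # ...per-site reporting, whitelist check, and subsite activity check shown above...
    }
}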

You can find the full script here

Pax

Files can now be opened directly in edit mode in Word/Excel/PowerPoint Online from SharePoint Online/OneDrive for Business


Hello, this is Nishikawa from Office Support.

 

When you open a file from SharePoint Online/OneDrive for Business in Word/Excel/PowerPoint Online, it used to open as read-only; as described in the article below, it now opens directly in edit mode.

 

Title : Edit faster in Word, Excel and PowerPoint

URL : https://techcommunity.microsoft.com/t5/SharePoint-Support-Blog/Edit-faster-in-Word-Excel-and-PowerPoint/ba-p/220335

 

This lets you edit right away without clicking the "Edit" button, so you can work more seamlessly.

On the other hand, we sometimes receive inquiries such as "an Excel file was updated even though the user didn't realize they had edited it."

This happens when the following conditions are met.

 

Conditions

・The file contains a volatile function such as the NOW function

・The major version of Excel that last saved the file is an older one

 

On the other hand, we also receive inquiries from customers who use files that match the above conditions and do not want them opened directly in edit mode.

As mentioned in the article above, an option to choose whether files open as read-only or in edit mode is currently under consideration. For now, you can open files as read-only with the following methods.

 

Method 1: Use the preview

-----------------------------------

Right-click the file in the document library and click "Preview" to display it.

 

Method 2: Require file check-out in the SharePoint Online document library settings

-----------------------------------

In the SharePoint Online document library settings, require files to be checked out.

 

Title : What is AutoSave?

URL : https://support.office.com/ja-jp/article/6d6bd723-ebfd-4e40-b5f6-ae6e8088f7a5

* See "Can I share files in a way that prevents other users from accidentally changing them?"

 

Note that if you configure the library to require check-out, there are caveats: checking out becomes mandatory when editing, and co-authoring is no longer possible.

The details are described in the following article, so please refer to it.

 

Title : Set up a library to require check-out of files

URL : https://support.office.com/ja-jp/article/0c73792b-f727-4e19-a1f9-3173899e695b

* See "What happens when files must be checked out in a library?"

 

Method 3: Open documents in the client application by default in the SharePoint Online document library settings

-----------------------------------

With the following steps, files are opened in the Office client application, where they can be opened as "read-only".

 

- Steps

1. Go to the library whose settings you want to change and click [Library settings] under the gear button at the upper right.

2. In the [General Settings] section, click [Advanced settings].

3. In the [Opening Documents in the Browser] section, select "Open in the client application" and click [OK].

However, if you are using the latest Office 365 ProPlus, AutoSave is enabled in the client applications as well.

 

The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.


Cloud Platform Release Announcements for August 8th, 2018


Azure Virtual Machines | Azure reserved instance size flexibility

Pricing Linux | Pricing Windows | Azure Reserved VM Instances Webpage

We recently introduced instance size flexibility—a new feature that's applicable to all existing and new Azure Reserved Virtual Machine (VM) Instance purchases.

Instance size flexibility can:

  • Simplify the management of Azure Reserved VM Instances.
  • Avoid the need to exchange or cancel a reserved instance to apply its benefit to other virtual machines within the same Azure Reserved Instance VM group and region.
  • Help you further reduce costs in many scenarios.
  • Automatically apply Azure Reserved Instance benefits that have been purchased to other VMs within the same group and region.

Instance size flexibility applies to both Windows and Linux Azure Virtual Machines. For general information regarding this new feature please visit the Azure Reserved VM Instances webpage. Also read the engineering blog or the documentation for a more comprehensive overview.

Azure Management Groups | GA

We recently announced the general availability of Azure Management Groups, a free Azure service that allows you to organize subscriptions and apply governance controls such as Azure Policy and role-based access control (RBAC) through hierarchical structures. This service is fully integrated into the Azure platform and enables policy-based governance at scale by allowing you to group subscriptions and other management groups to form a hierarchy.

Use Management Groups to reduce your workload and the risk of error by avoiding duplicate assignments. Instead of applying multiple assignments across numerous resources and subscriptions, apply the one assignment on the one management group that contains the target resources. This will save time in the application of assignments, create one point for maintenance, and allow for better controls on who can control the assignment.

To learn more, read the full blog post.
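As a hedged sketch of what that looks like with the Az PowerShell module (the group names, subscription ID, and user are hypothetical):

# Create a management group, place a subscription in it, and make one role
# assignment at the group scope so it is inherited by everything below it.
New-AzManagementGroup -GroupName "Contoso-Prod" -DisplayName "Contoso Production"
New-AzManagementGroupSubscription -GroupName "Contoso-Prod" -SubscriptionId "11111111-1111-1111-1111-111111111111"
New-AzRoleAssignment -SignInName "ops@contoso.com" -RoleDefinitionName "Reader" -Scope "/providers/Microsoft.Management/managementGroups/Contoso-Prod"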

Azure security and operations management | Tips on hardening security with Azure Security

On the latest episode of Microsoft Mechanics, you’ll see how the Microsoft Threat Intelligence Center is helping to secure Azure and the global security landscape. The Threat Intelligence Center mitigates attacks that target the Azure platform, and that intelligence is fed back into our services for increased protection.

Watch the episode to see how to strengthen your organizational security using these tools.

Azure Database Migration Service | Save project and run activity

The Azure Database Migration Service now supports the ability to create a migration project and perform a specific migration activity, for example offline migration, in a single workflow. This new capability reduces the number of steps required to perform migrations when using DMS.

Learn more about Azure Database Migration Service.

Azure Database Migration Service | Support for using existing backup files for migration

The Azure Database Migration Service now also supports using existing database backup files when performing migrations, further reducing the preparation steps required when using DMS.

Learn more about Azure Database Migration Service.

Azure SQL Database | Save up to 80 percent with reserved capacity and Azure hybrid benefit


Azure SQL Database reserved capacity is now available for single and elastic pool databases, expanding our commitment to making Azure the most cost-effective cloud for your workloads. This new pricing option saves you up to 33 percent compared to license-included pricing by pre-paying for your SQL Database vCores for a one-year or three-year term. Save up to 80 percent when you combine reserved capacity savings with Azure Hybrid Benefit for SQL Server. Improve your budgeting and forecasting with a single upfront payment, making it easy to calculate your investments.

Learn more.

Linux on App Service Environment | Now generally available

Linux on App Service Environment is now generally available. This feature allows our Linux and container users to deploy their apps into a subnet of an Azure Virtual Network. This means that you can bring your code and use one of our built-in Linux images, or just bring your own container.

Linux, containerized, and Windows apps can now be deployed in the same App Service Environment. Deploy your App Service Environment with an internet-accessible or internal endpoint to access resources in your Virtual Network and to take advantage of higher scaling capabilities. Also, we have now expanded Linux on App Service Environment to all App Service regions.

Learn more here, and get started with this quick start.

Azure App Service | App Service Windows Container in preview

Windows Server container support in App Service is now available in preview. Push your Windows container image to Docker Hub, Azure Container Registry, or a private registry, and deploy your containerized app to App Service. This feature enables a straightforward way to migrate .NET applications to Windows containers and run them on Azure through a lift-and-shift straight-to-PaaS approach. Migration scenarios involving applications with dependencies, or the need to include custom components in a multitenant environment, would also benefit from Windows Server container support. Now available in select regions in preview.

Learn more.

Ethereum Proof-of-Authority on Azure

We recently announced the release of Ethereum Proof-of-Authority on Azure, a new template on the Azure Marketplace that allows you to quickly configure and deploy a blockchain network using a PoA consensus approach.

Unlike Proof of Work, which uses computation costs to self-regulate the network and allow fair participation, Proof-of-Authority has a more relaxed consensus approach more suitable for permissioned networks where participants are known and reputable. Without the need for mining, Proof-of-Authority is more efficient while still retaining Byzantine fault tolerance, and it has transaction speeds up to 100 times faster than public Ethereum.

To learn more, read the full blog post.

Tips: Thinking like a recruiter can open many doors in your job search

By: Toyinaminia 'Toy' Norwood, Microsoft Life blog

A Microsoft recruiter shares how taking a "cold connection" approach with a recruiter can bring you closer to the job you want.

Building your career is a journey full of challenges, excitement, and obstacles. And journeys are easier with maps. In these posts, career experts answer your questions and share tips to help you take the next step.

Question: I'm interested in a position I found on a job site. I contacted a recruiter at the company through LinkedIn, but I didn't get a response. Did I do something wrong?

Answer: If you found the perfect role on a job site, you may have been tempted to run a quick LinkedIn search, identify a recruiter who works at that company, and reach out. Sometimes this approach works, but very often you get no response. Why is that?

Although LinkedIn is a great way to connect with people during a job search, you may have gone about building the connection the wrong way, or even contacted the wrong person.

Mike Maglio, a Microsoft recruiter, offers a simple approach to using LinkedIn that increases your chances of getting a response and making a meaningful connection. His secret? Think like a recruiter.

Use LinkedIn as a search tool

It's no surprise that recruiters use LinkedIn's search tool to find potential candidates for their open roles. The trick, Maglio says, is for job seekers to use that same search tool to find the recruiters who may be hiring for the jobs they want.

"In their profiles, many recruiters will explain what they do and which organizations they cover so that searches surface them more precisely," he said. You can find them by running your own search.

For example, if you're a software engineer passionate about working on Azure technology, search for "Azure AND recruiter AND Microsoft". Maglio suggests that job seekers use Boolean search logic with terms like "AND" to get more relevant results and a more precise list of recruiters in that space. "Use filters such as current company, location, and so on to get even more relevant results," he added.

"Even within a product as big as Azure, you should be as specific as possible with your search," Maglio said. "The more targeted your search, the better."

Review the profiles of the recruiters you find, then choose a few that match your specific qualifications, such as software engineer, recent graduate, and Azure solutions.

Introduce yourself

Now that you've located the right recruiters, it's time to introduce yourself. Craft a message that is concise, precise, and offers information that explains who you are. "Recruiters receive a lot of messages, so being direct and specific increases the chance that you'll get a response," Maglio said.

Use a friendly greeting, such as "Hi [recruiter's name]," and then be clear about what you're looking for (for example: a referral for a role, a connection to a team, information, and so on). A recruiter will look at your profile, so you don't need to send a full resume or write an introduction covering all of your experience.

Do you have a connection in common? Mention that person in your introduction, or better yet, ask your mutual connection to make an InMail introduction between you and the recruiter, Maglio suggests. This gives you an automatic "trust boost" because the recruiter is already familiar with the person recommending you.

Explain what you want

"If you're reaching out about a role, include the link to the job posting. Let the recruiter know you're interested and would like to be considered for that role," he said. This also helps recruiters connect you with other recruiters or hiring teams, in case that specific role is handled by someone else.

If you just want more information, be clear about that. If the recruiter can help, they will likely schedule time to talk with you or even refer you to someone else in the organization.

Give them a reason to believe

Recruiters need to understand who you are beyond your resume and your LinkedIn profile, so use this opportunity to show them what you can bring to the company or the role.

"You need to be able to demonstrate your value and show that you're an informed applicant, but be concise," Maglio said.

"You can briefly reference a relevant article or announcement tied to what you're passionate about. Or, if possible, mention a patent, apps you've built, or a project presentation that can be viewed," he added.

These examples show your passions and interests beyond your resume. "But keep the message short and direct," Maglio said. "The last thing you want to do is bury that kind of information."

Don't stop looking for a connection

If you've followed these steps and haven't been able to connect with the first set of recruiters you identified, keep applying and refining them.

The right connection is out there, along with the role of your dreams.

Azure Stack: an introduction to the cloud that wandered into your basement


Azure is a big cloud with a fantastic pace of development, enormous flexibility and capability, a broad range of services, and it is an ideal place for your IT needs. Even so, there are sometimes reasons to run something locally. For modern applications deployed on-premises there is Azure Stack: a subset of Azure available "in your basement" and consistent with how you operate Azure. A secure solution that can be deployed quickly. Azure Stack is for those who want to use Azure on their own premises, not for those who enjoy building a cloud. Do you like Azure, but need to run locally in some situations? Let's look at what Azure Stack is and when to use it.

The road to Azure Stack

The idea of offering a piece of Azure in a customer's or service provider's data center is not new. Microsoft wanted something like this all along, but the road to Azure Stack was not easy and the company learned a lot along the way. I think it is useful and instructive to keep this historical context in mind, because it explains some of Azure Stack's design decisions.

Attempt one: Azure in your data center (circa 2011)

Let's offer major customers Azure the way it runs in Microsoft's own data centers. This idea dates back to 2011 and had several problems. The biggest was that the smallest unit of deployment (scale unit) in Azure is 800 physical servers. That is quite an investment, and very few customers were ready for it. On top of that, the operational model required the same complexity and expertise as running Azure itself. Even when a customer wanted something like this, it was so complicated that Microsoft had to manage it for them completely. A few customers did adopt this model, but it was not very successful.

Attempt two: Azure-like Hyper-V in your data center with Azure Pack (circa 2013)

All right, installing the real Azure is extremely complex and expensive, so let's take a different route. How do we enable deployment at a smaller scale? What about using the methods of a classic on-premises product, namely Hyper-V and System Center, and adding a layer on top that simulates Azure? That is how Azure Pack, an extension of the on-premises products, came about. It was a noticeably more successful approach, but it also brought its own pitfalls. The moment you have an on-premises solution under the hood, it is quite difficult to guarantee consistent behavior. In Azure everything is software defined (compute, storage, and networking), but here you have a more traditional solution that significantly limits your ability to deliver Azure consistency. The main problem is not building a product that resembles a subset of Azure as a snapshot in time, but keeping pace with Azure's evolution. Azure changes very quickly and innovates at a fantastic rate. The result was that the gap between Azure and Azure Pack not only failed to shrink, it did not even stay the same. Quite the opposite: it kept growing, because Azure introduced a new portal and API (Azure Resource Manager), and that never made it into Azure Pack.

The successful solution: Azure Stack (2017)

It was therefore necessary to go back to the drawing board. That happened in 2015, which is also when the first rumors about Azure Stack started leaking to the press. The first attempts still built on the on-premises world of System Center and tried to put a layer of the new Azure (ARM) on top. It soon became clear, however, that both negative aspects of the previous efforts would repeat themselves. As with the first attempt, there was an incredible complexity of variants, the whole thing was unsustainably complicated, and the minimum installation size was approaching ten physical servers, which was simply too much overhead when the goal was to offer Azure Stack starting at 4 nodes. And as with Azure Pack, there were too many "translations" between Azure and the on-premises model, and it became obvious that it would again be impossible to keep up with the pace of Azure's development. A different approach was needed.

... Continue reading on Tomáš Kubica's blog ...

We’ve made updates to the Windows Admin Center SDK (Preview)!

$
0
0

We’re excited to announce the following updates to the SDK, currently in public preview:

  • Windows Admin Center CLI
  • Target an SDK version
  • Refreshed examples
  • Publishing extensions and plugins
  • What’s next?

Windows Admin Center CLI

We’ve released the Windows Admin Center CLI as part of the SDK!  Install it alongside your other global dependencies by running ‘npm install -g windows-admin-center-cli’.  Once installed, you can create a new empty tool or solution with a single command, with more features planned in upcoming releases.  Read more about using the CLI in our SDK documentation.
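As a rough sketch of the flow (the install command is the one documented above; the create invocation below, including its command name and flags, is only an assumption for illustration, so check the SDK documentation for the exact syntax):

    # Install the CLI alongside your other global npm dependencies
    npm install -g windows-admin-center-cli

    # Hypothetical scaffolding of a new, empty tool extension; verify the real command and flags in the SDK docs
    wac create --company "Contoso" --tool "My Sample Tool"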

Target an SDK version

Keeping your extension up to date with SDK changes and platform changes is easy.  We’re using NPM to tag the GA release, preview release, and latest versions of our platform dependencies.  Learn how to use these tags to update your development environment automatically, and to switch between versions to validate your extension’s integration with our latest features.
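For illustration, standard npm dist-tags make that switch a one-line operation. The package name below is an assumption used only as an example; substitute the platform dependency names listed in our documentation:

    # See which tags a package publishes
    npm dist-tag ls @microsoft/windows-admin-center-sdk

    # Install by tag rather than pinning a version, for example the GA or preview release
    npm install @microsoft/windows-admin-center-sdk@latest
    npm install @microsoft/windows-admin-center-sdk@next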

Refreshed examples

We’ve refreshed the tool, solution, and gateway plugin examples to leverage all the latest features in the Windows Admin Center platform, and the examples are now built on top of the Windows Admin Center CLI.  Check them out!

Publishing extensions and plugins

You have more options for publishing extensions and gateway plugins.  Now you can bundle a gateway plugin with an extension package.

What’s next?

We’re on the path to the GA release of the Windows Admin Center SDK!  A number of additional improvements and new content are planned between now and then; stay tuned for updates.

Troubleshooting Windows 10 Intune Policy Failures

$
0
0

Quick brain dump today. One of our customers recently reached out with an issue where a policy for Windows 10 wasn’t applying correctly, and we were returning a very unhelpful error message “-2016281112 Remediation failed”.

Unfortunately, the Remediation failed error message is all that is returned by the client when we issue the SET command on the OMA-URIs required to configure the target setting. We’re partnering with Windows to improve this experience, so watch this space. But for now, we have to settle for what we have.

So what are the next steps in troubleshooting this error?

Luckily, Windows has a pretty good diagnostics channel in everyone’s favorite Event Viewer (eventvwr).

So first, open up eventvwr.msc from Run.

[Screenshot: Event Viewer]

Next, browse to Application and Services Logs > Microsoft > Windows > DeviceManagement-Enterprise-Diagnostics-Provider. You’ll see two logs: Admin and Operational.

[Screenshot: the Admin and Operational logs under DeviceManagement-Enterprise-Diagnostics-Provider]

Firstly, take a look in the Admin log. You should see some high-level error messages that might point to an obvious issue. For example, here on my corp device I’ve got an error message for an app deployment via MDM.

[Screenshot: an MDM app deployment error in the Admin log]

This error obviously indicates an app is not being discovered as expected. I reckon if I gave this a couple more syncs, the app would reinstall and all would be well. If the error messages in the Admin log are still unhelpful, we have one other option, and that’s to enable Debug logging on the DeviceManagement-Enterprise-Diagnostics-Provider.

To do this, from the View menu in eventvwr, enable the Show Analytic and Debug Logs option. This will likely make your eventvwr window flash like crazy for a minute or two, but it’s enabling a bunch of extra logs and the UI doesn’t like it much.

[Screenshot: the Show Analytic and Debug Logs option in the View menu]

Once that option is enabled, you’ll see a Debug log under DeviceManagement-Enterprise-Diagnostics-Provider. Enable it by right-clicking the log and selecting Enable Log.

[Screenshot: enabling the Debug log]

Now reproduce your issue by running a Sync (Settings > Accounts > Access work or school > select your Azure AD connection > Info > Sync).

[Screenshot: the Sync option under Access work or school]

In the debug log, you should see a bunch of verbose debug information about the sync and settings being applied.

[Screenshot: verbose entries in the Debug log]

And here you can see the Wifi URI being applied successfully. If there were an issue with the Wifi configuration, I’d get a much more helpful reason why the URI failed. I’m not seeing the error from the MDM MSI anymore, so it must have fixed itself on subsequent check-ins.
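If you prefer scripting these checks over clicking through Event Viewer, a minimal PowerShell sketch along the following lines also works. The channel names are the full names of the logs shown above; confirm them on your device under Applications and Services Logs before relying on them.

    # Pull recent errors from the MDM Admin channel
    Get-WinEvent -LogName 'Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin' -MaxEvents 200 |
        Where-Object { $_.LevelDisplayName -eq 'Error' } |
        Select-Object TimeCreated, Id, Message

    # Enable the Debug channel (the scripted equivalent of Show Analytic and Debug Logs + Enable Log)
    wevtutil sl 'Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Debug' /e:true /q:true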

Hope you find this helpful!

Matt Shadbolt
Senior Program Manager for Microsoft Intune
