Good morning, everyone!
Today I will continue to look into Azure and what it can do for me.
Cloud Platform Release Announcements for August 22, 2018
Azure SQL Database | Managed instance general purpose availability coming soon
Azure SQL Database Managed Instance general purpose will become generally available on October 1, 2018. Managed Instance is a new deployment option providing near full compatibility with SQL Server, allowing you to migrate your on-premises workloads to the cloud with minimal changes. The general-purpose performance tier provides balanced and scalable compute and storage options that fit most business workloads. Create a managed instance today and use your SQL Server licenses and active Software Assurance to save up to 55 percent with Azure Hybrid Benefit for SQL Server.
Learn more about the Azure Hybrid Benefit for SQL Server. Learn how to create a managed instance today.
SKU changes and additions for the Azure Database Migration Service
The Basic 1 and 2 vCore SKUs for the Azure Database Migration Service have been renamed to General Purpose 1 and 2 vCore, and a General Purpose 4 vCore SKU has been added. In addition, a new Business Critical 4 vCore SKU is available for higher-end workloads.
For more information, visit the pricing page.
Support for SQL Server to Azure SQL Database online migrations
Now in preview, migrate SQL Server databases to Azure SQL Database with minimal downtime by using the Azure Database Migration Service. Read the Azure Database Migration Service documentation to perform migrations from SQL Server on-premises, or on virtual machines, to Azure SQL Database.
Support for MySQL to Azure Database for MySQL online migrations
Now in preview, migrate MySQL databases to Azure Database for MySQL with minimal downtime by using the Azure Database Migration Service. Read the Azure Database Migration Service documentation to perform migrations from MySQL on-premises, or on virtual machines, to Azure Database for MySQL.
Learn more about Azure Database Migration Service.
Virtual network service endpoints for Azure database services for MySQL and PostgreSQL now available
Now generally available, virtual network service endpoints for Azure database services for MySQL and PostgreSQL are accessible in all available regions. Virtual network service endpoints allow you to isolate connectivity to your logical server from only a given subnet or set of subnets within your virtual network. Traffic to Azure database services for MySQL and PostgreSQL from the virtual network service endpoints stays within the Azure backbone network. This direct route will be preferred over any specific routes that take internet traffic through virtual appliances or on-premises.
Learn more about Azure Database Service for MySQL.
Learn more about Azure Database Service for PostgreSQL.
Azure SQL Data Warehouse | Australia and France regions—GA
Azure SQL Data Warehouse Gen2 is now available in Australia and France
Azure SQL Data Warehouse is a fast, flexible, and secure analytics platform. The compute optimized Gen2 tier of Azure SQL Data Warehouse is now available in 23 regions. We recently made the service available in Australia Central, Australia Central 2, and France Central. The compute optimized Gen2 tier, using adaptive caching and instant data movement, delivers at least five times better performance.
Azure Database for MySQL | Read replica in preview
Azure Database for MySQL now supports continuous asynchronous replication of data from one Azure Database for MySQL server (‘master’) to up to five Azure Database for MySQL servers (‘replicas’) in the same region. This allows read-heavy workloads to scale beyond the capacity constraints of one Azure Database for MySQL server and be balanced across replica servers according to your preference. Replica servers are read-only except for writes replicated from data changes on the master. Stopping replication to a replica server causes it to become a standalone server that accepts reads and writes.
Power BI Desktop | GA
Our Power BI Desktop August 2018 release is filled with features that address some of your top requests, including export to PDF, which addresses the number one feature request on UserVoice: printing in Power BI Desktop. We also have an exciting update for data scientists and statisticians with our new Python integration.
For more information, see the Power BI blog.
Power BI Report Server | August 2018 update
The August 2018 update of Power BI Report Server contains several new features, including some highly anticipated items like report theming, conditional formatting improvements, and report page tooltips.
See the Power BI blog for more information.
Power BI service | GA
The latest update to the Power BI service includes several new and exciting capabilities including:
- Dataflows in Power BI (in preview), a new capability to help organizations unify data from disparate sources and prepare it for modeling. Dataflows are used to ingest, transform, integrate, and enrich big data by defining data source connections, ETL logic, refresh schedules, and more.
- Power BI Premium Multi-Geo (in preview), a new feature that helps multinational organizations address their data residency requirements.
- New Power BI Admin APIs, along with a .NET SDK and a PowerShell module, that enable administrators to discover artifacts in their Power BI tenant, as well as take administrative actions.
- The preview of a new workspace experience in Power BI designed to enable enterprises to easily manage Power BI content at scale using security groups, distribution lists, and Office 365 Groups.
For more information, see the Power BI blog.
Azure security and operations management | Configure just-in-time virtual machine access from the VM blade
Just-in-time virtual machine access can now be configured from the virtual machine blade (in preview) to make it even easier for you to reduce your exposure to threats.
Attackers commonly target open ports on internet-facing virtual machines, spanning from port scanning to brute force and DDoS attacks. One way to reduce exposure to an attack is to limit the amount of time that a port on your virtual machine is open. Ports only need to be open for a limited amount of time for you to perform management or maintenance tasks. Just-in-time virtual machine access helps you control the time that the ports on your virtual machines are open.
Azure security and operations management | Security Center confidence score on alerts
Security Center can help your team triage and prioritize alerts with a new capability called confidence score. The confidence score automatically investigates alerts by applying industry best practices, intelligent algorithms, and processes used by analysts to determine whether a threat is real, and provides you with meaningful insights. To learn more about this, visit our blog.
Visual Studio 2017 | Update
A new update to Visual Studio 2017 is now available. This means you can take advantage of everything listed below, and more, for free:
- Productivity: This release adds notable productivity and debugging enhancements. New additions such as multi-caret editing, contextual navigation, new keybinding profiles, new refactorings, and more enable you to be even more productive.
- Performance: Performance was again a big area of focus, and we've made significant improvements in areas such as Git branch checkout and switching, test performance, CPU usage tools, and more.
- .NET Development: .NET Core SDK 2.1 is included in this release, as well as .NET Framework 4.7.2. For mobile .NET development with Xamarin, we've made improvements to the Android designer, and this release also adds support for the Hyper-V compatible Google Android emulator.
- C++ Development: We've made improvements to the C++ development experience by adding an experimental, token-based preprocessor that is C++11 standards compliant, C++ Just My Code debugging, and background code analysis.
- F# Development: F# language version 4.5 is added in this release, as are improvements to IntelliSense performance, brace completion, bug fixes, and an experimental CodeLens implementation.
- JavaScript and TypeScript Development: This release of Visual Studio includes TypeScript 3.0 by default and has improved support for Vue.js and ESLint.
- Web tools: The all-new library manager helps you manage client-side libraries in your web projects. Additionally, we have added a new single project Docker container experience.
For more details, head over to the blog or the release notes.
Download the Visual Studio 2017 update today through the Visual Studio installer or VisualStudio.com.
SharePoint: Quick Edit – The user does not exist or is not unique
Consider the following scenario:
You have a SharePoint 2013 or 2016 web application that has both Windows and Trusted Provider / SAML authentication (ADFS, etc.) enabled.
You have a list with a "Person or Group"-type (aka: "people picker") column in it.
You edit the list using the "Quick Edit" / "edit this list" functionality to edit the list in a datasheet-style view.
You search for a user account in the Person or Group column and it resolves correctly:
You hit enter to go to the next row and you get an error on the first row that says: "The user does not exist or is not unique".
This seems very odd since you just, seconds before, successfully resolved that to a single user account.
Note: This may only occur for users not already added to the site collection. More on that later…
So why does this happen?
When you hit enter to go to the next row, the "Quick Edit" control tries to update the list item. It must re-resolve the user that you entered into the People Picker column.
It does so using the user's email address. It calls into all available claims providers, including Active Directory and your Trusted Provider, both of which return results.
That's why we get the "The user does not exist or is not unique" error. We did not get a single unique result.
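If you want to confirm how many claims providers are registered in your farm, a quick check from the SharePoint Management Shell might look like the following (a sketch only; run it on a farm server):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# List the claims providers SharePoint can call during people picker resolution
Get-SPClaimProvider | Select-Object DisplayName, IsEnabled, IsUsedByDefault

With both Active Directory and a trusted provider in the list, you can see why an email-based lookup returns more than one match.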
Solutions:
This one does not have an ideal solution at the moment.
It only occurs when there are multiple authentication providers enabled on the web application, and only when using the "Quick Edit" control, so if you avoid either of those, you won't see the problem. That's more like "problem avoidance" than a "solution", but I do have a workaround.
Workaround: For the user account that fails to resolve, add them to any SharePoint group or "person or group"-type list column in the site. It doesn't have to be the same list, you can add them anywhere in the entire site collection, and then "Quick Edit" should be able to properly resolve them.
Here I'm just using the normal new item form to add my "Test 1" user to my list.
Not only does that work to add the user to the "person or group" column, after doing so, I'm also able to successfully add the same user using "Quick Edit":
Why this works: When the user already exists in the site collection, the "Quick Edit" control still re-resolves the user and gets multiple matches, but it then uses those results to match to the User Information List for the site collection and resolve it to a single unique user account. So all you have to do is use the normal (non-quick edit) new item or edit item forms to add the user to any person or group column. After doing that once, you then should be able to add that same user using the "Quick Edit" view on the same list or any other list in the site collection.
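If you'd rather script that workaround than click through forms, a minimal sketch using New-SPUser is below. The site URL and account are placeholders, and the claims-encoded login shown is for Windows claims; a trusted provider identity would use your provider's claim prefix instead:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Add the account to the site collection's User Information List so that
# Quick Edit can resolve it to a single, unique user
New-SPUser -UserAlias "i:0#.w|CONTOSO\test1" -Web "http://sharepoint/sites/testsite"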
Technical details:
Note: While I've seen this happen in both SharePoint 2013 and 2016, all my testing was done on SharePoint 2016 at build 16.0.4639.1002.
In your SharePoint ULS logs, you'll see something like this:
08/14/2018 09:28:46.95 w3wp.exe (0x18E8) 0x2E70 SharePoint Foundation Resolve ax7nk Medium Found multiple matches for resolution. Web: 'd21a207d-0726-4779-8e8d-c0cb97011805', WebApp: 'null', Input: 'test1@contoso.com', PrincipalType: 'User, SecurityGroup, SharePointGroup', PrincipalSource: 'UserInfoList, MembershipProvider, RoleProvider', MatchCount: '3'. 3ab0849e-e089-e088-a10e-e4e9a4c5effc
08/14/2018 09:28:46.95 w3wp.exe (0x18E8) 0x2E70 SharePoint Foundation General 8kh7 High The user does not exist or is not unique. 3ab0849e-e089-e088-a10e-e4e9a4c5effc
08/14/2018 09:28:46.95 w3wp.exe (0x18E8) 0x2E70 SharePoint Foundation General art2g High ListItemUpdate ExpectedFailure: Microsoft.SharePoint.SPException: The user does not exist or is not unique. ---> System.Runtime.InteropServices.COMException: The user does not exist or is not unique.
at Microsoft.SharePoint.Library.SPRequestInternalClass.AddOrUpdateItem(String bstrUrl, String bstrListName, Boolean bAdd, Boolean bSystemUpdate, Boolean bPreserveItemVersion, Boolean bPreserveItemUIVersion, Boolean bUpdateNoVersion, Int32& plID, String& pbstrGuid, Guid pbstrNewDocId, Boolean bHasNewDocId, String bstrVersion, Object& pvarAttachmentNames, Object& pvarAttachmentContents, Object& pvarProperties, Boolean bCheckOut, Boolean bCheckin, Boolean bUnRestrictedUpdateInProgress, Boolean bMigration, Boolean bPublish, String bstrFileName, ISP2DSafeArrayWriter pListDataValidationCallback, ISP2DSafeArrayWriter pRestrictInsertCallback, ISP2DSafeArrayWriter pUniqueFieldCallback)
at Microsoft.SharePoint.Library.SPRequest.AddOrUpdateItem(String bstrUrl, String bstrListName, Boolean bAdd, Boolean bSystemUpdate, Boolean bPreserveItemVersion, Boolean bPreserveItemUIVersion, Boolean bUpdateNoVersion, Int32& plID, String& pbstrGuid, Guid pbstrNewDocId, Boolean bHasNewDocId, String bstrVersion, Object& pvarAttachmentNames, Object& pvarAttachmentContents, Object& pvarProperties, Boolean bCheckOut, Boolean bCheckin, Boolean bUnRestrictedUpdateInProgress, Boolean bMigration, Boolean bPublish, String bstrFileName, ISP2DSafeArrayWriter pListDataValidationCallback, ISP2DSafeArrayWriter pRestrictInsertCallback, ISP2DSafeArrayWriter pUniqueFieldCallback)
--- End of inner exception stack trace ---
at Microsoft.SharePoint.SPGlobal.HandleComException(COMException comEx)
at Microsoft.SharePoint.Library.SPRequest.AddOrUpdateItem(String bstrUrl, String bstrListName, Boolean bAdd, Boolean bSystemUpdate, Boolean bPreserveItemVersion, Boolean bPreserveItemUIVersion, Boolean bUpdateNoVersion, Int32& plID, String& pbstrGuid, Guid pbstrNewDocId, Boolean bHasNewDocId, String bstrVersion, Object& pvarAttachmentNames, Object& pvarAttachmentContents, Object& pvarProperties, Boolean bCheckOut, Boolean bCheckin, Boolean bUnRestrictedUpdateInProgress, Boolean bMigration, Boolean bPublish, String bstrFileName, ISP2DSafeArrayWriter pListDataValidationCallback, ISP2DSafeArrayWriter pRestrictInsertCallback, ISP2DSafeArrayWriter pUniqueFieldCallback)
at Microsoft.SharePoint.SPListItem.AddOrUpdateItem(Boolean bAdd, Boolean bSystem, Boolean bPreserveItemVersion, Boolean bNoVersion, Boolean bMigration, Boolean bPublish, Boolean bCheckOut, Boolean bCheckin, Guid newGuidOnAdd, Int32& ulID, Object& objAttachmentNames, Object& objAttachmentContents, Boolean suppressAfterEvents, String filename, Boolean bPreserveItemUIVersion)
at Microsoft.SharePoint.SPListItem.UpdateInternal(Boolean bSystem, Boolean bPreserveItemVersion, Guid newGuidOnAdd, Boolean bMigration, Boolean bPublish, Boolean bNoVersion, Boolean bCheckOut, Boolean bCheckin, Boolean suppressAfterEvents, String filename, Boolean bPreserveItemUIVersion)
3ab0849e-e089-e088-a10e-e4e9a4c5effc
Top 10 Networking Features in Windows Server 2019: #5 Network Performance Improvements for Virtual Workloads
This blog is part of a series for the Top 10 Networking Features in Windows Server 2019! -- Click HERE to see the other blogs in this series. Look for the Try it out sections, then give us some feedback in the comments! Don't forget to tune in next week for the next feature in our Top 10 list!
The Software Defined Data-Center (SDDC) spans technologies like Hyper-V, Storage Spaces Direct (S2D), and Software Defined Networking. Whether you have compute workloads like File, SQL, and VDI, you run an S2D cluster, or perhaps you're using your SDN environment to bring hybrid cloud to a reality, no doubt we crave network performance – we have a “need for speed” and no matter how much you have you can always use more.
In Windows Server 2016, we demonstrated 40 Gbps into a VM with Virtual Machine Multi-Queue (VMMQ). However, that high-speed network throughput came at the additional cost of complex planning, baselining, tuning, and monitoring to alleviate CPU overhead from network processing. Otherwise, your users would let you know very quickly when the expected performance level of your solution degraded. In Windows Server 2019, virtual workloads will reach and maintain 40 Gbps while lowering CPU utilization and eliminating the painful configuration and tuning cost previously imposed on you, the IT pro.
To do this, we’ve implemented two new features:
- Receive Segment Coalescing in the vSwitch
- Dynamic Virtual Machine Multi-Queue (d.VMMQ)
These features maximize the network throughput to virtual machines without requiring you to constantly tune or over-provision your host. This lowers the operations and maintenance cost while increasing the available density of your hosts. The efforts outlined here cover our progress in accelerating the host and guest; in a future article, my colleague Harini Ramakrishnan will discuss our efforts to accelerate the app.
Receive Segment Coalescing in the vSwitch
Number 1 on our playlist is an “oldie but goodie.” Windows Server 2019 brings a remix for Receive Segment Coalescing (RSC) leading to more efficient host processing and throughput gains for virtual workloads. As the name implies, this feature benefits any traffic running through the virtual switch including traditional Hyper-V compute workloads, some Storage Spaces Direct patterns, or Software Defined Networking implementations (for example, see Anirban's post last week regarding GRE gateway improvements #6 - High Performance SDN Gateways).
Prior to this release, RSC was a hardware offload (in the NIC). Unfortunately, this optimization was disabled the moment you attached a virtual switch. As a result, virtual workloads were not able to take advantage of this feature. In Windows Server 2019, RSC (in the vSwitch) works with virtual workloads and is enabled by default! No action required on your part!
Here’s a quick throughput performance example from some of our early testing. In the task manager window on the left, you see a virtual NIC on top of a 40 Gbps physical NIC without RSC in the vSwitch. As you can see, the system requires an average of 28% CPU utilization to process 23.9 Gbps.
In the task manager window on the right, the same virtual NIC is now benefiting from RSC in the vSwitch. The CPU processing has decreased to 23% despite the receive throughput increasing to 37.9 Gbps!
Here's the performance summary:
| | Average CPU Utilization | Average Throughput |
| --- | --- | --- |
| Without RSC in the vSwitch | 28% | 23.9 Gbps |
| With RSC in the vSwitch | 23% | 37.9 Gbps |
| Totals | 17.86% decrease in CPU | 58.58% increase in throughput |
Under the Hood
RSC in the vSwitch combines TCP segments that are part of the same TCP stream into larger segments destined for a Hyper-V guest. Processing coalesced (fewer) packets is far more efficient than the processing required for segmented packets. This leads to large performance gains for Hyper-V virtual machines.
Performance gains are seen in both high and low throughput environments; high-throughput environments benefit from more efficient CPU processing (lower CPU utilization on the host) while low throughput environments may even see throughput gains in addition to the processing efficiencies. Take a look at RSC in action:
Get Started!
If you’re a Windows Server 2019 Insider and using Hyper-V, Storage Spaces Direct, Software Defined Networking (including the High Performance Gateways Anirban talked about last week!), you’re likely already consuming this feature! This feature is enabled by default! But of course, if you’d like to compare the results yourself, check out our validation guide below.
Ready to give it a shot!? Download the latest Insider build and Try it out!
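If you want to verify or toggle the feature while you compare results, the check is a couple of PowerShell one-liners. This is a sketch based on the Insider documentation; "External" is a placeholder vSwitch name:

# Confirm RSC is enabled on the vSwitch
Get-VMSwitch | Select-Object Name, SoftwareRscEnabled
# Disable and re-enable it for an A/B comparison on a test host
Set-VMSwitch -Name "External" -EnableSoftwareRsc $false
Set-VMSwitch -Name "External" -EnableSoftwareRsc $true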
Dynamic Virtual Machine Multi-Queue (d.VMMQ)
With the advent of 10 Gbps NICs (and higher), the processing required for the network traffic alone exceeded what could be accomplished by a single CPU. Virtual Machine Queue and its successor Virtual Machine Multi-Queue allowed traffic destined for a vmNIC to be processed by one or more different processor cores.
Unfortunately, this required complex planning, baselining, tuning, and monitoring; often more effort than the typical IT pro intended to expend. Even then, problems arose. If you introduced a heterogeneous hardware footprint in your datacenter, the optimal configuration could vary, and if retuning was needed, virtual machines might not be able to maintain a consistent level of performance.
To combat these problems, Windows Server 2019 dynamically tunes the host for maximum CPU efficiency and consistent virtual machine throughput. D.VMMQ requires no setup once a supporting driver is in place, and it will autotune the existing workload to ensure optimal throughput is maintained for each virtual machine. This reduces the OPEX cost imposed by previous versions of this technology.
How it Works
There are two key outcomes from this technology:
- When network throughput is low: The system coalesces traffic received on a vmNIC to as few CPUs as possible
Here’s a VM receiving around 5.3 Gbps.
The system can coalesce all packets onto one CPU for processing efficiency.
- When network throughput is high: The system automatically expands traffic received to as many CPUs as needed
The VM's traffic has grown to about 21 Gbps, which is more than a single CPU can handle.
The system expands the traffic across additional CPUs as necessary (and available) – in this case, five – to maintain the demand for traffic.
Here's a quick video on Dynamic VMMQ in a low-throughput scenario. You'll see the dynamic scheduling algorithm coalesce all the network throughput onto one core. Then, once network traffic has completed, the queues will return to their "ready" state allowing them to expand very quickly if a burst of traffic occurs.
Get Started!
This feature requires a driver update for your NICs to a Dynamic VMMQ capable driver (referred to by some vendors as RSSv2). Drivers for Dynamic VMMQ will not be included inbox as this is an advanced feature, so please contact your IHV or OEM for the latest drivers.
If you are purchasing new hardware, you should pay special attention to the available NICs and verify that they have received the SDDC Premium logo through our certification program (click on a specific NIC and look for SDDC Premium). If not, Dynamic VMMQ is not supported on those devices and you will default to the traditional static mechanism.
Ready to give it a shot!? Download the latest Insider build and Try it out!
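To watch the dynamic behavior yourself, you can observe how receive queues map to processors as load changes. A hedged sketch; the adapter name is a placeholder for your physical NIC:

# Show which processor each VMQ/VMMQ queue is currently affinitized to
Get-NetAdapterVmqQueue -Name "Ethernet 2" |
    Format-Table QueueID, ProcessorGroup, ProcessorNumber, VmFriendlyName

Run it under low and high traffic and you should see the queue-to-processor spread coalesce and expand as described above.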
Summary
Regardless of workload, your virtual machines need the highest possible throughput. Not only can Windows Server 2019 reach outstanding network performance, it eliminates the costly planning, baselining, and tuning required by previous Windows versions. You may still get a late-night call to troubleshoot a poorly performing virtual machine, but it won’t be because of the network throughput!
Thanks for reading and see you at Ignite!
Dan “Auto-tuning” Cuomo
SharePoint – PowerShell Script to Remove Users from Site Collection
Site collections often retain users that have been disabled or deleted in Active Directory. SharePoint does not have anything out of the box that will clean these up. The reason to clean these users up is to avoid having them show up as results in the people picker when they should not.
I have two scripts that will clean these users up. This is something you will want to test out first, since this is a delete function (leave $RemoveUsers = $false while testing).
WARNING: This will remove users from site collections. This will remove their alerts, personal views, unique permissions, etc. The only way to revert is to restore the site collection or content database from backup.
Just a bit of information about your site…
This script does require some modification to tailor it to your environment. The service account is needed even if it's the local domain; it was designed this way so you can use this script for sites that have users from different domains.
As stated above, $RemoveUsers is the switch to make this to script to remove the users that were not found in Active Directory. By default, this is set to $false.
After everything is set, run "GetUsers".
CSV is created.
The CSV will be created if any users in the site collections are no longer in Active Directory or are disabled. Review this before feeding it through the second script. There might be some false positives, or you may just want to remove a few entries.
This script is more straightforward, since we are taking that CSV and removing each user from the URL specified in the CSV. I would recommend backing up the content database(s) that could be affected by this script.
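The downloadable scripts below are the authoritative versions, but the core of the second script boils down to a loop like this sketch (the CSV column names here are illustrative; match them to what the first script produces):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Remove each reviewed login from its site collection
Import-Csv .\UsersToRemove.csv | ForEach-Object {
    Remove-SPUser -Identity $_.LoginName -Web $_.SiteUrl -Confirm:$false
}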
DeletedADUsersFromsites1.1.ps1 can be downloaded here
The second script can be downloaded here.
Microsoft, Amazon, Google, IBM, Oracle, and Salesforce issue a joint statement on healthcare interoperability
By: Josh Mandel, Chief Architect, Microsoft Healthcare.

Photo: ITI
Pictured, left to right: Dean Garfield (ITI) – Alec Chalmers (Amazon) – Mark Dudman (IBM) – Peter Lee (Microsoft) – Greg Moore (Google)
Interoperability is a set of overlapping technical and policy challenges, from data access, to common data models, to information exchange, to workflow integration. These challenges can at times present a barrier to healthcare innovation. Microsoft has been engaged for many years in developing best practices for interoperability across industries. Today, as leaders of the health IT community gather at the CMS Blue Button 2.0 Developer Conference in Washington, DC, we are pleased to announce that Microsoft has joined Amazon, Google, IBM, Oracle, and Salesforce in support of healthcare interoperability with the following statement:
We are jointly committed to removing barriers for the adoption of technologies for healthcare interoperability, particularly those that are enabled through the cloud and AI. We share the common quest to unlock the potential in healthcare data, to deliver better outcomes at lower costs.
In engaging in this dialogue, we start from these foundational assumptions:
- The frictionless exchange of healthcare data, with appropriate permissions and controls, will lead to better patient care, higher user satisfaction, and lower costs across the entire health ecosystem.
- To be successful, healthcare data interoperability must account for the needs of all global stakeholders, empowering patients, healthcare providers, payers, app developers, device and drug manufacturers, employers, researchers, citizen scientists, and many others who will develop, test, refine, and scale the deployment of new tools and services.
- Open standards, open specifications, and open source tools are essential to facilitating frictionless data exchange. This requires a variety of technical strategies and ongoing collaboration for the industry to converge and adopt emerging standards for healthcare data interoperability, such as HL7 FHIR and the Argonaut Project.
- We understand that achieving frictionless health data exchange is an ongoing process, and we commit to actively engaging among open source and open standards communities for the development of healthcare standards and conformity assessments, fostering the agility needed to keep up with the accelerated pace of innovation.
Together, we believe that a robust industry dialogue about healthcare interoperability needs will advance this cause, and we are therefore pleased to issue this joint statement.
Although I'm new here at Microsoft, I've spent the past ten years focused on lowering barriers to innovation in healthcare, working closely with the standards development and open source communities. I'm pleased that my first post here at Microsoft aligns so well with my charter of collaborating with the healthcare community on an open cloud architecture.
Electronic Health Records (EHRs) are approaching universal adoption in United States hospitals and ambulatory practices, thanks in part to the Centers for Medicare and Medicaid Services (CMS) EHR Incentive Programs. The 21st Century Cures Act will make digital health data more accessible with its call for open APIs.
In the context of US healthcare, many health record systems have focused on consistently representing a key set of data elements defined by the Meaningful Use Common Clinical Data Set. As support for this common data set grows, it becomes easier to connect new tools into clinical workflows, analyze clinical histories, collect new data, and coordinate care. Many of these technical capabilities have long been available within small, tightly integrated health systems, but building them out has required complex custom engineering along with ongoing maintenance and support. Moving toward an open architecture makes adoption faster, easier, and cheaper.
As a medical student, I practiced what I called "clandestine interoperability": I connected to services wherever I could and cobbled together the data platform I wanted. It all worked, but it was a nightmare to maintain. Later, when I joined the research faculty at Boston Children's Hospital and began working on the SMART Health IT platform, we wanted to build a robust platform to insulate app developers from the underlying details of an EHR system, so we began designing new open APIs from scratch and bolted them onto the underlying vendor system.
This work caught the attention of Health Level Seven (HL7), the healthcare standards development organization responsible for several generations of health data standards. When HL7 convened a "Fresh Look" task force to invite perspectives on new API-based approaches to data exchange, I was honored to participate and share my SMART experience.
That task force (among many influences) ultimately inspired the creation of Fast Healthcare Interoperability Resources (FHIR), an agile, open approach to healthcare standards development. I got involved with the FHIR community early on, writing the first open source FHIR server. Five years later, it has been motivating to see how many vendors, Microsoft included, support the emerging FHIR standard.
I joined Microsoft because it is among the largest contributors to open standards and open source. We actively contribute innovative technology to standards efforts across many industries, and we implement thousands of standards in our products, formulated by a wide diversity of standards bodies. The past year has seen deep commitments to consumer data portability across clouds through the Data Transfer Project, an interoperable ecosystem for AI models through the Open Neural Network Exchange (ONNX), and the world's leading software development platform through the acquisition of GitHub.
At Microsoft, we have taken a collaborative approach to building open tools that help the healthcare community, including cloud-hosted APIs and services for AI and machine learning. Microsoft understands that true healthcare interoperability requires end-to-end solutions, rather than standalone pieces that may not work well together.
Recently, we added FHIR support to the Dynamics Business Application Platform through the Dynamics 365 Healthcare Accelerator, and we developed the open source Azure Security and Compliance Blueprint for HIPAA/HITRUST Health Data and AI, enabling FHIR on Azure. These solutions are the result of Microsoft teams working closely with our partners to ensure that all the components of our product portfolio work together to address the unique needs of our healthcare scenarios.
Transforming healthcare means working together with organizations across the ecosystem. The joint interoperability statement we have issued reflects feedback from our healthcare customers and partners, and together we will build the technical foundations to support value-based care. We expect the assumptions in our joint statement to continue to evolve and be refined through this open dialogue with the industry.
Please join the conversation. You can find me on Twitter as @JoshCMandel. If you want to participate, comment, or learn more about FHIR, join the FHIR community chat at https://chat.fhir.org.
Office 365: A different approach to handling Office 365 group mail flow…
In Exchange Online, an Office 365 Group (Unified Group / Modern Group) allows for a new level of team collaboration. Office 365 Groups are created directly in Exchange Online and Azure Active Directory, which makes them cloud-only objects. The membership and attributes of these objects are maintained directly in Office 365.
There are legitimate scenarios where on-premises applications must be able to send email to Office 365 Groups. By default, an Office 365 Group does not write back to on-premises Active Directory and therefore is not a valid recipient for Exchange on-premises. To compensate for this, Azure Active Directory Connect has a group writeback feature. The group writeback feature allows Office 365 Groups to be represented in the on-premises Active Directory. The group membership cannot be managed using the on-premises Active Directory – any changes are overwritten by Azure Active Directory Connect.
When groups are written back to the on-premises Active Directory, they are not mail-enabled by default. Administrators must execute the Update-Recipient command in order to have the objects represented in the on-premises global address list and for full transport functionality. In some cases this can be an interesting task. There is another option to establish mail flow and have the object appear in the on-premises global address list.
Utilizing a mail contact…
The process starts by provisioning the Office 365 Group in either Exchange Online or in Azure Active Directory. When the group is provisioned the mail enabled attributes are created – of particular interest to us are the email addresses stamped on the group.
The group must be updated with an email address that includes domain.mail.onmicrosoft.com.
PS C:> Set-UnifiedGroup Officers -EmailAddresses @{add="officers@domain.mail.onmicrosoft.com"}
With the new email address present we can gather the attributes that we will use in future commands into a variable.
PS C:> $group=Get-UnifiedGroup -Identity Officers
PS C:> $group.EmailAddresses
smtp:officers@fortmillrescuesquad.mail.onmicrosoft.com
SMTP:Officers@fortmillems.org
smtp:Officers@fortmillrescue.com
SPO:SPO_8bd244fb-60f3-4710-a1ef-40bc7ef584ff@SPO_eefdeca8-5850-4ca5-a160-0716f2d8496e
smtp:Officers@FortMillRescueSquad.onmicrosoft.com
PS C:> $group.DisplayName
Officers
PS C:> $group.name
Officers_7ccca570b9
PS C:> $group.alias
Officers
As with most mail enabled objects in Office 365 this group has a primary email address at the vanity domain @domain.org and a secondary email address at the tenant domain @domain.mail.onmicrosoft.com. I have also noted the other attributes that we will utilize later.
The next step is to locate or create an organizational unit in the on-premises Active Directory to store the on-premises objects we will associate with these groups. An important configuration step here is that the OU must NOT be included in the objects that are replicated by Azure Active Directory Connect to Azure Active Directory. This is configured through the Azure Active Directory Connect configuration wizard.
The last step of the process is to provision mail enabled contacts within the non-sync OU. The mail enabled contacts will:
- Have a primary email address matching the primary email address of the mail enabled group in Office 365.
- Have an external email address matching the tenant specific email address.
- Have any number of secondary addresses, as necessary.
- Note – the primary and secondary email addresses may be defined automatically by the on-premises recipient policies and match the Office 365 Group depending on the configuration of the on-premises recipient policies.
In this example I will utilize PowerShell to create the mail-enabled contact, using the values previously gathered above.
[PS] C:>New-MailContact -DisplayName "Officers" -Name "Officers_7ccca570b9" -ExternalEmailAddress "officers@domain.mail.onmicrosoft.com" -Alias "Officers" -PrimarySmtpAddress "officers@domain.org" -OrganizationalUnit "domain.local/TopLevelOU/Contacts/Office365-NoSync"
Name Alias RecipientType
---- ----- -------------
Officers_7ccca570b9 Officers MailContact
The contact creation can be verified with Get-MailContact, reviewing the individual attributes that were set.
[PS] C:>$contact=get-mailContact Officers_7ccca570b9
[PS] C:>$contact.displayName
Officers
[PS] C:>$contact.name
Officers_7ccca570b9
[PS] C:>$contact.ExternalEmailAddress
SmtpAddress : officers@domain.mail.onmicrosoft.com
AddressString : officers@domain.mail.onmicrosoft.com
ProxyAddressString : SMTP:officers@domain.mail.onmicrosoft.com
Prefix : SMTP
IsPrimaryAddress : True
PrefixString : SMTP
[PS] C:>$contact.Alias
Officers
[PS] C:>$contact.PrimarySmtpAddress
Length : 25
Local : officers
Domain : domain.org
Address : officers@domain.org
IsUTF8 : False
IsValidAddress : True
[PS] C:>$contact.OrganizationalUnit
domain.local/TopLevelOU/Groups/Office365-NoSync
The mail contact will appear in the on premises global address list.
When the contact is selected as a mail target, the email will be received at address@domain.org and will forward to address@domain.mail.onmicrosoft.com. Here is an example from the on-premises message tracking logs.
[PS] C:>Get-MessageTrackingLog -MessageId c652db0a537848d4bf43c6d435bbb79e@domain.org
Timestamp EventId Source Sender Recipients MessageSubject
--------- ------- ------ ------ ---------- --------------
8/21/2018 8:23:03 PM HAREDIRECTFAIL SMTP Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:23:03 PM RECEIVE SMTP Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:23:17 PM RESOLVE ROUTING Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:23:22 PM AGENTINFO AGENT Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:23:25 PM TRANSFER ROUTING Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:23:29 PM SENDEXTERNAL SMTP Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:22:58 PM RECEIVE STOREDRIVER Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
8/21/2018 8:23:06 PM SUBMIT STOREDRIVER Administrator@OOOO... {officers@OOOOOOOO... Test New Contact
The Exchange Online message tracking logs show the inbound transmission to Office 365.
PS C:> Get-MessageTrace -RecipientAddress officers@domain.mail.onmicrosoft.com
Received Sender Address Recipient Address Subject
-------- -------------- ----------------- -------
8/21/2018 8:23:28 PM Administrator@domain.org officers@domain.mail.onmicrosoft.com Test New Contact
When the full message headers are pulled from the message delivered to the group we can additionally validate that the authentication source is internal. The messages are trusted.
X-MS-Exchange-Organization-AuthAs: Internal
When using this process, the on-premises contact shows as a mail-enabled contact. It will not show as a distribution group, which may impact some people's ability to locate it within the global address list – for example, someone who selects all lists in the address book drop-down. If you prefer, you could utilize a distribution list with a single member to achieve the same results.
Utilizing a distribution group…
The same prerequisites apply. We need to add the additional email address to the unified group, capture the values for the group, and have a prepared organizational unit that is not synchronized. The process only deviates in how we create the relationship between the on-premises group and the Office 365 Group.
The steps of this process include provisioning a distribution group and a mail-enabled contact within the non-sync OU.
The distribution list will:
- Have a primary email address matching the primary email address of the mail enabled group in Office 365.
- Contain a mail-enabled contact with an external email address matching the domain.mail.onmicrosoft.com address of the Office 365 Group.
- Have any number of secondary addresses, as necessary.
- Note – the primary and secondary email addresses may be defined automatically by the on-premises recipient policies and match the Office 365 Group depending on the configuration of the on-premises recipient policies.
The mail enabled contact will:
- Have an external email address matching the domain.mail.onmicrosoft.com address of the Office 365 Group.
- Be a recipient hidden from the global address list.
In this example I will utilize PowerShell to create the mail-enabled group, using the values previously gathered above.
[PS] C:>New-DistributionGroup -DisplayName "Officers" -Name "Officers_7cca570b9" -Alias "Officers" -PrimarySmtpAddress "officers@domain.org" -OrganizationalUnit "domain.local/TopLevelOU/Groups/Office365-NoSync"
Name DisplayName GroupType PrimarySmtpAddress
---- ----------- --------- ------------------
Officers_7cca570b9 Officers Universal officers@domain.org
Using PowerShell, the mail-enabled contact will be created. To avoid any naming collisions, "-Contact" was appended to each of the naming parameter values.
[PS] C:>New-MailContact -DisplayName "Officers-Contact" -Name "Officers_7ccca570b9-Contact" -ExternalEmailAddress "officers@domain.mail.onmicrosoft.com" -Alias "Officers-Contact" -PrimarySmtpAddress "officers-contact@domain.org" –OrganizatisonalUnit "domain.local/TopLevelOU/Contacts/Office365-NoSync"
Name Alias RecipientType
---- ----- -------------
Officers_7ccca570b9-Contact Officers-Contact MailContact
The mail contact can then be hidden from the address list, preventing users from locating it rather than the group we created.
[PS] C:>Set-MailContact Officers_7cca570b9-contact -HiddenFromAddressListsEnabled:$TRUE
With the mail contact provisioned it can be added to the distribution group as a member.
[PS] C:>Add-DistributionGroupMember -Identity Officers_7cca570b9 -Member Officers_7cca570b9-contact
[PS] C:>Get-DistributionGroupMember -Identity Officers_7cca570b9
Name RecipientType
---- -------------
Officers_7cca570b9-contact MailContact
Here is an example of the message tracking log on-premises. The distribution list expansion and redirect to the external recipient can be reviewed.
[PS] C:>Get-MessageTrackingLog -MessageId "4d19cf9cae67475cb4b41e011f28031d@domain.org"
Timestamp EventId Source Sender Recipients MessageSubject
--------- ------- ------ ------ ---------- --------------
8/22/2018 2:08:05 PM HAREDIRECTFAIL SMTP Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:05 PM RECEIVE SMTP Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:05 PM EXPAND ROUTING Administrator@OOOO... {officers-contact@... Distribution Grou...
8/22/2018 2:08:05 PM RESOLVE ROUTING Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:06 PM AGENTINFO AGENT Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:06 PM TRANSFER ROUTING Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:06 PM DROP ROUTING Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:06 PM TRANSFER ROUTING Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:07 PM SENDEXTERNAL SMTP Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:05 PM RECEIVE STOREDRIVER Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
8/22/2018 2:08:05 PM SUBMIT STOREDRIVER Administrator@OOOO... {officers@OOOOOOOO... Distribution Grou...
A message trace in Office 365 confirms receipt of the message to the Office 365 Group.
PS C:> Get-MessageTrace -RecipientAddress officers@fortmillrescuesquad.mail.onmicrosoft.com
Received Sender Address Recipient Address Subject
-------- -------------- ----------------- -------
8/22/2018 2:08:06 PM Administrator@domain.org officers@domain.mail.onmicrosoft.com Distribution Gr...
When the full message headers are pulled from the message delivered to the group we can additionally validate that the authentication source is internal. The messages are trusted.
X-MS-Exchange-Organization-AuthAs: Internal
Utilizing this method, the on-premises object appears as a mail-enabled distribution group with a single member. It will appear in the global address list as a group object, and you can apply many of the same group properties – such as moderation and authentication – should they be required.
Senders and authentication…
The steps provided above yield email that arrives in Office 365 Groups as internal, meaning the message is considered trusted and authenticated. In the testing performed above, an on-premises mailbox was utilized as the source of the messages. In many cases, administrators are considering this approach (or group writeback) to allow the on-premises organization to receive internet email as the primary MX and route it to Office 365 Groups, or to allow internal applications to relay to Office 365 Groups.
When messages do not originate in the context of an authenticated user, the connector configuration is utilized to determine the security of a message. When the MX record points to an on-premises server, it should be directed to a connector that has anonymous rights only. This connector will not elevate received messages to an internal status. In this test I utilized telnet to send an email through a connector where only the anonymous rights are present. The header shows an authentication status of Anonymous.
X-MS-Exchange-Organization-AuthAs: Anonymous
When trusted internal applications require the ability to send securely to Office 365 Groups, a connector can be leveraged that utilizes the externally secured permissions. I have written a document that some may find helpful: https://blogs.technet.microsoft.com/timmcmic/2018/04/22/office-365-trusting-application-emails-sent-through-internal-relay/. In this test I utilized an MFP to send an email through a connector where the externally secured rights were applied and restrictions were in place based on source IP address. The header shows an authentication status of Internal.
X-MS-Exchange-Organization-AuthAs: Internal
This information can be useful in understanding how rights are applied to distribution groups and the security of inbound mail flow.
Support Tip: Optimizing bandwidth for Microsoft Store app updates on Windows 10 devices
A post was published today on a sister site with some tips to help optimize bandwidth for Microsoft Store app updates. It shows how Intune can be used when a large number of updates to provisioned or installed Windows apps are delivered to Windows 10 devices. This is especially useful when you have devices in your environment that have limited network bandwidth. You can read the complete blog post here: Optimizing bandwidth for Microsoft Store app updates on Windows 10 devices.
OSD Video Tutorial: Part 23 – Nested Task Sequences
This session is part twenty-three of an ongoing series focusing on Operating System Deployment in Configuration Manager. We are posting these a little out of the order in which Steven originally recorded them. Don't worry, the remaining sessions, starting with fifteen, will continue in our Advanced OSD section.
In this tutorial, Steven explains the nested task sequence capabilities, which were first added in the Configuration Manager current branch 1710 release. He details how the feature works, what to expect, and demonstrates a few scenarios.
The video linked below was prepared by Steven Rachui, a Principal Premier Field Engineer focused on manageability technologies.
This is the last tutorial in the OSD – A Deeper Dive sequence. Join us for the Advanced OSD section next, where Steven starts with an overview of advanced concepts.
Posts in OSD - A Deeper Dive
- Part VI - Task Sequence variables - a deeper dive
- Part X - USMT and OSD
- Part XI - MDT Integration
- Part XII - OSD and the ADK
- Part XIII - to be Known or to be Unknown - that is the question
- Part XIV – Pre-staged Media
- Part XXIII - Nested Task Sequences (this post)
Known Issue: Updates are not installed for iOS devices 11.4 and higher
We’ve noticed an issue with updates being installed in some iOS devices. This has been seen in devices with iOS 11.4 and higher that are in a locked state and have a passcode set. Admins may see an error in the Intune admin console when they schedule software updates for these devices and updates will not be installed.
As a workaround, end users can manually install updates on their devices from Settings > General > Software Update.
This is an iOS issue and Apple is aware of it. We’ll keep this blog post updated as we know more.
Assigned Access with Intune and AssignedAccess CSP – Part I
Hello all,
Back again, much faster this time around! This two-part blog picks up where my last blog left off, regarding creating AppLocker policies with the AppLocker CSP and Intune. If you missed that post, check it out here:
Summary
In my previous blog post, I mentioned my thoughts on the whitelisting approach to blocking application activity on devices. I mentioned that when an administrator desires to take a whitelisting approach, they want to lock down the application activity on their devices. I also mentioned that leveraging a whitelist approach will, more than likely, result in a ton of Allow rules in your AppLocker policy. And finally, I mentioned that for those who want to take a whitelisting approach, why not consider leveraging an Assigned Access profile? This two-part blog is going to explain what Assigned Access is, the use cases for it, and how to create an Assigned Access profile in Intune. In this part, we'll discuss the background of our strategy, examine the docs, and build our XML. In part II of the blog, we'll configure our policy in Intune and validate our results.
NOTE: While you can certainly deploy an Assigned Access profile to an already provisioned device that is enrolled in Intune (for the policy to apply, the user must log out and back into the device), this blog will assume you are provisioning devices with Windows 10 Autopilot. We will not cover configuring Windows 10 Autopilot in this blog. If you have not already setup Windows 10 Autopilot, below is a bulleted list (in order) to get you started on configuring Windows 10 Autopilot before jumping in on this blog:
- Gather device hardware IDs - https://blogs.technet.microsoft.com/mniehaus/2017/12/12/gathering-windows-autopilot-hardware-details-from-existing-machines/
- Prepare the Intune tenant for Windows 10 Autopilot - https://docs.microsoft.com/en-us/intune/enrollment-autopilot
- Configure an Enrollment Status Page in Intune - https://github.com/MicrosoftDocs/IntuneDocs/blob/master/intune/windows-enrollment-status.md
Pre-Requisites
- Windows 10 Autopilot is configured in your Intune tenant
- Devices that are at least Windows 10 1709
- Devices that are physical in nature. NOTE: It is possible to test with a VM, but it works best with a physical device. More on this later
- Azure AD group that contains users who will log into devices that will leverage the Assigned Access policy
What is Assigned Access and Why Use It?
With Assigned Access, an administrator can limit an existing user account to use only one or multiple installed Windows apps that they choose. This can be useful to set up single-function devices, such as restaurant menus or displays at trade shows (kiosks). Or, it can be useful for provisioning highly restricted devices. Use cases for this would include provisioning of contractor devices, multi-user devices, training room devices, or for admins who want to restrict their user base to certain activity. Our example will follow that of an admin who wants to restrict a subset of different users on their devices.
Now, if you are aware of the feature set in Intune, you may notice that I'm already describing a feature that exists in Intune. And you would be correct, to an extent. At the time of writing this blog, Intune has a feature called kiosk mode. However, it is in preview. This means you can use the feature, but you should not rely on it wholesale, as results can be inconsistent until the feature is generally available. If you have already tested this feature thoroughly, you know this to be the case. I have tested the feature and it's great! I can't wait to see it fully ready and how it will evolve over time. But it's not ready for primetime just yet.

Unfortunately, kiosk mode does not work with an Enrollment Status Page, which is also in preview. The importance of the Enrollment Status Page is that it keeps the device in OOBE until all the Intune policies and applications are applied/deployed to the device, so that when the user logs on, the device is ready to go and there is no need to wait for policy to come down. Specific to Assigned Access policy, we want the policy applied from the moment the user logs in, rather than waiting on the policy to come down, be applied, and requiring the user to log out and back in for the Assigned Access profile to take effect. Also, kiosk mode (preview) is limited to Windows 10 1803. If your devices are not on Windows 10 1803, the policy will not be applicable, and Intune will let you know that in the policy configuration status node.
Lastly, I'll be the first to admit that everything I just said and everything you read below becomes a moot point over time, as Windows 10 1803 becomes standard across organizations and kiosk mode becomes a full-fledged feature in Intune. In the meantime, all is not lost! We can leverage the AssignedAccess CSP for devices that are at least Windows 10 1709 to deploy an Assigned Access profile. Buckle up, this ride is bumpy and a little complex, but we'll get through it!
The Plan
I love it when a plan comes together! Bonus points if you know where that saying comes from. As always, the legwork has been completed already to provide this example, so you don't have to struggle through it. For those of you who read my blogs religiously, you know I'm a documentation fiend. So our first item is to consult the documentation on the AssignedAccess CSP.
https://docs.microsoft.com/en-us/windows/client-management/mdm/assignedaccess-csp
Let's examine the docs:
First thing we want to understand is our OMA-URI path. We see from the documentation that the OMA-URI path is:
./Device/Vendor/MSFT/AssignedAccess/Configuration
We also see from the documentation that the Configuration setting was added in Windows 10 1709 and is used to configure the settings of our Assigned Access policy. Reading further, we see the policy setting requires XML, and it must be supplied as a string value. And that's about it for the documentation. It's a lot of information, but there are still questions about how we need to approach this. Continuing down in the documentation, an AssignedAccess XML example is provided. However, there isn't much context around what the contents of the XML mean. We can find that information in the link below. You should review this in order to understand the XML contents better:
https://docs.microsoft.com/en-us/windows/configuration/lock-down-windows-10-to-specific-apps
Reviewing the example XML tags gives us an outline of how we need to build this policy.
- <AllowedApps> tag - We need to build our list of apps we want to allow and put them in the XML
- <StartLayout> tag - We need to provide a Start Menu layout and put it in the XML
- <Account> tag - We need to configure an account config and put it in the XML
I have provided a sample XML file as part of this blog. It will be much easier on you to use the provided sample. Before we move on any further, I want to call out a couple of pitfalls in the documentation XML example. Just as I mentioned in the AppLocker blog, only a subset of the example XML is needed as part of the string value for our policy, and that is not called out in the documentation. <?xml version="1.0" encoding="utf-8" ?> should not be included in the string value for your policy. Everything else in the XML is fine. If you include this line in your XML, you'll get a Remediation Failed error.
The next pitfall is <Account>MultiAppKioskUser</Account>. The value MultiAppKioskUser is merely a placeholder. This is not called out in the docs. So if you don't update this configuration, thinking it is a universal variable of sorts, and you leave it as-is in your string XML, you'll get a Remediation Failed error. I'm probably being a little critical of our docs here, but I can see where this could cause confusion and frustrate someone to no end.
Building the XML
As we saw earlier when reviewing the example XML, there are three steps to building our XML file. The first step is building our AllowedApps list. Let's review an <AllowedApps> section example:
<AllAppsList>
    <AllowedApps>
        <App AppUserModelId="Microsoft.ZuneMusic_8wekyb3d8bbwe!Microsoft.ZuneMusic" />
        <App AppUserModelId="Microsoft.ZuneVideo_8wekyb3d8bbwe!Microsoft.ZuneVideo" />
        <App AppUserModelId="Microsoft.Windows.Photos_8wekyb3d8bbwe!App" />
        <App AppUserModelId="Microsoft.BingWeather_8wekyb3d8bbwe!App" />
        <App AppUserModelId="Microsoft.WindowsCalculator_8wekyb3d8bbwe!App" />
        <App DesktopAppPath="%windir%\system32\mspaint.exe" />
        <App DesktopAppPath="C:\Windows\System32\notepad.exe" />
    </AllowedApps>
</AllAppsList>
Notice the two kinds of <App> entries above. The first five lines are examples of modern apps we allow via the <App AppUserModelId> tag. The bottom two lines are examples of EXE apps we allow via the <App DesktopAppPath> tag. Looking at the values of these tags, the <App DesktopAppPath> tag is pretty easy to discern; the examples show you can use a literal path or system variables as part of the path. On a test device running the same version of Windows 10 you plan to deploy the policy to, simply open File Explorer and find the paths of the EXEs you want to allow. Copy and paste each path into the quotation marks of an <App DesktopAppPath> tag, then copy and paste the entire line to the next line in the XML. Rinse and repeat until you have added all the EXE files you want to allow.
Now the <App AppUserModelId> tag is a bit more challenging, just because of the lovely ID format that modern apps use. In order to get the AppIDs, run the following PowerShell script on a test device running the same version of Windows 10 you plan to deploy the policy to:
# Enumerate all installed Appx packages and build a list of their AUMIDs
$installedApps = Get-AppxPackage
$aumidList = @()
foreach ($app in $installedApps)
{
    # A package can contain multiple applications, each with its own ID
    foreach ($id in (Get-AppxPackageManifest $app).Package.Applications.Application.Id)
    {
        $aumidList += $app.PackageFamilyName + "!" + $id
    }
}
$aumidList | Out-File -FilePath C:\AUMIDList.txt
The PowerShell script above will output a text file with all the IDs for the modern apps. Open the text file and review the IDs of the apps you wish to allow. Similar to the way you configured your <App DesktopAppPath> tags, simply copy and paste the app ID into the quotation marks of an <App AppUserModelId> tag, then copy and paste the entire line to the next line in the XML. Rinse and repeat until you have added all the modern apps you want to allow.
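Alternatively, if you only need the AUMIDs for a handful of apps, the built-in Get-StartApps cmdlet returns the same IDs in a friendlier form. Here is a minimal sketch; the name filter is just an example:
# List Start menu apps and their AUMIDs, filtering by display name
Get-StartApps | Where-Object { $_.Name -like "*Calculator*" } | Select-Object Name, AppID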
<StartLayout>
The <StartLayout> section is much easier to obtain. On a test device running the same version of Windows 10 that you wish to deploy the policy to, open PowerShell and run the following:
Export-StartLayout -Path C:\Start.xml
Once you have the Start.xml file, open it in Notepad and copy its entire contents into the policy XML under <StartLayout>, between the <![CDATA[ ]]> tags. If this is confusing, review the example in the documentation to understand where to paste the Start.xml contents.
<Account>
The last item we need to configure is the <Account> tag. At the linked doc within the AssignedAccess CSP docs:
https://docs.microsoft.com/en-us/windows/configuration/lock-down-windows-10-to-specific-apps
We can see the different account types we can configure here: local users, domain users, or Azure AD users. We can also configure user groups, such as local groups, AD groups, and Azure AD groups. In our example, configuring an Azure AD group is ideal, as we want the policy to apply to multiple users. In order to leverage an Azure AD group, we need to get the group ID of the Azure AD group.
- Open a browser and go to portal.azure.com
- Open the Azure Active Directory blade
- Open the Groups blade
- On the Groups blade, search for the group that contains the users you wish to apply the Assigned Access policy to. Click on the group
- In the left pane, click Properties
- Copy the Object ID of the Azure AD group and notate it (or retrieve it with the PowerShell sketch below)
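If you prefer PowerShell over the portal, the AzureAD module can fetch the same Object ID. Below is a minimal sketch, assuming the AzureAD module is installed and using a hypothetical group name:
# Sign in, then look up the group's Object ID ("Kiosk Users" is a placeholder name)
Connect-AzureAD
Get-AzureADGroup -SearchString "Kiosk Users" | Select-Object DisplayName, ObjectId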
Now that we have our Azure AD group ID, let's build the <Account> tag - which, for us, is actually the <UserGroup> tag, since we are leveraging an Azure AD group:
<Configs>
    <Config>
        <UserGroup Type="AzureActiveDirectoryGroup" Name="79d90a36-9060-4bc3-a397-a9103a98526c" />
        <DefaultProfile Id="{9A2A490F-10F6-4764-974A-43B19E722C23}"/>
    </Config>
</Configs>
Within the <UserGroup> tag, copy the Azure AD group Object ID you notated earlier between the "" of the Name value. See above for an example.
Once you have completed this section, you are done! Save your XML. We will use this XML in the next blog to configure our Assigned Access policy in Intune.
Wrap Up
You can see why I split this up into a two part blog! I told you it'd be bumpy, but we got through it. The hard part is over. Nice job! In the next part, we'll configure the Intune policy and deploy it out and we'll validate the results of our Assigned Access policy.
Here is the link to download the sample XML for our example:
https://1drv.ms/t/s!Avb5Zr26pC54gt5AaxHwwsSvZWqHUw
And now, the obligatory disclaimer:
© 2018 Microsoft Corporation. All rights reserved. Sample scripts or files in this blog are not supported under any Microsoft standard support program or service. The sample scripts or files are provided AS IS without warranty of any kind. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Assigned Access with Intune and AssignedAccess CSP – Part II
Hello all,
Back with part two of our blog dealing with Assigned Access profiles leveraging the AssignedAccess CSP. If you haven't read Part I of this blog, you can read it here:
Summary
In the previous blog, we discussed Assigned Access, the use cases for Assigned Access, discussed the AssignedAccess CSP documentation, and walked through how to build the XML that we need for our Assigned Access policy in Intune. In this part of the blog, we'll discuss Windows 10 Autopilot settings, configuring and deploying the policy in Intune, and spend some time validating our results on the device.
OK, a little additional background for our example. Recall we are configuring a policy for the use case of an administrator who wants to restrict activity for a subset of users across different devices. In this scenario, the administrator only wants to give the users access to the following applications:
- Microsoft Edge (UWP app)
- Calculator (UWP app)
- OneNote (UWP app)
- Notepad (desktop app)
If you've already looked at the provided sample XML from part one of this blog, you've seen the XML has already been configured for the scenario mentioned above. If you haven't, challenge yourself by building the XML on your own, following along with part one of this blog.
Windows 10 Autopilot and Devices
Recall we discussed Windows 10 Autopilot briefly. The one caveat about your Windows 10 Autopilot profile is that the user account type setting must be set to Standard. There are a couple of reasons for this:
- Per our AssignedAccess CSP documentation, it's not supported and we are warned that we may experience inconsistent results if an Assigned Access profile is deployed to an administrator account
- Per our AssignedAccess CSP documentation, several additional settings are automatically configured when the Assigned Access profile is applied. Namely, the Start menu Show All Apps list button is hidden. If an Assigned Access profile is applied to an administrator account, some settings, namely this one, will not apply properly, and the user will be able to see all applications from the Start menu. This doesn't mean the apps will run - they won't, per the Assigned Access policy. However, when a user clicks an app icon, they'll get a pop-up saying the app is not allowed. This is not an ideal user experience and may prompt a helpdesk call from the user.
As mentioned in Part I, ensure you are leveraging an Enrollment Status Page to keep the device in OOBE until the Assigned Access profile is applied. That way, when the user logs in for the first time, the Assigned Access profile is applied and configured from the start.
Recall we also discussed leveraging physical devices. The reason for this recommendation is that if you use a VM (at least a Hyper-V VM), when Windows 10 Autopilot configures the user account as a standard user account, that account will not have rights to log on remotely; you will effectively be locked out of the VM, and the configuration will not complete. You can use a Hyper-V VM for testing, but the Windows 10 Autopilot profile must have the user account type set to administrator. Your policy will apply in this manner, but recall it will not set all the settings, as mentioned earlier. It is recommended to leverage physical devices for testing so you get the full effect of your Assigned Access policy and can evaluate the policy as an end user to understand whether you need to tweak it at all.
Create and Deploy the Assigned Access Policy in Intune Portal
Remember our XML from part I? Go ahead and get that ready.
- Open up a browser and navigate to your Intune portal
- In the Intune blade, click Device Configuration
- In the Device Configuration blade, click Profiles
- In the Device Configuration - Profiles blade, click the Create Profile button
- On the Create Profile blade, in the Name field, name the profile accordingly. In the Platform drop-down, choose Windows 10 and later. In the Profile type drop-down, choose Custom
- The Custom OMA-URI Settings blade will appear. Click the Add button
- On the Add Row blade, in the Name field, name the settings accordingly.
- In the OMA-URI path, copy and paste ./Device/Vendor/MSFT/AssignedAccess/Configuration
- In the Data type drop-down, choose String
- In the Value text box, copy and paste the XML you created in part I
- Click OK to close the Add Row blade
- Click OK to close the Custom OMA-URI Settings blade and click Create to create your policy
- Once created, click the policy. On the Policy blade, click Assignments
- On the Assignments blade, in the Assigned to drop-down, select Selected Groups and then click Select groups to include
- On the Azure AD Groups blade, choose your Windows 10 Autopilot devices group. NOTE: This would have been completed already as part of your pre-requisites in Part I of this blog
- On the Assignments blade, click Save
Provision a Windows 10 Autopilot Device and Validate the Result
- Boot a test device into OOBE and click through the screens
- When you reach the Add an Account screen, use your work account to Azure AD Join the device and enroll the device in Intune
- At this point, you'll see the Enrollment Status Page
- Wait a bit for the process to complete
- Once complete, the user should be auto-logged in and you should see your Assigned Access profile has been applied successfully. It should look a little something like the screens below
As you can see, the Assigned Access profile was successfully applied and we only see the 4 Start menu icons we configured for our allowed apps.
In this screen, notice I do not have the Start menu Show All Apps button. All I have is the Pinned Tiles button.
Lastly, a user can still swipe in from the right and attempt to access the Settings app, but guess what? Yep, that's right, it's blocked!
Once you get to this point, kick the tires a little bit and see if you can beat the Assigned Access profile!
Troubleshooting
If for some reason your AssignedAccess CSP policy isn't working as expected, check the Device status for the policy in the Intune portal.
- Open up a browser and navigate to your Intune portal
- In the Intune blade, click Device Configuration
- In the Device Configuration blade, click Profiles
- Click your Assigned Access policy. On the Policy blade, click Device status
- On the Device status blade, review the results. If you see an error, click the entry to drill down. If the error comes back as Remediation Failed, you have a malformed XML string value. A good way to check whether your XML is malformed is to open the XML file in Internet Explorer: if you see a blank page rather than the familiar XML formatting, you have malformed XML. Go back and review the XML value you copied into the policy, and review the sections in this blog series that discuss the XML value (or try the PowerShell sketch below)
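As an alternative to the Internet Explorer check, PowerShell can validate the file directly: casting the file contents to [xml] throws a descriptive parse error if the XML is malformed. A minimal sketch, with a placeholder file path:
# Attempt to parse the XML; a malformed file throws with line/position details
try {
    [xml]$assignedAccess = Get-Content -Path "C:\AssignedAccess.xml" -Raw
    Write-Host "XML parsed successfully."
}
catch {
    Write-Host "Malformed XML: $($_.Exception.Message)"
}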
Wrap Up
Alright, that's it! I hope you had as much fun as I did! Always cool to see the power of Intune at work. The unfortunate aspect of this blog is what I mentioned in part I, as Windows 10 1803 becomes standard across organizations and Intune's kiosk mode evolves into a generally available feature, all of our hard work here is rendered moot. But hey, it sure was cool tinkering and engineering a stop-gap while we wait for that evolution to occur. Until next time!
System Center Configuration Manager Site Upgrade Procedure
Hello everyone,
This is Kim from the System Center Configuration Manager support team at Microsoft Japan.
Today I will walk you through the procedure for upgrading System Center Configuration Manager Current Branch (SCCM CB).
An SCCM CB version upgrade is performed on the top-level site server.
In a hierarchy with a central administration site, run the upgrade on the central administration site server; in a standalone primary site environment, run it on the primary site server.
As the top-level site server is upgraded, the site servers beneath it and site systems such as management points and distribution points are updated automatically.
In this post, I will walk through upgrading from SCCM CB 1710 to the latest version, SCCM CB 1802.
Because SCCM CB upgrades are cumulative - a newer version includes the contents of older versions - you can upgrade directly to the latest version without stepping through intermediate versions.
The same procedure also applies when upgrading from other SCCM CB versions.
The procedure is organized as follows:
A. Upgrading the SCCM CB version
B. Checking the upgrade status
C. Checking the SCCM CB version
A. Upgrading the SCCM CB version
1. In the Configuration Manager console, navigate to:
[Administration] - [Overview] - [Updates and Servicing]
2. In the Updates and Servicing list, check the State of "Configuration Manager 1802".
3. If the State is "Available to download", right-click "Configuration Manager 1802" and click [Download].
If the State is "Ready to install", the update has already been downloaded, so proceed from step 7.
4. A message saying "Checking for updates..." is displayed; click [OK].
5. The download begins; wait a while for it to complete.
6. When the download finishes, confirm in the Updates and Servicing list from step 2 that the State of "Configuration Manager 1802" is "Ready to install".
7. Right-click "Configuration Manager 1802" and click [Install Update Pack].
8. The Configuration Manager Updates Wizard appears; on the [General] page, click [Next].
9. On the [Features] page, check any features you would like to add and click [Next].
Features can also be added later, if they become necessary.
In this walkthrough, we keep the defaults and click [Next].
10. On the [Client Update Options] page, to upgrade clients along with the site, select "Upgrade without validating" and click [Next].
To validate the client upgrade first, select "Validate in pre-production collection", specify a validation collection, and then click [Next].
In this walkthrough, we keep the default and select "Upgrade without validating".
11. On the [License Terms] page, check "I accept these license terms and privacy statement" and click [Next].
12. On the [Summary] page, click [Next].
13. Confirm that the update wizard completed successfully, and click [Close].
14. The State of "Configuration Manager 1802" changes to "Checking prerequisites", and the prerequisite check begins.
If the prerequisite check passes without issues, the installation proceeds automatically, so wait a while.
The procedure for checking the installation status is described as a supplement at the end of this post.
15. When the installation completes, the State of "Configuration Manager 1802" changes to "Installed".
16. To apply a Configuration Manager 1802 hotfix, download the target update and install it in the same way as the installation steps above.
After the SCCM CB version upgrade, if the message "A new version of the console is available" appears while operating the console, click [OK].
The console closes and the console upgrade begins; wait a while, and then reopen the console.
B. Checking the upgrade status
1. In the Configuration Manager console, navigate to:
[Administration] - [Overview] - [Updates and Servicing]
2. In the Updates and Servicing list, select "Configuration Manager 1802" and click "Show Status" at the bottom of the screen.
3. On the screen that follows, click the "Show Status" button on the ribbon.
4. The Update Pack Installation Status screen shows the installation progress.
C. Checking the SCCM CB version
1. Click the inverted-triangle button at the upper left of the Configuration Manager console, and click [About Configuration Manager].
2. The "Version" field on the [About System Center Configuration Manager] screen shows the site version.
That completes the procedure.
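As a supplemental note, the site version can also be read without opening the console. The following is a minimal sketch run on the site server; the registry value name is based on common field guidance and may vary by version:
# Read the ConfigMgr site version from the site server's registry (assumed value name)
(Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\SMS\Setup")."Full Version"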
SCCM CB Automatic Client Upgrade Procedure
Hello everyone,
This is Kim from the System Center Configuration Manager support team at Microsoft Japan.
Today I will walk through how to upgrade System Center Configuration Manager Current Branch (SCCM CB) clients automatically after a site version upgrade.
There are several ways to upgrade SCCM CB clients, including:
- Automatic client upgrade
- Client push installation
- Group Policy installation
- Logon script installation
- Manual installation
- Upgrade installation
You can choose based on how your environment is operated; among these, the automatic client upgrade described below requires the least effort, so we hope you will consider it when upgrading your clients.
Automatic client upgrade procedure
1. Open the Configuration Manager console and navigate to:
[Administration] - [Overview] - [Site Configuration] - [Sites]
2. On the ribbon, click [Hierarchy Settings].
3. In the [Hierarchy Settings Properties] dialog, open the [Client Upgrade] tab.
4. Confirm that the [Production client version] matches the client version you want to upgrade to, and check "Upgrade all clients in the hierarchy using production client".
5. A message beginning "You have selected the option to enable automatic upgrade for all clients in the hierarchy. ..." is displayed; click [OK].
6. Confirm that "Upgrade all clients in the hierarchy using production client" under [Production client version] is now checked, and configure "Automatically upgrade clients within days".
The selectable range is 1 to 31.
Clients will upgrade at a random time within the period you specify here.
7. Click the [OK] button to close the dialog.
That completes the procedure.
In a large environment with many managed clients, a potential concern with enabling automatic client upgrade is that clients could all upgrade at once and strain the network. However, by specifying a longer automatic upgrade period in the settings screen above, you can spread out the network load.
For example, if you specify the maximum of 31 days, clients will upgrade on a random date within the 1-31 day period.
For reference, here is a supplemental explanation of the specific client upgrade behavior.
How the client upgrade works
1. When you enable automatic client upgrade on the SCCM server, you specify the upgrade period.
2. Clients receive the automatic client upgrade policy.
3. A client upgrade task, scheduled to start at a random date and time within the specified period, is registered in each client's Task Scheduler.
4. At the registered task's start time, each client begins its upgrade.
The following is a reference image.
You can see that each client is scheduled to upgrade at a different time.
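If you want to confirm the randomized schedule on an individual client, the upgrade task is visible in Task Scheduler. The following is a minimal sketch; the task path is based on common field guidance and may differ by version:
# List Configuration Manager scheduled tasks on a client and their next run times
Get-ScheduledTask -TaskPath "\Microsoft\Configuration Manager\" |
    Get-ScheduledTaskInfo |
    Select-Object TaskName, NextRunTime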
Azure AD Webinar: Announcing Series 2 [Updated 8/22]
Announcing the second series of Japanese-language webinars hosted by the Azure Active Directory product group
Today we are announcing additional sessions of the very popular Azure Active Directory webinar series.
In this series, members of the Azure AD product development team cover how to make the most of Azure AD in light of recent cloud identity trends, explaining the fundamental and most important topics in a way that is easy for beginners to follow.
Webinar overview
- Japanese-language Azure AD webinars delivered by program managers on the Azure AD product development team
- Technical level: L100-200
- Covers the especially important Azure AD fundamentals you should be sure to know
Schedule
Season 2 (August - October 2018)
- Thu 8/30, 13:30-14:15
Your first step toward adopting Azure Active Directory (Getting Ready for Azure AD)
- Thu 9/13, 13:30-14:15
Move beyond IP-based access control and build a more secure environment (Implement zero trust security using device based conditional access)
- Thu 9/27, 13:30-14:15
Security measures every Office 365 and Azure AD administrator must take (Key things O365 administrators must do for securing corporate identity)
- Fri 10/11, 13:30-14:15
Putting Azure AD to work for SaaS application authentication (Utilize Azure AD for 3rd party app authentication)
- Thu 10/25, 13:30-14:15
Smooth external partner collaboration with Azure AD (Accelerate partner collaboration through Azure AD)
Please register at the registration site below; we hope you can join us:
http://aka.ms/azureadwebinar
* Please bookmark this page.
Information on additional upcoming sessions will be posted there.
Past sessions are also available on demand at the site above.
A New Way to Manage Roles in Azure AD
Hello, this is Kurii from the Azure & Identity support team.
This article is based on A new way to manage roles and administrators in Azure AD, published at the end of July 2018.
Until now, there was no way to retrieve the list of users assigned to an Azure AD role such as Global Administrator, but a recent update makes this easy to check.
Following the original article, this post introduces the new way to assign and manage directory roles for users, including screen captures.
A new way to manage roles in Azure AD
You can check users' roles and assign administrative permissions more easily than before.
The new capabilities include:
- View the list of built-in (default) directory roles and the details of each
- Manage and configure roles more easily
- Links to related documentation
In other words, questions such as "How many global administrators are there?" and "What roles are assigned to this user?" can now be answered immediately.
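For reference, the same questions can also be answered from the AzureAD PowerShell module. Below is a minimal sketch; note that this module returns only roles that have been activated in the tenant, and the Global Administrator role appears under the display name "Company Administrator":
# Sign in, then list the members of the Global Administrator role
Connect-AzureAD
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "Company Administrator" }
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
    Select-Object DisplayName, UserPrincipalName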
The new "Roles and administrators" feature is available from the overview screen.
Overview
Click "Roles and administrators" to see the list of built-in directory roles, each with a short description. This includes the new roles, also added recently, for managing applications that integrate with Azure AD.
If the user currently signed in has any roles assigned, they are shown at the top of the screen. Click "Roles" to see the list of roles assigned to you, along with a summary of each.
The list of roles and descriptions under "Roles and administrators"
Clicking a role's row shows the list of users assigned to that role.
The list of users (members) assigned to a role
We are often asked what each role allows you to do; you can now see, in one list, exactly what permissions each role grants. On the same screen you can also find links to related articles. Please take advantage of this to use roles to their fullest.
As shown in the image, click "Description" on the blade to open this screen. You can also reach the same screen from the role list by clicking the "..." at the right of each row.
In addition to the list of users assigned to a given role, you can now see the reverse: the list of roles assigned to a given user. From the same screen, you can assign additional roles to the user.
For details, see Assigning administrator roles in Azure Active Directory (English documentation).
The list of roles assigned to a user, with the "Add role" button
Multiple privileged roles can be assigned to a single user.
Roles that are already assigned are not displayed.
The list of assignable roles
Updates in Azure AD PIM
More fine-grained permission management is possible with Azure AD Privileged Identity Management (PIM). The Azure AD PIM management blade also links to "Roles and administrators".
The link from the Privileged Identity Management screen to "Roles and administrators"
Even if your organization has not enabled PIM, clicking "Manage in PIM" shows how PIM can be used to protect administrators. A free trial is also available.
From "Manage in PIM", you can view information about Privileged Identity Management.
I hope this information is helpful.
For an official statement or answer regarding product behavior, please use our support services; the support team will respond after fully understanding your environment.
* The contents of this post (including attachments and links) are current as of the date of writing and are subject to change without notice.
O365 Groups Tidbit – Compliance in O365 Groups (Audit log search)
Hello All,
Continuing to look at compliance and O365 Groups, I wanted to look at the Audit log search in Security & Compliance. I’m sure we all realize how important it is to collect audit data so that you can answer questions about user or system actions.
So let’s look at what it means for O365:
- Go to Security & Compliance portal
- Expand Search & Investigation then select Audit Log Search
- From the GUI, select the activities you want to report on. These can cover many different services (File, Sway, and AAD to mention just a few), and each service has multiple activities like Delete, Create, etc.
- You can also select start and end dates
- If you know enough specifics you can narrow it down to users and files/folders
NOTE: Any information you can use to narrow down what you have to dig through will be better for you.
The information provided will have all the info you expect, and what you do with that data is up to you. You can view the data in the GUI itself or export it to a CSV file.
For those who love automation and development, you can also choose from:
- PowerShell, using the Search-UnifiedAuditLog cmdlet, which returns activities from all the services like Exchange, SharePoint, Teams, etc. (see the sketch below)
- The Graph API, using the Management Activity API to return audit data and manipulate it with features like pagination, etc.
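For example, here is a minimal PowerShell sketch, assuming an Exchange Online PowerShell session is already connected; the user and file path values are placeholders:
# Pull the last 7 days of SharePoint file activity for one user, then export to CSV
$results = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -RecordType SharePointFileOperation -UserIds "user@contoso.com" -ResultSize 1000
$results | Export-Csv -Path "C:\AuditLogResults.csv" -NoTypeInformation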
Pax
Top 10 Networking Features in Windows Server 2019: #5 Network Performance Improvements for Virtual Workloads
This blog is part of a series for the Top 10 Networking Features in Windows Server 2019! -- Click HERE to see the other blogs in this series. Look for the Try it out sections then give us some feedback in the comments! Don't forget to tune in next week for the next feature in our Top 10 list!
The Software Defined Data-Center (SDDC) spans technologies like Hyper-V, Storage Spaces Direct (S2D), and Software Defined Networking. Whether you run compute workloads like File, SQL, and VDI, operate an S2D cluster, or use your SDN environment to make hybrid cloud a reality, no doubt we crave network performance – we have a “need for speed,” and no matter how much you have, you can always use more.
In Windows Server 2016, we demonstrated 40 Gbps into a VM with Virtual Machine Multi-Queue (VMMQ). However, high-speed network throughput came at the additional cost of complex planning, baselining, tuning, and monitoring to alleviate CPU overhead from network processing. Otherwise, your users would let you know very quickly when the expected performance level of your solution degrades. In Windows Server 2019, virtual workloads will reach and maintain 40 Gbps while lowering CPU utilization and eliminating the painful configuration and tuning cost previously imposed on you, the IT Pro.
To do this, we’ve implemented two new features:
- Receive Segment Coalescing in the vSwitch
- Dynamic Virtual Machine Multi-Queue (d.VMMQ)
These features maximize the network throughput to virtual machines without requiring you to constantly tune or over-provision your host. This lowers the Operations & Maintenance cost while increasing the available density of your hosts. The efforts outlined here cover our progress in accelerating the host and guest; in a future article, my colleague Harini Ramakrishnan will discuss our efforts to accelerate the app.
Receive Segment Coalescing in the vSwitch
Number 1 on our playlist is an “oldie but goodie.” Windows Server 2019 brings a remix for Receive Segment Coalescing (RSC) leading to more efficient host processing and throughput gains for virtual workloads. As the name implies, this feature benefits any traffic running through the virtual switch including traditional Hyper-V compute workloads, some Storage Spaces Direct patterns, or Software Defined Networking implementations (for example, see Anirban's post last week regarding GRE gateway improvements #6 - High Performance SDN Gateways).
Prior to this release, RSC was a hardware offload (in the NIC). Unfortunately, this optimization was disabled the moment you attached a virtual switch. As a result, virtual workloads were not able to take advantage of this feature. In Windows Server 2019, RSC (in the vSwitch) works with virtual workloads and is enabled by default! No action required on your part!
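If you'd like to confirm the state on your own host, the Hyper-V PowerShell module exposes it on the vSwitch. A minimal sketch; the property and parameter names below reflect current Insider builds and may change, and "External" is a placeholder switch name:
# Check whether software RSC is enabled on each virtual switch
Get-VMSwitch | Select-Object Name, SoftwareRscEnabled
# Disable it on one switch for an A/B comparison, then re-enable it
Set-VMSwitch -Name "External" -EnableSoftwareRsc $false
Set-VMSwitch -Name "External" -EnableSoftwareRsc $true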
Here’s a quick throughput performance example from some of our early testing. In the task manager window on the left, you see a virtual NIC on top of a 40 Gbps physical NIC without RSC in the vSwitch. As you can see, the system requires an average of 28% CPU utilization to process 23.9 Gbps.
In the task manager window on the right, the same virtual NIC is now benefiting from RSC in the vSwitch. The CPU processing has decreased to 23% despite the receive throughput increasing to 37.9 Gbps!
Here's the performance summary:
| | Average CPU Utilization | Average Throughput |
| Without RSC in the vSwitch | 28% | 23.9 Gbps |
| With RSC in the vSwitch | 23% | 37.9 Gbps |
| Delta | 17.86% decrease in CPU | 58.58% increase in throughput |
Under the Hood
RSC in the vSwitch combines TCP segments that are a part of the same TCP-stream into larger segments destined for a Hyper-V Guest. Processing coalesced (fewer) packets is far more efficient than the processing required for segmented packets. This leads to large performance gains to Hyper-V virtual machines.
Performance gains are seen in both high and low throughput environments; high-throughput environments benefit from more efficient CPU processing (lower CPU utilization on the host) while low throughput environments may even see throughput gains in addition to the processing efficiencies. Take a look at RSC in action:
Get Started!
If you’re a Windows Server 2019 Insider and using Hyper-V, Storage Spaces Direct, Software Defined Networking (including the High Performance Gateways Anirban talked about last week!), you’re likely already consuming this feature! This feature is enabled by default! But of course, if you’d like to compare the results yourself, check out our validation guide below.
Ready to give it a shot!? Download the latest Insider build and Try it out!
Dynamic Virtual Machine Multi-Queue (d.VMMQ)
With the advent of 10 Gbps NICs (and higher), the processing required for the network traffic alone exceeded what could be accomplished by a single CPU. Virtual Machine Queue and its successor Virtual Machine Multi-Queue allowed traffic destined for a vmNIC to be processed by one or more different processor cores.
Unfortunately, this required complex planning, baselining, tuning, and monitoring - often more effort than the typical IT Pro intended to expend. Even then, problems arose: if you introduced a heterogeneous hardware footprint in your datacenter, the optimal configuration could vary, and if tuning was needed, virtual machines might not be able to maintain a consistent level of performance.
To combat these problems, Windows Server 2019 dynamically tunes the host for maximum CPU efficiency and consistent virtual machine throughput. Dynamic VMMQ requires no setup once a supporting driver is in place and will autotune the existing workload to ensure optimal throughput is maintained for each virtual machine. This reduces the OPEX cost imposed by previous versions of this technology.
How it Works
There are two key outcomes from this technology:
- When network throughput is low: The system coalesces traffic received on a vmNIC to as few CPUs as possible
Here’s a VM receiving around 5.3 Gbps.
The system can coalesce all packets onto one CPU for processing efficiency.
- When network throughput is high: The system automatically expands traffic received to as many CPUs as needed
The VM's traffic has grown to about 21 Gbps, which is more than a single CPU can handle.
The system expands the traffic across additional CPUs as necessary (and available) - in this case, five - to keep up with the demand for traffic.
Here's a quick video on Dynamic VMMQ in a low-throughput scenario. You'll see the dynamic scheduling algorithm coalesce all the network throughput onto one core. Then, once network traffic has completed, the queues will return to their "ready" state allowing them to expand very quickly if a burst of traffic occurs.
Get Started!
This feature requires a driver update for your NICs to a Dynamic VMMQ capable driver (referred to by some vendors as RSSv2). Drivers for Dynamic VMMQ will not be included inbox as this is an advanced feature, so please contact your IHV or OEM for the latest drivers.
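Before you contact your IHV or OEM, it may help to inventory the drivers you're currently running; a quick sketch:
# List physical NICs with their current driver versions and dates
Get-NetAdapter -Physical | Select-Object Name, InterfaceDescription, DriverVersion, DriverDate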
If you are purchasing new hardware, you should pay special attention to the available NICs and verify that they have received the SDDC Premium logo through our certification program (click on a specific NIC and look for SDDC Premium). If not, Dynamic VMMQ is not supported on those devices, and you will default to the traditional static mechanism.
Ready to give it a shot!? Download the latest Insider build and Try it out!
Summary
Regardless of workload, your virtual machines need the highest possible throughput. Not only can Windows Server 2019 reach outstanding network performance, it eliminates the costly planning, baselining, and tuning required by previous Windows versions. You may still get a late-night call to troubleshoot a poorly performing virtual machine, but it won’t be because of the network throughput!
Thanks for reading and see you at Ignite!
Dan “Auto-tuning” Cuomo
Dynamics AX 2012 R3 Enterprise Portal Setup with Auth0
This article will assist you with the setup of an external Enterprise Portal site using the Auth0 user authentication mechanism.
Start by downloading the required KB 4133646 for Dynamics AX2012 R3.
Secondly, download the PDF file Dynamics-AX-2012-R3-Enterprise-Portal-Setup-with-Auth0. This document describes all the steps required to set up your external Enterprise Portal site using Auth0.
Enable digital transformation with centralized identities in the cloud
Get technically equipped to sell and deploy Microsoft’s trustworthy identity and access management solution! Through the one-on-one consultations listed below you’ll learn how to enable secure access to all your apps with integration across cloud and on-premises directories.
Microsoft 365 Identity & Access Management Presales Consultation (L100-200)
- Key outcomes: Have you chosen to deliver on the Identity & Access Management capabilities of Microsoft 365? During this remote one-to-one technical consultation, you’ll receive personalized presales guidance for your first opportunities.
Microsoft 365 Identity & Access Management Deployment Consultation (L300-400)
- Key outcomes: During this remote one-to-one technical consultation, you’ll receive personalized deployment guidance for identity and access management solutions. You’ll learn how to monitor identity resources and respond to identity breaches using Microsoft 365.
Explore the full suite of technical webinars and consultations available for the Security and Compliance technical journey at aka.ms/SecurityTechJourney.