
How to design firewall rules in Azure with NSGs and ASGs


If for some reason you cannot use a platform service (PaaS), you may be facing the task of setting up firewall rules for an application running in VMs: a pair of web servers accessible from the outside and two database VMs. How do you do it? Micro-segmentation per VM? Rules on the subnet? And what about application objects with ASGs? Today, let's look at four ways to design this and the advantages and disadvantages of each.

Options for filtering traffic in Azure IaaS

I recommend making maximum use of the native capabilities of Azure's software-defined networking. It is a distributed solution (you don't deal with sizing, and it doesn't limit your throughput) and it is free. The cornerstone is the Network Security Group, a stateful L4 firewall implemented directly in the SDN fabric (so it is not a virtual appliance, but genuinely a property of the network stack). In the rules you use destination and source IP ranges, or service tags (more on those another time - essentially objects representing a set of IP addresses, for example the public IP world outside Azure, labeled Internet, or conversely the public IPs currently used by certain platform services, such as Azure SQL), and TCP/UDP ports. You can apply an NSG directly to one specific VM (pure micro-segmentation). This does not mean inside the VM (it has nothing to do with a firewall inside the guest), but on its virtual network card - the filtering is performed by the host. The second option is to apply the NSG to a VNET subnet, in which case the rules apply to all current and future VMs in that subnet.

What if you want more, say L7 rules, a WAF, IPS and the like? Azure offers, for example, Application Gateway (an L7 gateway/proxy with a WAF), or you can use a third-party virtual network appliance - Azure supports Cisco, Fortinet, Check Point, Palo Alto, F5, Imperva, Barracuda and others. That way you can use a system you already know, have unified rule management, and so on. On the other hand, don't overdo it. Every virtual appliance carries the cost of a VM in Azure (or the Application Gateway fee if you use the Microsoft solution), plus even more for the appliance vendor's licenses. Filtering traffic between VMs this way makes no sense to me at all from the standpoint of required performance, flexibility and price. Do you need an enterprise firewall for north-south traffic, i.e. exposing services to the Internet? Good use. Do you need a firewall to separate or connect two projects managed for you by different companies, where the traffic between them must be guarded by an enterprise firewall? Also a good idea. Do you want to separate the web tier from the DB? I strongly recommend NSGs.

Let's try four different solution designs

We will build the following infrastructure: two web servers that should expose port 80 to the Internet, and two DB servers that should have port 1433 open, but only to the web servers - not to other VMs in the VNET or to the Internet. Four scenarios come to mind: per-VM rules, per-subnet rules, a combination of both, and application objects (ASGs).

In the following examples I will not cover rules for management (SSH, RDP) or a load balancer. You will probably have those, but to understand the principles we want to keep things as simple as possible.
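
To make the ASG option concrete, here is a minimal sketch using the Az PowerShell module (the resource group, names and region are illustrative): it creates application security groups for the web and DB tiers plus an NSG rule that allows SQL (1433) only from the web ASG. The VMs' network cards would then be associated with the matching ASG.

# Application security groups representing the two tiers
$rg = "rg-demo"
$loc = "westeurope"
$webAsg = New-AzApplicationSecurityGroup -ResourceGroupName $rg -Name "asg-web" -Location $loc
$dbAsg = New-AzApplicationSecurityGroup -ResourceGroupName $rg -Name "asg-db" -Location $loc

# Allow SQL from the web tier to the DB tier only
$sqlRule = New-AzNetworkSecurityRuleConfig -Name "allow-sql-from-web" `
    -Access Allow -Direction Inbound -Priority 100 -Protocol Tcp `
    -SourceApplicationSecurityGroup $webAsg -SourcePortRange "*" `
    -DestinationApplicationSecurityGroup $dbAsg -DestinationPortRange 1433

New-AzNetworkSecurityGroup -ResourceGroupName $rg -Name "nsg-app" -Location $loc -SecurityRules $sqlRule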

Continue reading


O365 Tidbit – Deprecation of Machine Translation and Site Manager


Hello All,

Wanted to make sure you were aware of this change that affects only SPO.

Machine Translations

Beginning June 2018, in SharePoint Online, Microsoft will remove the in-product UI entry point for automatic translations. The configuration options during variation use will be removed and hardcoded to false. The APIs will be marked as deprecated with limited support, but will continue to remain available if users want to integrate directly via custom code.

Microsoft recommends that users leverage the Bing translation APIs directly. Users will still be able to access the existing APIs via custom code, but support is limited. Please see this document for more information about SharePoint Machine Translation (Variations).

For SharePoint on-premises, Microsoft will not remove the UX entry points or APIs, but will communicate that this feature is deprecated.

Site Manager

Beginning in June 2018, the UI entry point to SiteManager.aspx will be removed from SharePoint Online, and direct access will be restricted to Site Collection Admins. For customers using Site Manager, we recommend the modern file and library copy/move functionality, which implements the main functionality of Site Manager. You can learn more about File Copy/Move for SharePoint Document Libraries here.

For more information about how to move and copy files in a document library in SharePoint, see the following Microsoft websites:

To learn more about the differences between modern and classic lists and libraries refer to the article at https://support.office.com/en-us/article/differences-between-the-new-and-classic-experiences-for-lists-and-libraries-30e1aab0-a5cc-4363-b7f2-09e2ae07d4dc?ui=en-US&rs=en-US&ad=US.

You can learn more about File Copy/Move for SharePoint Document Libraries at https://support.office.com/en-us/article/Copy-files-and-folders-from-OneDrive-for-Business-to-a-SharePoint-site-67a6323e-7fd4-4254-99a8-35613492a82f?ui=en-US&rs=en-US&ad=US.

Pax

System Center Operations Manager Technology Update – May 2018


Lynne Taggart here with another Operations Manager update. I haven’t had the time to catch up on blogging in the last few months with everything on my plate, but here we go.

As always, bookmark and remember you can access my blog at https://aka.ms/allthat


Disclaimer:

All content provided by this blog is for informational purposes only and is provided "AS IS" with no warranties, and confers no rights. Always test in a lab first before implementing in production. The use of included script samples is subject to the terms specified in the Terms of Use. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The opinions and views expressed in this blog are those of the author and do not necessarily state or reflect those of Microsoft.


Latest Releases

    • PowerShell modules for the SCOM console and the Service Manager console can now coexist on the same server.
      Note Both SCOM Update Rollup 5 (this update) and Service Manager Update Rollup 5 (update KB 4093685) must be installed to resolve this issue.
    • Active Directory Integration rules are not visible or editable in an upgraded 2016 Management Group. This prevents the ongoing management of Active Directory integration assignment in the upgraded Management Group.
    • When the UNIX host name on the server is in lowercase, the OS and MonitoredBy information is displayed incorrectly in the Unix/Linux Computers view.
    • Active Directory integrated agents do not display correct failover server information.
    • Performance views in the web console do not persist the selection of counters after web console restart or refresh.
    • The PowerShell cmdlet Get-SCXAgent fails with error “This cmdlet requires PowerShell version 3.0 or greater.”
    • During the upgrade from SCOM 2016 to SCOM 1801, if the reporting server is installed on a server other than the management server, the upgrade fails. Additionally, you receive the error message, "The management server to which this component reports has not been upgraded."
    • If a group name has been changed through the operations console, the Get-SCOMGroup cmdlet does not retrieve the group data that includes the changed group name.
    • Error HTTP 500 occurs when you access Diagram view through the web console.
    • When you download a Linux management pack after you upgrade to SCOM 2016, the error "OpsMgr Management Configuration Service failed to process configuration request (Xml configuration file or management pack request)" occurs.
    • The SQLCommand Timeout property is exposed so that it can be dynamically adjusted by users to manage random and expected influx of data scenarios.
    • The MonitoringHost process crashes and returns the exception "System.OverflowException: Value was either too large or too small for an Int32."
    • When company knowledge is edited by using the Japanese version of Microsoft Office through the SCOM console, the error (translated in English) "Failed to launch Microsoft Word. Please make sure Microsoft Word is installed. Here is the error message: Item with specified name does not exist" occurs.
    • Accessing Silverlight dashboards displays the "Web Console Configuration Required" message because of a certificate issue.
    • Microsoft.SystemCenter.ManagementPack.Recommendations causes errors to be logged on instances of Microsoft SQL Server that have case-sensitive collations.
    • Deep monitoring displays error “Discovery_Not_Found” if the installation of JBoss application server is customized.
    • Adds support for the Lancer driver on IBM Power 8 Servers that use AIX.
    • The ComputerOptInCompatibleMonitor monitor is disabled in the Microsoft.SystemCenter.Advisor.Internal management pack. This monitor is no longer valid.


    Latest KB Articles


    Microsoft Bloggers

    System Center Operations Manager Team (https://aka.ms/SCOMTeam)

    Kevin Holman (https://aka.ms/kevinholman or https://aka.ms/SuperPFE)

    Kevin Justin (https://aka.ms/kjustin)

    Bruno Gabrielli (https://aka.ms/brunoG)

    Michael Repperger (https://aka.ms/omx)

    Sertac Topal (https://aka.ms/Sertac)

    Silvana Deac (https://aka.ms/Silvana)

    Stefan Stranger (https://aka.ms/SStranger)

    Tim McFadden (https://www.scom2k7.com)

    Tyson Paul (https://aka.ms/tysonpaul)

    Nathan Gau (https://aka.ms/NathanGau)

    Antoni Hanus (https://aka.ms/antonih)

    Wei H Lim (https://aka.ms/weioutthere)

    Nicole Welch (https://blogs.msdn.microsoft.com/nicole_welch/)

    Said Nikjou (https://blogs.msdn.microsoft.com/axinthefield/)

    Jarrett Renshaw

    Philip Van de Vyver (https://blogs.technet.microsoft.com/philipvandevyver/)

    SQL Server Release Service (https://blogs.msdn.microsoft.com/sqlreleaseservices/)


    Community Bloggers

    Bob Cornelissen (https://www.bictt.com/blogs/)

    Cameron Fuller (http://blogs.catapultsystems.com/author/cfuller/)

    Kevin Greene (http://kevingreeneitblog.blogspot.com/)

    Marnix Wolf (http://thoughtsonopsmgr.blogspot.com/)

    Tao Yang (https://blog.tyang.org/)


    Management Packs


    Silect (https://www.silect.com)

    • What’s in a Name (or Version Number)? - With each release of Operations Manager, Microsoft updates the management packs (MPs) that are delivered with the product. Each MP has a name, which doesn’t change, and a version number that should change. As each MP is updated, the version number should be changed to indicate that there is a newer version of the MP.
    • TFS Compared to Silect Management Pack Store - Silect Store is a database used by Silect to store management packs (MPs), test results, preferences, and other information used by MP Studio. Team Foundation Server (TFS) is a source control repository from Microsoft which is used to store source code and other files. Some customers have wondered about the relative merits of using TFS to store MPs compared to Silect Store. There are some things in common between the two solutions (storing multiple versions of files in a hierarchical structure), but this report will concentrate on the differences.
    • MP Studio / MP Author Professional Version 8.2 - Silect announced the General Availability of MP Studio and MP Author Professional version 8.2, with lots of updates and improvements to the products.

    SquaredUp (https://squaredup.com)

    • What's it like to be a Developer at Squared Up? - To give you an insight into what life is like as a developer at Squared Up, we're pleased to share an interview with Wayne Plummer, Lead Developer. Wayne is one of our most experienced developers and is infamous for his dad jokes! Check out what he has to say about working at Squared Up, development as a career, and how much it has changed since his first job.
    • 3 awesome ways to use your SCOM Data Warehouse - This technical tutorial webinar provides an introduction to querying your SCOM Data Warehouse, together with examples of some of the awesome insights you can glean from it.
    • v3.4 - Donuts, Dashboard Export, Azure Log Analytics and more – SquaredUp officially announces the release of the latest version of Squared Up, v3.4. New features included in the v3.4 release are: the awesome Donut Tile, Dashboard Export to Excel, Azure Log Analytics Tile, and Azure Application Insights Tile.
    • SCOM MP University... coming to a desktop screen near you! - Silect have announced that their popular SCOM MP University will make its return on May 9, 2018 and we're delighted to have been invited to present.


    TechNet Gallery

    • SCCM Service Window to SCOM Maintenance Mode Automation Management Pack (Jason Daggett - Microsoft) - This management pack consists of two parts, a single rule and one data source method, which give System Center Operations Manager the ability to automatically place a system into Maintenance Mode using System Center Configuration Manager Service Windows.
    • SCOM: Export Effective Monitoring Configuration with PowerShell (Tyson Paul - Microsoft) - Export-SCOMEffectiveMonitoringConfiguration does not output ALL contained instance configurations, even when the "-RecurseContainedObjects" parameter is used. Yes, really! It outputs configuration data for only one instance of each particular type, be it a logical disk or Ethernet adapter, etc.; you only get monitor and rule configuration output for one of a kind in the resulting .csv file. If you have three logical disks (i.e. C:, D:, E:) you will only see configuration data for one of those, whichever one gets enumerated first by the cmdlet (a minimal usage sketch follows this list).
    • Install Linux on OMS and SCOM (onboarding Linux systems in OMS) (GouravIN) - A document for installing the OMS agent on Linux.
    • SCOM Alerts Query (GouravIN) - Documentation on SCOM alerts. With the help of this document you can fetch critical, warning, open and closed alerts from the database or PowerShell.
    • Error 1714: System Error 1612 (GouravIN) - "The older version of Microsoft Monitoring Agent cannot be removed. Contact your technical support group. System Error 1612." In this article I have covered the step-by-step process to fix this issue during an agent upgrade. You can apply this when you cannot uninstall the agent on a server and need to install a new one.
    • SCOM SQL Performance Rule Bloat (Kevin Justin - Microsoft) - Read the blog https://blogs.technet.microsoft.com/kevinjustin/2018/03/02/sql-mp-bloat/ to see how much data the SQL packs add to your SCOM database and DW, aggregating from ~60 to ~200 performance counters (depending on which SQL MP version(s) you are running).
    • LAPS Solution for Microsoft OMS (Adin Ermie) - This is a pre-built solution for the Operations Management Suite (www.microsoft.com/oms) to visualize Local Administrator Password Solution (LAPS) events. This solution requires the OMS Security and Audit solution to be enabled first, as it leverages the collected Security Logs. To add this solution to your OMS Workspace, use the View Designer and add this view. For full details on the creation of this solution, see http://adinermie.com/laps-oms-solution/
    • Office 365 Supplemental Management Pack V1 (Brian Zoucha – MSFT) Updated - The Office 365 Supplemental Management Pack includes synthetic transactions that provide an increased level of visibility into the health of the Office 365 environment.
    • Server Performance Solution for Microsoft Log Analytics (Cameron Fuller) Updated - This is a pre-built server performance solution for Microsoft OMS (www.microsoft.com/oms). To add this solution, use the View Designer and add this view. Then add the following Windows performance counters on the settings page: Logical Disk(*)\% Free Space, Logical Disk(*)\Free
    • Cireson Portal & Cache Builder Management Pack for SCOM 2012 R2 - 2016 (GavSpeed) Updated - This Management Pack provides discovery, monitoring and alerting for all the key supporting components of Cireson Portal instances.
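
    For reference, here is a minimal usage sketch of the cmdlet discussed in the Export Effective Monitoring Configuration item above (the management server and instance names are illustrative):

    Import-Module OperationsManager
    New-SCOMManagementGroupConnection -ComputerName "scom-ms01"

    # Export effective monitor/rule configuration for one monitored computer;
    # note the one-instance-per-type caveat described above
    $instance = Get-SCOMClassInstance -DisplayName "server01.contoso.com" | Select-Object -First 1
    Export-SCOMEffectiveMonitoringConfiguration -Instance $instance -Path "C:\Temp\server01-config.csv" -RecurseContainedObjects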


    Knowledge Opportunity / Mind Growth

    • Silect: MP University, May 9, 2018, from 9 AM to 4 PM CEST - Join Microsoft, Silect and other industry-leading partners for this free one-day event on Management Pack authoring, SCOM and Microsoft Azure. Learn MP authoring best practices, how to leverage Fragments, how to optimize SCOM performance, what's new in SCOM 1801, Azure management and much, much more! Speakers include industry experts Brian Wren, Kevin Holman and Aditya Goda from Microsoft, Jonas Lenntun from Approved Consulting, Matthew Long and Nathan Foreman from Squared Up, Mike Sargent from Silect and more.

    Tip of the Day: Windows Hello, now with Synchronous Certificate Enrollment


    Today's tip...

    In the past, Hello (hybrid scenario) users had to wait thirty minutes after first creating a PIN before they could use it to log on, due to the time it takes for a public key to sync back to the on-premises AD using AAD Connect. If the user tried to log on before the sync-back, they might see the following error message:

    ‘This option is currently unavailable, please try again.’

    Recent improvements to the Hybrid Certificate Trust scenario reduce the wait time for public key sync-back from the original thirty minutes to one minute or less, making it almost instantaneous by comparison. Users can now use their certificate with PIN or biometrics for authentication almost immediately, resulting in a vastly improved experience.

    NOTE: This does not change or affect hybrid key-trust deployments.  Users in these deployments must still wait for the public key to sync to on-premises Active Directory before they can authenticate with their PIN or biometric.

    Come learn about the spring launch for Microsoft Dynamics 365 Apps


    We all know how to respond when a prospect, a customer, or even a friend or family member asks us, "What is Dynamics 365?"

    The best way to grab their attention is to explain how Microsoft uniquely delivers a comprehensive, end-to-end approach to business applications—helping you unify data and relationships, build intelligence into your decision making, and accelerate business transformation.  Of course, that always includes new improvements. So what’s all the hype this spring?

    What’s new in the spring release for Microsoft Dynamics 365 Customer Engagement?

    The Spring Release includes a brand-new Marketing application and a wide array of new capabilities across all of the existing Customer Engagement applications. There is a wealth of information on all of the details in the release notes and the launch videos here.

    We’ll help you get up to speed by reviewing the key updates in our community call. This includes an overview of the new Marketing application, the new Sales Professional license, and highlights from the Blitz event. We’ll also provide you with additional resources to help you as you move forward.

    What’s the new offering with Microsoft Dynamics 365 Business Central?

    Microsoft Dynamics 365 Business Central is designed for businesses looking for an all-in-one business management solution that's easy to use and adapt. Connect your finances, sales, service, and operations to streamline business processes, improve customer interactions, and enable growth. Check out more on that here.

    During the call, attendees will learn about all the resources available to begin building a cloud practice encompassing Business Central. We’ll offer practical advice on where to focus and how to get started—from business development to training and development resources.

    Sign up for the Business Applications Community call that takes place on May 8 at 9 am PT.

     Business Applications Technical Community

    How to Develop a Currency Detection Model using Azure Machine Learning


    This post is authored by Xiaoyong Zhu, Anirudh Koul and Wee Hyong Tok of Microsoft.

    Introduction

    How does one teach a machine to see?

    Seeing AI is an exciting Microsoft research project that harnesses the power of Artificial Intelligence to open the visual world and describe nearby people, objects, text, colors and more using spoken audio. Designed for the blind and low vision community, it helps users understand more about their environment, including who and what is around them. Today, our iOS app has empowered users to complete over 5 million tasks unassisted, including many "first in a lifetime" experiences for the blind community, such as taking and posting photos of their friends on Facebook, independently identifying products when shopping at a store, reading homework to kids, and much more. To learn more about Seeing.AI you can visit our web page here.

    One of the most common needs of the blind community is the ability to recognize paper currency. Currency notes are usually inaccessible, being hard to recognize purely through our tactile senses. To address this need, the Seeing AI team built a real-time currency recognizer which uses spoken audio to identify the currency that is currently in view, with high precision and low latency (generally under 25 milliseconds). Since our target users often cannot perceive whether a currency note is in the camera's view or not, the real-time spoken experience acts as feedback, helping them hold the note until it's clearly visible at the right distance and in the right lighting conditions.

    In this blog post, we are excited to be able to share with you the secrets behind building and training such a currency prediction model, as well as deploying it to the intelligent cloud and intelligent edge. More specifically, you will learn how to:

    • Build a deep learning model on small data using transfer learning. We will develop the model using Keras, the Deep Learning Virtual Machine (DLVM), and Visual Studio Tools for AI.
    • Create a scalable API with just one line of code, using Azure Machine Learning.
    • Export a mobile optimized model using CoreML.

    Boost AI Productivity with the Right Tools

    When developing deep learning models, using the right AI tools can boost your productivity – specifically, a VM that is pre-configured for deep learning development, and a familiar IDE that integrates deeply with such a deep learning environment.

    Deep Learning Virtual Machine (DLVM)

    The Deep Learning Virtual Machine (DLVM) enables users to jumpstart their deep learning projects. DLVM is pre-packaged with lots of useful tools such as pre-installed GPU drivers, popular deep learning frameworks and more, and it can facilitate any deep learning project. Using DLVM, data scientists can become productive in a matter of minutes.

    Visual Studio Tools for AI

    Visual Studio Tools for AI is an extension that supports deep learning frameworks including Microsoft Cognitive Toolkit (CNTK), TensorFlow, Keras, Caffe2 and more. It provides nice language features such as IntelliSense, as well as debugging capabilities such as TensorBoard integration. These features make it an ideal choice for cloud-based AI development.


    Visual Studio Tools for AI

    Building the Currency Detection Model

    In this section we describe how to build and train a currency detection model and deploy it to Azure and the intelligent edge.

    Dataset Preparation and Pre-Processing

    Let's first look at how to create the dataset needed for training the model. In our case, the dataset consists of 15 classes: 14 classes denoting the different denominations (including both the front and back of each currency note) and an additional class denoting "background". Each class has around 250 images (with notes placed in various places and at different angles; see below). You can easily create the dataset needed for training in half an hour with your phone.

    For the "background" class, you can use images from ImageNet Samples. We put 5 times more images in the background class than the other classes, to make sure the deep learning algorithm does not learn a pattern.

    Once you have created the dataset, and trained the model, you should be able to get an accuracy of approximately 85% using the transfer learning and data augmentation techniques mentioned below. If you want to improve the performance, refer to the "Further Discussion" section below.


    Data Organization Structure

    Below is an illustration of one of the samples in the training dataset. We experimented with different options and found that the training data should contain images of the kinds described below, so that the model can achieve decent performance when applied in real-life scenarios.

    1. The currency note should occupy at least 1/6 of the whole image.
    2. The currency note should be displayed at different angles.
    3. The currency note should be present in various locations in the image (e.g. top left corner, bottom right corner, and so forth).
    4. There should be some foreground objects covering a portion of the currency (no more than 40% though).
    5. The backgrounds should be as diverse a set as possible.


    One of the pictures in the dataset

    Choosing the Right Model

    Tuning deep neural architectures to strike an optimal balance between accuracy and performance has been an area of active research for the last few years. This becomes even more challenging when you need to deploy the model to mobile devices and still ensure it is high-performing. For example, when building the Seeing AI applications, the currency detector model needs to run locally on the cell phone. Inference on the phone therefore needs to have low latency to ensure the best user experience is delivered, without sacrificing accuracy.

    One of the most important metrics for measuring the number of operations performed during model inference is multiply-adds, abbreviated as MAdd. The trade-offs between speed and accuracy across different models are shown in the figure below.


    Accuracy vs time, the size of each blob represents the number of parameters
    (Source: https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet)

    In Seeing AI, we chose to use MobileNet because it's fast enough on cell phones and provides decent performance, based on our empirical experiments.

    Build and Train the Model

    Since the data set is small (only 250 images per class), we use two techniques to solve the problem:

    1. Doing transfer learning with pre-trained models on large datasets.
    2. Using data augmentation techniques.

    Transfer Learning

    Transfer learning is a machine learning method where you start off using a pre-trained model and adapt and fine-tune it for other domains. For use cases such as Seeing AI, since the dataset is not large enough, starting with a pre-trained model and further fine-tuning the model can reduce training time and alleviate possible overfitting.

    In practice, using transfer learning often requires you to "freeze" the weights of some of the layers of a pre-trained model (typically the early layers, which capture generic features) and let the rest of the layers be trained normally (so the back-propagation process can change their weights). Using Keras for transfer learning is quite easy - just set the trainable parameter of the layers you want to freeze to False, and Keras will stop updating the parameters of those layers, while still updating the weights of the rest of the layers through back-propagation:


    Transfer learning in Keras
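
    The original post showed the snippet as an image; a minimal sketch of the idea (the layer cutoff and optimizer are illustrative) could look like this:

    from keras.applications.mobilenet import MobileNet
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model

    # Start from MobileNet pre-trained on ImageNet, without its classifier head
    base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

    # Freeze the early, generic feature layers; Keras skips weight updates for
    # any layer whose trainable flag is False
    for layer in base.layers[:50]:
        layer.trainable = False

    # New classifier head: 14 denomination classes + 1 background class
    x = GlobalAveragePooling2D()(base.output)
    predictions = Dense(15, activation="softmax")(x)

    model = Model(inputs=base.input, outputs=predictions)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])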

    Data Augmentation

    Since we don't have enough input data, another approach is to reuse existing data as much as possible. For images, common techniques include shifting the images, zooming in or out of the images, rotating the images, etc., which could be easily done in Keras:


    Data Augmentation
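
    Again, the original snippet was an image; the equivalent Keras code (parameter values are illustrative) might be:

    from keras.preprocessing.image import ImageDataGenerator

    # Randomly shift, zoom and rotate the training images on the fly
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

    train_generator = datagen.flow_from_directory(
        "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical")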

    Deploy the Model

    Deploying to the Intelligent Edge

    For applications such as Seeing AI, we want to run the models locally, so that the application can always be used even when the internet connection is poor. Exporting a Keras model to CoreML, which can be consumed by an iOS application, can be achieved with coremltools:

    import coremltools

    model_coreml = coremltools.converters.keras.convert("currency_detector.h5", image_scale = 1./255)

    Deploying to Azure as a REST API

    In some other cases, data scientists want to deploy a model and expose an API which can be further used by the developer team. However, releasing the model as a REST API is always challenging in enterprise scenarios, and Azure Machine Learning services enable data scientists to easily deploy their models in the cloud in a secure and scalable way. To operationalize your model using Azure Machine Learning, you can leverage the Azure Machine Learning command line interface and specify the required configurations using the AML operationalization module, as shown below:

    az ml service create realtime -f score.py --model-file currency_detector.pkl -s service_schema.json -n currency_detector_api -r python --collect-model-data true -c aml_config/conda_dependencies.yml

    The end to end architecture is as below:


    End to end architecture for developing the currency detection model and deploying to the cloud and intelligent edge devices

    Further Discussion

    Even Faster models

    Recently, a newer version of MobileNet, called MobileNetV2, was released. Tests done by the authors show that the newer version is 35% faster than the V1 version when running on a Google Pixel phone using the CPU (200 ms vs. 270 ms) at the same accuracy. This enables a more pleasant user experience.


    Accuracy vs Latency between MobileNet V1 and V2
    (Source: https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet)

    Synthesizing More Training Data

    To improve the performance of the models, getting more data is the key. While we can collect more real-life data, we should also investigate ways to synthesize more data.

    A simple approach is to take images of sufficient resolution for various currencies, such as the image search results returned by Bing, transform these images, and overlay them on a diverse set of background images, such as a small sample of ImageNet. OpenCV provides several transformation techniques, including scaling, rotation, and affine and perspective transformations.
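
    As an illustration of that pipeline, here is a minimal sketch using OpenCV (the file names, offsets and corner coordinates are illustrative, and it assumes the background image is larger than the note):

    import cv2
    import numpy as np

    note = cv2.imread("usd_20_front.png")           # a currency note image
    background = cv2.imread("imagenet_sample.jpg")  # a random background image

    h, w = note.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Skew the corners to simulate viewing the note at an angle
    dst = np.float32([[30, 10], [w - 20, 40], [w - 40, h - 15], [10, h - 30]])
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(note, M, (w, h))

    # Overlay the transformed note onto the background and save the result
    x, y = 50, 80
    background[y:y + h, x:x + w] = warped
    cv2.imwrite("synthetic_sample.jpg", background)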

    In practice, we train the algorithm on synthetic data and validate the algorithm on the real-world dataset we collected.

    Conclusion

    In this blog post, we described how you can build your own currency detector quite easily using the Deep Learning Virtual Machine and Azure. We showed you how to build, train and deploy powerful deep learning models using a small dataset. And we were able to export the model to CoreML with just a single line of code, enabling you to build innovative mobile apps on iOS.

    Our code is open source on GitHub. Feel free to contact us with any questions or comments.

    Xiaoyong, Anirudh & Wee Hyong
    (Xiaoyong can be reached via email at xiaoyzhu@microsoft.com)

    Tip of the Day: Windows ADK for Windows 10, version 1803


    Today's tip...

    The Windows Assessment and Deployment Kit (Windows ADK) has the tools you need to customize Windows images for large-scale deployment, and to test the quality and performance of your system, its added components, and the applications running on it. The latest version of this kit is available for download below.

    Direct download of the Windows ADK for Windows 10, version 1803 – https://go.microsoft.com/fwlink/?linkid=873065

    What’s new in the Windows ADK for Windows 10, version 1803:

    • New PowerView tool
    • Answer file setting changes
    • MDM: Enhanced device and PC engagement

    References:

    Workshops for teachers, educators and consultants – FREE


    At this time of year, many of you are busy planning the coming school year, and for that you often need inspiration. I have planned the following workshops over the coming weeks leading up to the summer holidays. Participation is free, and we will be travelling around the country with both Minecraft: Education Edition and Office 365.

    Relevant for teachers, educators and consultants!

    I hope you have the desire and the opportunity either to take part yourself or to send colleagues along. There is plenty of value to be gained from the events. You can read more and sign up on our blog.

    The list of events is as follows:

    Minecraft in Lyngby – 16 May and 13 June
    Minecraft in Holstebro and Herning – 23 May and 6 June, respectively
    Office 365 municipality forum – Lyngby 18 May and Viborg 31 May, respectively
    OnsdagsSessioner – weekly Office 365 workshops in Lyngby

    As always, if you have any questions or comments, just write to us or give us a call 😊

     

     


    Dev Chat has been expanded to include Dynamics 365 scenarios!


    MPN Dynamics 365 app developers can now leverage Dev Chat to receive development tips from Microsoft engineers at NO COST. In addition to the Azure and Office 365 scenarios already covered by Dev Chat, you can now receive technical guidance on Dynamics 365 (Sales and Customer Service apps) scenarios, including but not limited to architecture, design, deployment, implementation and migration.

    View the full list of services and scenarios covered and start a live chat now at aka.ms/DevChat.

    New Dynamics 365 technical scenarios now covered:

    • Sales and Customer Service in Dynamics 365
    • Customization
    • Development assistance, for example, develop with SDK or API, manage customer data, extend existing features, authentication

    Unsupported Dynamics 365 services:

    • Marketing, Field Service, Project Service Automation, Customer Insights, Retail, Talent
    • Finance and Operations (coming in July)

    General topics covered:

    • Getting started questions
    • Setup for development (service configuration and deployment)
    • Get up and running with your solution
    • Generic service capability
    • Architecture and design consult on solutions
    • Migration from on-premises service
    • Publishing to Marketplace, Office Store, App Store, etc.
    • Sample code review and light proof-of-concept
    • Publishing Custom Applications and Add-ins
    • Partner Center API

     

    Don’t forget to check out the full suite of webinars and consultations available for the Application Innovation technical journeys at aka.ms/AzureAppInnovation or aka.ms/O365AppInnovation.

    This blog has moved to Tech Community!


    In an effort to provide you with a single location for announcements and technical blog posts that also provides a channel for discussion with your peers and our product and engineering teams here at Microsoft, the Windows IT Pro blog has moved to the Microsoft Tech Community.

    Please bookmark and note the new location: https://aka.ms/windowsforitpros.

     

    May 2018 Non-Security Office Update Release


    Listed below are the non-security updates we released on the Download Center and Microsoft Update. See the linked KB articles for more information.

     

    Office 2010

    Update for Microsoft Outlook 2010 (KB4022144)

     

    Office 2013

    Update for Microsoft Office 2013 (KB4018389)

    Update for Microsoft OneNote 2013 (KB4011281)

    Update for Microsoft Outlook 2013 (KB4018376)

    Update for Microsoft Project 2013 (KB4018379)

    Update for Skype for Business 2015 (KB4018377)

     

    Office 2016

    Update for Microsoft Office 2016 (KB3203479)

    Update for Microsoft Office 2016 (KB4011634)

    Update for Microsoft Office 2016 (KB4018318)

    Update for Microsoft Office 2016 (KB4018369)

    Update for Microsoft Office 2016 (KB4022133)

    Update for Microsoft OneNote 2016 (KB4018321)

    Update for Microsoft Outlook 2016 (KB4018372)

    Update for Microsoft Project 2016 (KB4018373)

    Update for Skype for Business 2016 (KB4018367)

     

    How to build a strong relationship in the modern workplace


    It's migration season in the world of business.

    Customers are preparing to leave their existing IT environments. For some, this will not be their first migration. They'll have moved between devices and applications many times in their lives. But for most, there lies ahead a daunting journey. Ahead, they hope, is the modern workplace they've heard so much about. All they need is a guide.

    Enter the partner. You're strong, wise, and you know the lie of the land. But you can't survive on your own. You know that it's costly to find new customers - which is why you do whatever you can to hang on to those already in your pack. If an existing customer needs a guide, you'll fight to make sure it's you.

    The customer and the partner. You need each other - your relationship is symbiotic. And it faces few tests greater than a migration. Because once the move is done, and the customer is settled, what then?

    How do you keep the relationship going?

    For your customers, the modern workplace is a destination. It's a smart, secure, simple way of working anywhere. And it's exactly what they're looking for.

    For you, the modern workplace is an opportunity. With new technology comes plenty of new ways to add value. The trick to keeping the relationship going is to make sure customers know you're an expert in this space - and that you've only just started to help them succeed.

    So, what else can you do for your customers? Here are just a few ideas.

    Make management easy

    It's quick and easy (and sometimes even self-service) for customers to add new devices to their modern workplace. But they'll all want to move at their own pace. Join them in the planning stage to stop the move and its management from getting in the way of their day-to-day work.

     

    Keep everything secure

    Your customers don't need to get distracted by security updates. In the modern workplace, they happen automatically. And if customers need to configure any special security policies, your knowledge of the IT makes them easy to build and implement - so no threats slip through.

     

    Stay on top of the latest tech

    This is one of the best bits of the modern workplace. Everyone can get their hands on the latest tools, all the time, anywhere. It's even smoother when you manage this process for your customers - so updates don't impact users while they're working, and it's business as usual for compliance and security.

     

    Really know your stuff

    What's really happening in your customers' businesses? With analytics, you can have all the answers. So it's easy to spot areas for improvement, drive deployment, and keep customers up to date. When you prove you really know their business, that's a relationship they'll want to hang on to.

     

    Better together

    Even after the migration is done, customers keep looking for new, better ways of working. Even after they've moved to a complete, intelligent solution like Microsoft 365, they'll want a partner that can take them further. There are lots of ways you can make their environment and their IT smarter, more secure, and simpler.

    Download the playbook to see them all. It'll tell you more about your modern workplace opportunity, the conversations you can start, and the value you can add to your customers' businesses - long after they've moved to Microsoft 365.

    Microsoft Cloud App Security log collector + OMS = Docker container monitoring


    Need a quick method to monitor Docker containers? How about monitoring the Docker container that is utilized for automatic log upload for Microsoft Cloud App Security? If so, try out the Microsoft OMS Container Monitoring Solution to monitor your Docker containers, including the continuous log collectors using Docker in Microsoft Cloud App Security!

    Did you know that Microsoft Operations Management Suite (OMS) offers many other management and monitoring solutions, including update management for Windows, Surface Hub monitoring, Security and Audit information and many more? For more details please visit: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-add-solutions

    If you’re utilizing Microsoft Cloud App Security in your environment today and would like to learn more about automatic log upload for continuous Cloud App Security reports please visit: https://docs.microsoft.com/en-us/cloud-app-security/discovery-docker

     

    The following walks through setting up the Container Monitoring Solution in Azure to monitor a Docker container used for Cloud App Security automatic log upload hosted on an Azure VM.

    Requirements

    Assumptions for this post

     

    Let’s get started…

    Here’s a look at the Ubuntu VM with Docker used for Cloud App Security automatic log upload:


    If you have an Azure subscription, log in, select "New" from the upper left, and search for "container monitoring solution":


    Select Container Monitoring Solution and Create to add it to your OMS workspace:


    Once the instance of the Container Monitoring Solution is added, sign on to the host where the containers are deployed and follow the instructions to install the OMS agent used for monitoring the host: https://github.com/Microsoft/OMS-docker#supported-linux-operating-systems-and-docker

     

    You’ll run a script that is discussed in the link above to install the OMS agent:


     

    Once the installation is complete, navigate back to the OMS admin portal and look for a new tile called "Container Monitoring Solution":


     

    Select the tile and view the status of the containers on the host:


     

    From the information provided, I can see I have a failure with my Cloud App Security log collector (I named the container "LogCollector"):


    When we drill down into the failure, we can see which container is failing, along with other details:


     

    Monitoring Docker containers using Microsoft OMS, including the containers used for log collection for Cloud App Security, was really simple, and I encourage everyone to deploy OMS today.

    Case of the Hit or Miss Windows 10 Servicing Fail


    Hello All,

    I hope this finds everyone well and gearing up for summer!  As Windows 10 deployments accelerate and you successfully tackle bare metal and legacy-to-UEFI conversion/refresh scenarios, we also find ourselves in a third scenario: servicing Windows 10.  Servicing is a new approach to updating Windows and has been introduced and discussed at length in a number of different forums - TechNet, Ignite, blogs, MSDN, etc.  As we approach Windows 10 version 1803, by now most of you should have your servicing set up, tested, and likely have been through one or two rounds of servicing.  I wanted to take a moment to share with you something we found when servicing Windows 10 to version 1709, how we analyzed the problem, and what we did to work around it.

    The scenario is a mix of Windows 10 machines running versions 1511 and 1607 that are failing to service to 1709 via SCCM.  We set out to service the 1511 machines initially, where we saw some level of success and, interestingly, some level of failure; enough failures to raise many eyebrows.  Let's say it was a 60/40 ratio, or a 40% failure rate; that's pretty high, which usually indicates a systemic problem common among the failures.  But alas, we are not in the business of speculation!  We had these failures bubble up and it was time to roll up the sleeves, dig in, and do some post mortem to understand why.

    Well as we all know, what we need in our life at this point are logs, logs, logs, and more logs!  But where are the logs for servicing?  Although the information is out there, it is surprisingly not so easy to find.  If you haven't already seen this page, you'll want to head over, check it out, and bookmark it.  Tons of great information in here with different levels of content for the beginner to the seasoned IT Pro.  Understanding how servicing works is going to help give you a good foundation on which to troubleshoot these types of failures.  There is quite a bit to take in on the aforementioned page, suffice it to say I will provide some cliffs notes here (which are not a replacement for reading that content ; )).

    The Process

    Windows 10 servicing is broken down into 4 phases, or 5 if you're unlucky enough to experience an uninstall/rollback.  It's a good idea to read through and understand what each phase is doing, where it takes place, and where the logs for each of these phases are located.  Also a key here in finding out what logs were generated and where, is to understand how many reboots have taken place.  Depending on what logs are generated (and the content of them), you can deduce which phase the servicing operation failed in.  The servicing process reboots once between each phase.  This will make more sense later.

    Phase 1.  DownLevel - This phase runs in the source OS; this is where all of the install files that are needed are downloaded and prepared for installation.  During this phase we mount the SafeOS WIM file, AKA the WinPE environment, for use after the upcoming (READ: 1st) reboot.  After the SafeOS WIM is mounted and updated for use on the system, we dismount it, apply BCD settings making it the default boot entry, suspend BitLocker, and reboot the machine.

    Reboot.

    Phase 2.  SafeOS - After we come back from the first reboot we are now booting into the SafeOS WIM (WinPE) that was prepared in phase 1.  Once the machine enters WinPE this is where the bulk of the work to service the operating system is done, AKA where the magic happens.  There are many, many operations being done in this phase.   Some of the key operations are: Creating an OS rollback, creating a recovery partition, copying/moving the source WIM (target OS) to the recovery partition, applying the OS WIM, applying drivers, adding the new OS boot entry into BCD, and setting the SafeOS WIM as the default boot entry in BCD.  Once this phase completes successfully we have applied the new OS, and setup the machine to reboot back into the SafeOS.

    Reboot.

    Phase 3.  First Boot - We are now coming back from the second reboot of the servicing process.  During the First Boot phase we boot back into the SafeOS, new BCD entries are created for the new OS, settings are applied, sysprep is run, and data is migrated.  There is quite a bit going on here during this phase as well.

    Reboot.

    Phase 4.  Second Boot - During the final phase more settings are applied and more data is migrated, system services are started, and the out of box experience (OOBE) phase executes.  The culmination of the process is reaching the start screen and eventually the desktop.

    Phase 5.   Rollback.  If you've reached this phase, something has gone wrong and your machine is rolled back to the previously existing operating system version.  This implies that somewhere along the line the machine experienced a fatal error and could not continue.  Two logs are of immediate interest if you experience a rollback:

    C:\Windows.~BT\Sources\Rollback\setupact.log

    C:\Windows.~BT\Sources\Rollback\setuperr.log

    These four main phases are documented on the Windows 10 Troubleshoot-Upgrade-Errors page, and a nice graphic is included at the bottom of the page.  For the first three phases you can actually follow along with each item listed in the graphic on the upgrade errors page by looking at C:\Windows.~BT\Sources\Panther\setupact.log to see which of the first three phases completed successfully.  The page also gives you an idea of where errors are typically seen and what kinds of things can cause them.

    The Problem

    Fairly widespread reports of machines taking the upgrade and eventually rolling back began to trickle in.  Results may vary, but on average the servicing process can take between 1-3 hours to complete.  The time it takes to complete depends on a number of factors: network uplink speed, processor spec, amount of RAM, type of HDD, etc.  In any event, the time the servicing upgrade took was compounded by the time the rollback took to revert the machine to the previous OS.  You can get an accurate count of overall servicing time and rollback time by looking at the setupact.log files.  In some instances the rollback of machines was still cooking a few hours into the servicing process.

    Why?

    First let me state that there are tons of logs generated during the servicing process; xml, etl, log, evtx, text files, etc.  All of them contain information about what happened during the servicing process, some of them are easy to consume and crack open, some of them aren't as friendly.  Review all of the logs, mount the .evtx logs in the event viewer, review the flat text and xml files, and to get into those pesky ETL files you can try converting them to CSV or XML with tracerpt:

    tracerpt.exe setup.etl -of csv -o setup.etl.csv

    So we have "all the logs."  Let me start by saying that setupact.log and setuperr.log are your friends.  They are your go-to.  They likely have the information you are looking for or can give you enough information to point you in the right direction or to another log.

    After the dust settled we began to look at a sampling of the machines, effectively scraping the C:\Windows.~BT\Sources and C:\Windows\Panther directories to a file share for analysis.  Since the following log (C:\Windows.~BT\Sources\Panther\setupact.log) details the first three phases of the servicing process, that's where we want to start.  We reviewed the log and, lo and behold, all of the first three phases completed successfully!  One thing to note and key in on in the log is that SETUPPLATFORMEXE reports Global servicing progress as well as Phase progress.  You'll see entries similar to the following:

    So we were able to quickly narrow down the scope of the failure to one specific phase: Phase 4.  Remember, Phase 4 occurs in the new target operating system, with all drivers and services starting up and running for the first time, buttoning up things like settings and data migration tasks, reaching the OOBE phase, and finally (hopefully) the desktop.  Only we never reached the desktop.  Since we failed in Phase 4, which takes place in the new target OS, a rollback occurred and logs were created in the following directory: C:\Windows.~BT\Sources\Rollback.  Cracking open our go-to log we see the following: a rollback has occurred in Phase 4 because of a STOP 0x50 bugcheck, which is PAGE_FAULT_IN_NONPAGED_AREA.  This stop code typically indicates that a driver attempted to read or write to an invalid location in memory; in this particular case it was a read operation.  In the event of a bugcheck a kernel mini-dump is also generated in C:\Windows.~BT\Sources\Rollback.  The dump only contains stack data.  In this case we were not able to have the dump analyzed.  Don't fret, we are still hot on the trail.  Notice about halfway down where it shows "Crash 0x00000050 detected"; the next few lines show information extracted from the dump - we can actually see a representation of the stack and the frames in the log.  Frames 6-9 are in the mfenlfk.sys driver.


    Continuing down the log we see that Windows tried to recover the installation 3 times but bug checked each time with the same stop code, with the same driver in the middle of the stack.

    Eventually after hitting the max recovery attempts, Windows begins the process to rollback the OS:

    Now we've zeroed in on the driver in question, which after review is a network security driver used by McAfee software, with a time/date stamp that is pretty old.  We engaged McAfee and started an inquiry on the driver, which was out of date (unsupported) for the version of Windows we were trying to service to (1709).  What we found and repro'd was that even though the system had the latest versions of all the McAfee software installed, this old driver seemed to hang around on the system.  Turns out this isn't so good for servicing.

    Moving Past

    With all eyes on this old driver, we discussed options to rid the system of it.  How can we get rid of this driver without impacting the system negatively?  What if the wrong driver is removed?  As you can see, the impact of making a mistake here could be potentially catastrophic on a given box.  After much deliberation and reviewing our documentation on the driver store, we arrived at the conclusion that the operating system fundamentally supports removing the driver from the store.  Here is a snip of PowerShell (add your logging, customize, etc.) we used to interrogate the driver store, search for the very specific driver in question, and remove it:
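
    The original snippet was posted as an image; a minimal reconstruction of the approach it describes (the INF file name and scratch path are illustrative assumptions) might look like this:

    # Lines 1-3: point DISM's scratch directory away from the default temp
    # folder, in case McAfee Access Protection is blocking access to it
    $scratch = "C:\DrvScratch"
    New-Item -Path $scratch -ItemType Directory -Force | Out-Null
    $drivers = Get-WindowsDriver -Online -ScratchDirectory $scratch

    # Identify the stale driver by its original INF name; the published name in
    # the "Driver" property (oemNN.inf) differs from machine to machine
    $target = $drivers | Where-Object { $_.OriginalFileName -like "*mfenlfk.inf*" } | Select-Object -First 1
    if ($target) {
        # Newer Windows 10 builds; on 1511 (10586) use the legacy "pnputil.exe -f -d oemNN.inf"
        pnputil.exe /delete-driver $target.Driver /force
    }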

    To expand on this a little: when you query the driver store, all drivers are returned.  When you find the one you want to remove, you have to remove it by the value of the "Driver" property as seen below.  Use caution: just because you find the value on one machine as oem1.inf does NOT mean it will be the same value on another machine.  The Driver property value is different on each machine, even though the OriginalFileName value is the same.  For this reason we have to use logic to identify the driver, grab the "Driver" property, and feed that to our command to remove the correct driver.  Tricky (1st edition).  Also note lines 1-3: if your Get-WindowsDriver cmdlet returns an error, you may need that workaround when McAfee Access Protection is enabled and is blocking access to the temp folder.  Tricky (2nd edition).


    For the sake of time we used pnputil to remove the driver from the store.  Of note is that the command-line switches for pnputil vary: 1511 (build 10586) uses the legacy switches, while newer builds of Windows 10 use the newer switches.  Tricky (3rd edition).  We placed this as the first item in the servicing task sequence, then called a reboot before the servicing step began.  We tested this on a number of failed machines and they all took the servicing upgrade successfully.  This was quite the long road from the initial discovery, to troubleshooting, to root cause, and eventually to finding a work-around.  I hope sharing this with you allows you to better understand the servicing process and how to troubleshoot failures.  I would like to reiterate that the following links provide good information on the topic:

    Resolve Windows 10 Upgrade Errors:

    https://docs.microsoft.com/en-us/windows/deployment/upgrade/resolve-windows-10-upgrade-errors

    Windows 10 Log Files

    https://support.microsoft.com/en-us/help/928901/log-files-that-are-created-when-you-upgrade-to-a-new-version-of-window

    Windows 10 SetupDiag is a new tool, recently released, that can also be used to troubleshoot servicing failures.  It was not available at the time we were working this failure, so we didn't get to use it.  Check it out!

    https://docs.microsoft.com/en-us/windows/deployment/upgrade/setupdiag

    Have a great weekend!

    Jesse

    SCOM Management Server grayed out with event description “A module of type “System.DataSubscriber” reported an error 0x80FF0003″


    Posts in this blog are provided "AS IS" with no warranties and confer no rights. Use of included script samples is subject to the terms specified in the Terms of Use. Are you interested in having a dedicated engineer that will be your Microsoft representative?

     

    Let me start with something generic: my Management Server is in a grayed-out state - what do I do next?

    I will start by running the SQL query below against the Operations Manager database.

    --Replace the name SCOMMS with the name of your Management Server
    select BME.Path,AV.ReasonCode,AV.TimeStarted,AV.TimeFinished from AvailabilityHistory AV
    join BaseManagedEntity BME on AV.BaseManagedEntityId=BME.BaseManagedEntityId
    where BME.FullName like '%SCOMMS%'
    order by AV.TimeStarted desc

    Here is the output from my lab.

    The reason code description are given below

    17 The Health Service windows service is paused.
    25 The Health Service Action Account is misconfigured or has invalid credentials.
    41 The Health Service failed to parse the new configuration.
    42 The Health Service failed to load the new configuration.
    43 A System Rule failed to load.
    49 Collection of Object State Change Events is stalled.
    50 Collection of Monitor State Change Events is stalled.
    51 Collection of Alerts is stalled.
    97 The Health Service is unable to register with the Event Log Service. The Health Service cannot log additional Heartbeat and Connector events.
    98 The Health Service is unable to parse configuration XML.

     

    In our case, the Reason Code is 43 which says "A System Rule failed to load".

    If you look at the eventvwr on the Management Server, you will see these events.

    These events will definitely tell you that some rules are unloaded.  However, in this case that did not really give us an idea about the problem.  I have worked many cases where it right away gives the rule name and the issue.  In our case, the rule name is a Data Warehouse collection rule, so I did not see a need to check it at this point.

    I looked through the eventvwr and found another interesting event.

    I checked the status of the server SQL2016 in my console and found that the server has an entry under both Agent Managed and Agentless.  The only way I can think of ending up in such a scenario is to add the server as agentless managed, then install the agent manually and approve it from Pending Management.

    And since it is not supported/recommended to have the same server under agentless and agent managed at the same time, we ended up in this situation.

    I deleted the entry from Agentless Managed, and everything went back to normal and healthy.

    So in order to avoid such a situation, please make sure you do not have the option "Automatically approve new manually installed agents" selected in the SCOM console.  And if you have a lot of agentless managed computers, do a check before approving them from Pending Management.  You can use the PowerShell cmdlets below to do a quick check.

    # List all agentless managed computers in the management group
    Get-SCOMAgentlessManagedComputer | select computername

    # Check whether a specific server (SQL2016 here) is already agentless managed
    Get-SCOMAgentlessManagedComputer | where {$_.computername -eq 'SQL2016'} | select computername


    Support-Info: (FIMMA): failed-creation-via-web-services


    PRODUCTS / COMPONENTS / SCENARIOS INVOLVED

    • Microsoft Identity Manager 2016
      • Synchronization Service - FIM Service Management Agent
      • Service and Portal

    PROBLEM SCENARIO DESCRIPTION

    • Running an Export Run Profile on the FIM Service Management Agent produces the Run Status of stopped-server.  We want to understand the best way to clear out data in the FIM Service Management Agent connector space to assist with resolving this issue.

      NOTE

      To learn more about the different Run Profile statuses returned by the WMI RunStatus property when executing Run Profiles, review this MSDN information: https://msdn.microsoft.com/en-us/library/windows/desktop/ms699322(v=vs.100).aspx

    FIM SERVICE MANAGEMENT AGENT ERRORS

    CAUSE (failed-creation-via-web-services):

    • The Connector Space for the FIM Service Management Agent was deleted, and data from the Service and Portal was not reimported into the FIM Service Management Agent Connector Space.  This left data in the Service and Portal that the FIM Service Management Agent staged as Pending Export Adds.
    NOTE One of the causes of this issue was the deletion of the FIM Service Management Agent connector space.  The recommendation is to review information around this topic prior to deleting a connector space.  Find more information here:

     

    RESOLUTION (failed-creation-via-web-services):

    1. Remove all the Users from the Service and Portal (a hedged script sketch follows the disclaimer below)
    NOTE DISCLAIMER:
    It is extremely important to note that this script will delete objects in the Service and Portal.  Once a user object is removed, that user will not have access to the Portal until it is populated again.

    Additionally, we highly recommend testing any process like this in a staging and/or testing environment prior to executing it in production.  This is to safeguard your data.

    Once you are ready to execute, be certain that you have a verified backup of your backend FIMService and FIMSynchronizationService databases for disaster recovery purposes.
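
    A minimal PowerShell sketch of such a bulk deletion using the FIMAutomation snap-in is shown below.  The excluded account names, and the ImportState value 2 for Delete, are assumptions drawn from common FIM scripting patterns rather than from this case; verify both in a lab before running anything.

    Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue

    # Export every Person object currently in the Service and Portal
    $users = Export-FIMConfig -OnlyBaseResources -CustomConfig "/Person"

    foreach ($user in $users) {
        # Skip built-in accounts so you do not lock yourself out (assumed names)
        $displayName = ($user.ResourceManagementObject.ResourceManagementAttributes |
            Where-Object { $_.AttributeName -eq 'DisplayName' }).Value
        if ($displayName -in 'Administrator', 'Built-in Synchronization Account') { continue }

        # Stage a Delete request for the object and submit it to the FIM Service
        $import = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportObject
        $import.ObjectType             = 'Person'
        $import.SourceObjectIdentifier = $user.ResourceManagementObject.ObjectIdentifier
        $import.TargetObjectIdentifier = $user.ResourceManagementObject.ObjectIdentifier
        $import.State                  = 2   # assumed: 2 = Delete in ImportState
        $import | Import-FIMConfig
    }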

     

    1. Ensure that the Service and Portal are clear of all EREs
    2. Execute a Full Import (Stage Only) on the FIM Service Management Agent
      • This will bring in all of the Synchronization Rules into the FIM Service Management Agent Connector Space.
    3. Execute a Full Synchronization on the FIM Service Management Agent
    4. Review Pending Exports to understand the data that you will be exporting.
      • You can do this through Search Connector Space > Pending Exports
    5. Once Pending Exports is confirmed, proceed with running an Export on the FIM Service Management Agent
      • From the Actions menu, select Run and then Export
    6. Once the Export is finished, execute a Delta Import (Stage Only) to confirm the Exported Changes

    ADDITIONAL INFORMATION

    Deletion of connector spaces

    Management Agent Run Status

    Other Information

     

    Microsoft Artificial Intelligence: A platform for all information worker skill set levels


    Bart Czernicki, Technical Architect - Advanced Analytics & AI, Azure SaaS ISV Solutions

    The Microsoft Artificial Intelligence platform is a comprehensive software ecosystem that allows anyone to become a professional AI developer. At a high level, this platform consists of three key pillars: services, infrastructure, and tools. Microsoft offers these across the Azure public cloud, hybrid, and on-premises environments. In this brief article, we’ll walk through the key unique value of the Microsoft AI platform: empowering organizations with varying data science, machine learning (ML), and advanced analytics skill sets to dramatically accelerate the development of AI solutions. Be sure to check out more details on the Microsoft AI Platform here.

    Pillars of the AI platform

    Above, you can see highlights of the exclusive functionality of the aforementioned Microsoft AI platform pillars. The first pillar is AI Services, which contains a portfolio of developer APIs. These APIs are exposed in a variety of ways that allow crafting of custom ML and data science pipelines, from pre-made, ready-to-use endpoints (e.g., a facial recognition REST API) to advanced functionality (e.g., Azure ML Services). These AI Services are offered as both software (SaaS) and platform (PaaS) Azure cloud offerings, abstracting away the complexity of managing servers, security, compliance, etc.
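
    To make the "ready-to-use endpoint" idea concrete, here is a minimal sketch of calling the Face API detect endpoint from PowerShell. The region, key, and image URL are placeholders, not values from this article:

    # Call a pre-built Cognitive Services endpoint (Face API) -- no model
    # training or hosting required; one POST returns face data as JSON
    $endpoint = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect'
    $key      = '<your-subscription-key>'
    $body     = @{ url = 'https://example.com/photo.jpg' } | ConvertTo-Json

    Invoke-RestMethod -Method Post -Uri $endpoint `
        -Headers @{ 'Ocp-Apim-Subscription-Key' = $key } `
        -ContentType 'application/json' -Body $body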

    However, occasionally software architecture governance demands the flexibility of fine-tuning the deployments in great detail. This is where the second pillar, AI Infrastructure, comes in. It provides the core compute, storage, and networking options to build and serve all types of AI workloads. For example, Microsoft AI infrastructure includes the option to train complex neural network architectures faster, using specialized GPU hardware.

    But the process doesn’t end there. An AI developer needs a professional suite of developer software to materialize and glue all of the AI ideas. This functionality is provided in the third and final pillar, AI Tools. AI Tools provides a comprehensive set of IDEs, SDKs, deep learning frameworks, and cross-platform tooling to craft AI software as they choose. This is generally the software UIs and code frameworks that AI developers will be looking at on their screens.

    Empowering all AI skill sets

    Now that you’ve been introduced to the Microsoft AI Platform, let’s visualize how it can help any type of developer. The Artificial Intelligence paradigm has been around for quite some time; however, it has only recently gained tremendous traction in the software domain. After all, what software team doesn’t want an automated intelligent system that can work 24/7 and doesn’t mind working weekends? Even with the demonstrated value of AI systems, some executive decision makers are apprehensive about jumping into AI. Certain organizations feel they can’t implement AI software because they don’t have super-advanced skills, nor do they employ PhD statisticians. Conversely, mature advanced analytics organizations that have been doing statistical modeling for decades sometimes have the impression that AI conventions don’t warrant a shift from their proven practices.

    As you saw earlier, the Microsoft AI platform offers a wide variety of advanced analytics functionality. Let’s look at this functionality as an “ease of AI use” and customization cross-section spectrum. The diagram below shows Microsoft AI functionality that’s easier to invest in and offers less control on the left-hand side, and functionality that requires a deeper AI investment while offering full operational control. As you can see, there are quite a lot of service and tool offerings across this cross-section.

    As you look at the above diagram, note two key points: the “ease of AI use” spectrum (shown in purple) and customization options, split between “Consume” and “Build your own.” This breakdown helps align AI service offerings with the organization’s analytics comfort level.

    Let’s look at some use cases using this diagram. For example, if an organization just wants to leverage production-ready models, they would ideally look at the left-hand side “Consume” tree nodes and gravitate to Cognitive Services or pre-trained CNTK/TensorFlow models. These pre-trained models can be used out-of-the-box to augment existing software very easily. However, if an organization wants to build their own models using their own data, the Microsoft AI platform provides wide spectrum of easy-to-use AI. Looking at the “Build your own” node above, teams that have basic data science skills will naturally gravitate to services like Azure Machine Learning Studio. Conversely, a team of PhD statisticians will be on the far-right side, looking for the platform to help them build complex neural network architectures.

    Next steps

    If you were apprehensive about getting started with AI, I hope this blog post showed how infusing your software with AI can be frictionless with the Microsoft Artificial Intelligence platform. The Microsoft AI platform provides a wide variety of services, infrastructure, and tools that help all kinds of information worker personas. Whether you are a graduate student, data steward, application developer, or a professional data scientist, the Microsoft AI platform has functionality for you!

    In part 2 of this article, we’ll explore how this wide complexity of AI services can be allocated to different AI information worker personas with multiple detailed examples.

    Register for the Data & AI Community call on May 4 at 10 am PT

    Data & AI Partner Community

    Unmanaged Device Access Policies are Generally Available


    In March 2017 we introduced device-based policies for SharePoint and OneDrive that enable administrators to configure tenant-level policies.

    Device-based access policies for SharePoint and OneDrive help administrators ensure corporate data is not leaked onto unmanaged devices such as non-domain joined or non-compliant devices by limiting access to the content to the browser, preventing files from being taken offline, printed, or synchronized with OneDrive.

    On September 1st, 2017 we continued to evolve our conditional access investments to address the ever-changing security landscape and business needs by introducing new levels of granularity with conditional access that allow administrators to scope device-based policies at the site collection level.  In addition, this granular policy can be configured to allow users on unmanaged devices to edit Office Online documents in the browser.

    Today we’re pleased to say that these policies are now available worldwide, in addition to new site-scoped policies that are available with this update.  This is our major milestone in the conditional access policy journey in SharePoint and OneDrive.

    In a world that’s mobile, social, and about getting things done you’re expected to manage a growing number of devices, both managed and unmanaged that can access corporate content.  The corporate boundary as a result, has shifted from the firewall to the employee.  The need for protecting access from the unmanaged devices is ever increasing. This unmanaged device access policy is the right solution for your need.

    What’s new in this update?

    In this update to device-based policies at the site collection level you can:

    • Block users from accessing sites or the tenant from unmanaged devices
    • Allow users to preview only Office file types in the browser
    • Allow Office file types to be editable or read-only in the previewer
    • Set access from unmanaged devices to full, limited, or blocked on a per-site basis, based on the sensitivity of a site's contents

    In the demonstration above, the Tenant is configured with a permissive device access policy, allowing full access from unmanaged devices, including desktop apps, mobile apps, and browsers.  The Marketing site inherits the policy configured at the Tenant; the Legal site, however, has a policy configured less permissively than the Tenant's.  In addition, members of the Marketing site, while limited to browser-only access on unmanaged devices, can continue to edit content they have access to, providing a seamless collaborative experience.

    Configuring Device Access Policies Overview

    For complete instructions on enabling device-access policies refer to the support documentation at https://support.office.com/en-us/article/Control-access-from-unmanaged-devices-5ae550c4-bd20-4257-847b-5c20fb053622?ui=en-US&rs=en-US&ad=US.

    Unmanaged device access policies can be configured with SharePoint Online Management Shell.

    Before you get started using PowerShell to manage SharePoint Online, make sure that the SharePoint Online Management Shell is installed and you have connected to SharePoint Online.

    NOTE

    The Tenant-level device-based policy must be configured to Full Access prior to configuring site-scoped policies.
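
    If you need to verify or reset that tenant-level default first, here is a minimal sketch (assuming an existing Connect-SPOService session; that Get-SPOTenant reports the ConditionalAccessPolicy setting is an assumption to verify in your shell version):

    # Check the current tenant-wide policy, then reset it to Full Access
    Get-SPOTenant | Select-Object ConditionalAccessPolicy
    Set-SPOTenant -ConditionalAccessPolicy AllowFullAccess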

    # Connect to your SharePoint Online admin center
    Connect-SPOService -Url https://<URL to your SPO admin center>
    # Grab the site collection you want to scope the policy to
    $t2 = Get-SPOSite -Identity https://<Url to your SharePoint online>/sites/<name of site collection>
    # Apply the device-based policy at the site level
    Set-SPOSite -Identity $t2.Url -ConditionalAccessPolicy AllowLimitedAccess

    The following parameters can be used with -ConditionalAccessPolicy AllowLimitedAccess for both the organization-wide setting and the site-level setting (a combined example follows the list):

    -AllowEditing $false Prevents users from editing files in the browser and copying and pasting file contents out of the browser window.

    -LimitedAccessFileType -OfficeOnlineFilesOnly Allows users to preview only Office files in the browser. This option increases security but may be a barrier to user productivity.

    -LimitedAccessFileType -WebPreviewableFiles (default) Allows users to preview Office files and other file types (such as PDF files and images) in the browser. Note that the contents of file types other than Office files are handled in the browser. This option optimizes for user productivity but offers less security for files that aren't Office files.

    -LimitedAccessFileType -OtherFiles Allows users to download files that can't be previewed, such as .zip and .exe. This option offers less security.
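
    Putting these together, a hedged site-level example (the site URL is a placeholder, echoing the Legal site from the demonstration above):

    # Limited access, read-only in the browser, Office files only in the previewer
    Set-SPOSite -Identity https://contoso.sharepoint.com/sites/Legal `
        -ConditionalAccessPolicy AllowLimitedAccess `
        -AllowEditing $false `
        -LimitedAccessFileType OfficeOnlineFilesOnly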

    External users most likely use unmanaged devices, so their access will also be controlled when you use conditional access policies to block or limit access from unmanaged devices.  If users have shared items with specific external people (who must enter a verification code sent to their email address) and you want those external users to access the shared items from their devices, you can exempt them from this policy by running the following cmdlet.

    Set-SPOTenant -ApplyAppEnforcedRestrictionsToAdHocRecipients $false

    Licensing

      1. This feature has a dependency on Azure Active Directory conditional access policies.
      2. To learn more about how Azure conditional access policies work, refer to https://docs.microsoft.com/en-us/azure/active-directory/active-directory-conditional-access-azure-portal.

    Resources

    As workforces become more globally distributed and productivity extends beyond the firewall, device-access policies let you provide a seamless collaborative experience across an array of devices, both managed and unmanaged, while keeping your most sensitive content protected.  To learn more about security and compliance with SharePoint & OneDrive visit https://aka.ms/SharePoint-Security.

    Transformation, momentum, and unprecedented opportunity ahead for Microsoft partners [Updated 5/2]


    (This article is a translation of Transformation, momentum, and unprecedented opportunity ahead for Microsoft partners, published on the Microsoft Partner Network blog on January 24, 2018. For the latest information, please refer to the source page.)

     

    The unprecedented business opportunity created by digital transformation

    $20 trillion. That figure, equivalent to more than 20% of global gross domestic product, is IDC's estimate of the economic impact digital transformation will generate over the next five years. For companies ready to begin business innovation built on cloud technology, it points to an enormous business opportunity. By devising new business approaches and reinventing how it engages with partners, Microsoft is helping partners take full advantage of this moment.

    Last July, Microsoft significantly restructured how it does business with partners and set out to transform its internal sales operations. We consolidated partner-facing roles into a newly formed organization, One Commercial Partner, and narrowed our focus to three areas:

    • Building solutions together with partners
    • Helping bring those solutions to market
    • Co-selling with partners

     

    The aim of this change is to deliver more innovative, higher-impact solutions to our shared customers across four solution areas: Modern Workplace, Business Applications, Applications and Infrastructure, and Data and AI. The effort is still young, but together with our partners we are off to a strong start.

     

    Results and momentum across the partner ecosystem

    More than 95% of Microsoft's business comes from a strong, continually evolving partner ecosystem. In last fall's earnings announcement (in English), the annualized revenue run rate for our commercial cloud business surpassed the $20 billion goal we set two years earlier. That result was possible only with our partners, and the momentum has not slowed.

     

     

    In 2018 this momentum will accelerate further into another year of success. The blog post "Looking ahead: business opportunities in 2018" shares perspectives from Microsoft leaders on security, GDPR compliance, micro-revolutions, IoT, deepening customer relationships, and more.

     

    Driving customer value with consumption-based incentives

    In 2017, Microsoft carried out its largest sales transformation in decades. We redesigned our global enterprise sales organization and invested an additional $250 million in sales incentives to grow partner-led sales and revenue. Under this industry-first co-sell incentive model, partners are paid 10% of the annual contract value for each sale to an enterprise customer. Microsoft's enterprise sales teams draw on long-standing customer relationships and technical know-how to help partners and customers open new markets together. Co-selling also grows the base of long-term active customers, which in turn improves customer value, customer satisfaction, and revenue.

    In last year's co-sell pilot with 500 partners, the partner pipeline generated $6 billion in contracts in just six months, and partner revenue exceeded $1 billion. Co-selling grew average project size roughly sixfold and reduced partners' time to close to a third of what it was. Since the co-sell approach formally launched at Microsoft Inspire in July, the number of participating partners has grown past 9,000 (a 543% increase) and continues to climb.

     

    OSIsoft (in English): "By building a new subscription-based business with Microsoft Go-To-Market Services, we won 12 co-sell deals."

    Barracuda (in English): Most of Barracuda's business, growing at over 300%, comes from co-selling with Microsoft. "Our global partnership with Microsoft has transformed our company. …We were able to build on years of co-selling activity and partner-to-partner (P2P) transactions."

    DataStax (in English): "Leveraging co-sell opportunities with Microsoft, we closed 14 co-sell deals and achieved 140% pipeline growth."

     

     

    A new playbook for building an AI practice

    As Microsoft sellers strengthen their relationships with partners through co-selling, they want to help build and scale best practices for AI and other solutions. The AI market, projected to grow to roughly $60 billion by 2025, is full of untapped opportunity, and many partners are already developing customer solutions in entirely new ways.

     

    Melissa Mulholland's blog post introduces a playbook with guidance on strategy, skill building, marketing, and sales in the AI space, along with the latest research. The playbook (in English) collects partners' varied experiences and best practices. The post highlights how AI expert InterKnowlogy (in English) used its image recognition and sentiment analysis expertise to devise a groundbreaking movie-screening approach for a film studio and open up new business.

     

    "A few years ago, a solution like this would not have been possible. With AI, we can take on entirely new problems. Build times shrank from months to weeks, and costs fell by an order of magnitude. As a Microsoft partner, nothing could make us happier."

    Rodney Guzman, co-founder and CEO, InterKnowlogy

     

    This playbook, the sixth in the series (in English), is a great resource for building an effective, profitable cloud practice. Download it free today. We plan to keep publishing a variety of resources for partners.

    We also look forward to seeing you in the Microsoft Partner Community (in English).

     

     

    NEW: Upgrade to Windows 10 1803 without suspending BitLocker


    One of the new features mentioned in the What’s new in Windows 10 1803 documentation is a new ability to perform a feature update without suspending BitLocker.  This is what it says:

    New command-line switches are also available to control BitLocker:

    Setup.exe /BitLocker AlwaysSuspend 
        – Always suspend bitlocker during upgrade.
    Setup.exe /BitLocker TryKeepActive 
        – Enable upgrade without suspending bitlocker, but if the upgrade does not work, suspend bitlocker and complete the upgrade.
    Setup.exe /BitLocker ForceKeepActive 
        – Enable upgrade without suspending bitlocker, but if upgrade does not work, fail the upgrade.
    

    For more information, see Windows Setup Command-Line Options

    And if you look at the Windows Setup Command-Line Options page, it confirms the same thing: there are new command line options that affect how BitLocker is handled during feature updates.  Let’s dig a little deeper though to understand the requirements.  In order to successfully use this feature, the device needs to meet the following requirements:

    • It needs to be running Windows 10 1709 or higher, and needs to upgrade to Windows 10 1803 or higher.  (So this is a “going forward” change, not one that goes back into the Windows 10 stone ages.)
    • The Windows device needs to be using Secure Boot and have a TPM.
    • BitLocker needs to be using a TPM protector only (yet another good reason to not have a PIN; a quick check follows this list).
    • The user profile folder can’t be on a separate volume that is also BitLocker protected.  (If you are doing something like this, we really need to talk.)
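
    As referenced in the list above, here is a quick way to confirm the TPM-only protector requirement on a device, using the BitLocker PowerShell module (a minimal check, not an exhaustive readiness test):

    # List the key protectors on the OS volume; to keep BitLocker active during
    # the upgrade, you should see only a Tpm protector (no TpmPin, no startup key)
    (Get-BitLockerVolume -MountPoint 'C:').KeyProtector

    # The same information via the legacy command-line tool:
    manage-bde -protectors -get C: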

    You could add a command line option (through a ConfigMgr task sequence variable, MDT script edit, SetupConfig.ini file, etc.) to explicitly make your choice, but if you don’t, what’s the default?  The easiest way to find out (since the documentation doesn’t say) is to try it.  First, let’s look at a device that was updated from Windows Update.  The SETUPACT.LOG clearly shows the default behavior:

    2018-05-01 19:36:53, Info                  SP     Client requested to suspend BitLocker unconditionally

    Now, a second device updated using media (SETUP.EXE /AUTO UPGRADE):

    2018-05-01 19:22:27, Info                  SP     Client requested to suspend BitLocker unconditionally

    OK, the same result, so both mechanisms (servicing and media) act as if /BitLocker AlwaysSuspend was specified; that's the default (at this point at least, it could change in the future).  Interestingly, if you have a device that is enrolled in Insider Preview, you might see a different default as new Insider builds are installed.  I can see that TryKeepActive is the default for my Insider laptop:

    2018-04-19 15:28:08, Info                  MOUPG  SetupManager: No BitLocker command line option specified, will try to keep active but suspend on errors, because this is a WU scenario

    2018-04-19 15:28:08, Info                  MOUPG  ImageDeploy: Initializing BitLocker Mode:        [Keep Active (Best-Effort)]

    2018-04-19 15:28:15, Info                  SP     CNewSystem::PreInitialize: Velocity feature state for BitLocker auto-unlock is enabled
    2018-04-19 15:28:15, Info                  SP     Client requested to keep BitLocker active on a best-effort basis, and the device supports it. Will try to keep BitLocker on, and fall back if needed.

    (That’s the cool thing about Insider Preview, you can be trying out new features that are behind the scenes without really even being aware of them.)

    Alright, so we’ve established that the default is presently /BitLocker AlwaysSuspend for both servicing (WU, WUfB, WSUS, ConfigMgr Windows 10 Servicing) and media-based (SETUP.EXE, ConfigMgr task sequence, MDT task sequence) upgrades, while the default is /BitLocker TryKeepActive with Insider Preview devices (well, at least for mine).  So how do we change it to TryKeepAlive?  It depends:

    Of course only make those edits/changes if you are upgrading exclusively to Windows 10 1803, since earlier versions of SETUP will have no idea what to do with that command line switch and will most definitely fail.  (But hey, we’re all upgrading to 1803 now, right?)
