
[GDPRDemopalooza] Removing PII from Office metadata


Based on the GDPR / DSGVO Demopalooza, here is the demo on removing PII from Office documents.

As always, split into two parts: the "Why" and the "How".

Why

The GDPR is not only about protecting customer data; it is about protecting the personally identifiable information (PII) of "everyone", which includes the PII of employees.

This means you also need to think about protecting PII in Office documents, especially when those documents are available externally.

By default, PII ends up in the metadata of Office documents, for example who (initially) created the document:

Author view in Word

How

This naturally raises the question "How do I get my PII removed from Office documents?", and it is not just IT nerds who ask it, but especially every "normal" user.

To make this as simple as possible, the Office family includes the so-called "Document Inspector".

The Document Inspector can identify and remove exactly this hidden metadata containing PII.

  1. Open any Word, PowerPoint, or Excel document, or create and save a new one
  2. Then click "File"
  3. Next, click "Check for Issues"
  4. And then "Inspect Document"
    Inspect Document
  5. "Document Properties and Personal Information" should already be preselected there; if not, simply select the checkbox
  6. Click "Inspect"
  7. Depending on the document you chose, issues should now have been found at least for the item we are focusing on, "Document Properties and Personal Information"
  8. To resolve these issues, click "Remove All" in that section
  9. Optionally, you can click "Reinspect" afterwards; no further issues should be found
  10. The information about the document's author(s) and contributors has now been removed and stays removed, unless an author is added manually again.
    If any of the authors/contributors had added further metadata such as "Categories", "Subject", etc., that has been removed as well.
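The Document Inspector handles one file at a time. For bulk cleanup, the same removal can be scripted; the following is a minimal sketch (not part of the Demopalooza demo) that drives Word via COM, and it assumes a desktop installation of Word plus a hypothetical folder of .docx files:

[code language="PowerShell"]
# Hypothetical folder; adjust to your environment.
$folder = 'C:\Temp\DocsToClean'

$word = New-Object -ComObject Word.Application
$word.Visible = $false

Get-ChildItem -Path $folder -Filter *.docx | ForEach-Object {
    $doc = $word.Documents.Open($_.FullName)
    # 8 = wdRDIDocumentProperties (document properties and personal information); 99 = wdRDIAll.
    $doc.RemoveDocumentInformation(8)
    $doc.Save()
    $doc.Close()
}

$word.Quit()
[/code]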

By the way: if the document is saved as a PDF, some of the metadata is carried over into the PDF as well, unless the policy "Disable inclusion of document properties in PDF and XPS output" is enabled.

 

 


Information on feature differences between Office Online and the Office desktop applications


Hello, this is Office Support.

In this post, we introduce information about the feature differences between Office Online and the Office desktop applications, based mainly on the publicly available documentation.

The information below is current as of March 27, 2018.

 

Table of contents
1. General
2. Excel-related resources
3. Word-related resources
4. PowerPoint-related resources
5. OneNote-related resources
6. Frequently asked questions


 

1. General

For a feature comparison between Office Online and the Office desktop applications, see the following resource.

Title: Office Online Service Description
URL: https://technet.microsoft.com/ja-jp/library/e04ddc56-d15a-44b7-91cd-1895d6b9ec68


 

2. Excel-related resources

Title: Differences between using a workbook in the browser and in Excel
URL: https://support.office.com/ja-jp/article/f0dc28ed-b85d-4e1d-be6d-5878005db3b6

Title: Excel Online
URL: https://technet.microsoft.com/ja-jp/library/excel-online-service-description.aspx

Title: Excel help center
URL: https://support.office.com/ja-jp/excel


 

3. Word-related resources

Title: Differences between using a document in the browser and in Word
URL: https://support.office.com/ja-jp/article/3e863ce3-e82c-4211-8f97-5b33c36c55f8

Title: Word Online
URL: https://technet.microsoft.com/ja-jp/library/word-online-service-description.aspx

Title: Word help center
URL: https://support.office.com/ja-jp/word


 

4. PowerPoint-related resources

Title: Description of the main features of PowerPoint Online
URL: https://support.office.com/ja-jp/article/a931f0c8-1305-4428-8f7c-9cfa00ef28c5

Title: PowerPoint Online
URL: https://technet.microsoft.com/ja-jp/library/powerpoint-online-service-description.aspx

Title: PowerPoint help center
URL: https://support.office.com/ja-jp/powerpoint


 

5. OneNote-related resources

Title: Differences between using a notebook in the browser and in OneNote
URL: https://support.office.com/ja-jp/article/a3d1fc13-ac74-456b-b391-b633a62aa83f

Title: OneNote Online
URL: https://technet.microsoft.com/ja-jp/library/onenote-online-service-description.aspx

Title: OneNote help center
URL: https://support.office.com/ja-jp/onenote


 

6. Frequently asked questions

Q1: Are there feature differences between Office Online in the Office 365 service and the on-premises product Office Online Server?

A1: Yes, there are.

Currently, the features provided by Office Online in the Office 365 service and those provided by the on-premises Office Online Server both go by names such as Word Online and Excel Online, but the features they offer differ.

For example, if you use Word Online in the Office 365 service, co-authoring includes the ability to show, with a flag, where each editing user is currently working.

With Word Online in the on-premises Office Online Server, on the other hand, co-authoring is available, but the ability to flag where each editing user is currently working is not yet included.

The following resources assume the use of Office Online in the Office 365 service, so if detailed feature differences such as this are unclear, please contact us.

Title: Collaborate on Office documents in OneDrive
URL: https://support.office.com/ja-jp/article/ea3807bc-2b73-406f-a8c9-a493de18258b

Title: Use Office Online to work together in Office 365
URL: https://support.office.com/ja-jp/article/ff709b92-1c61-4c4b-8f8e-e2f65d2e0c1b

Title: Collaborate on Excel workbooks at the same time with co-authoring
URL: https://support.office.com/ja-jp/article/7152aa8b-b791-414c-a3bb-3024e46fb104

Title: Collaborate on Word documents with real-time co-authoring
URL: https://support.office.com/ja-jp/article/7dd3040c-3f30-4fdd-bab0-8586492a1f1d

Title: Work together on PowerPoint presentations
URL: https://support.office.com/ja-jp/article/0c30ee3f-8674-4f0e-97be-89cf2892a34d

 

Q2: What happens in Office Online when you open an Office file that contains features Office Online cannot handle?

A2: Whether you are using Word Online, Excel Online, PowerPoint Online, etc. in the Office 365 service or in the on-premises Office Online Server, opening an Office file that contains unsupported features or content results in one of the following behaviors.

  • If the file cannot be opened in Office Online, a message is shown prompting you to open it in the client Office application.
  • If the file can be opened in Office Online but contains features Office Online cannot handle, it opens in the browser and a message about the unsupported features is shown at the top of the window. For an Excel file, for example, after opening it in Office Online you can start the Excel client from [Edit Workbook] > [Edit in Excel] and continue editing the file there.

 
That is all for this post.

Hyper-V VM Import–Know Before You Go


Importing VMs from one host to another can run into issues due to differences in the configuration of the hosts.  A common issue is that a virtual switch does not exist on the new Hyper-V host.  While it is possible to attempt the import and then deal with the fallout afterwards, wouldn't it be great to know about such issues before actually starting the import?

That is where the Compare-VM cmdlet comes in.  It is able to provide a compatibility report which illustrates issues between the source and destination Hyper-V hosts.

The below is probably one of the most frequent import issues which I run into.  The required virtual network does not exist on the host used for the import.

Hyper-V VM Import - Could Not Find Ethernet Switch

Notes on Compare-VM

Compare-VM can compare the contents of the VM’s configuration file and the host where the VM is to be imported.

Note that the format of this file has changed.  As discussed in What's new in Hyper-V on Windows Server 2016, the file extension .VMCX is now used; previously .XML was used.

This necessitates knowing the correct file extension for the virtual machines in question.  This post is based off Windows 10 and Windows Server 2016, so the new extension is used. 

[code language="PowerShell" light="true"]$report = Compare-VM -Path 'D:\VM1\Virtual Machines\53EAE599-4D3B-4923-B173-6AEA29CB7F42.VMCX'[/code]

The issues can then be reviewed using:

[code language="PowerShell" light="true"]$report.Incompatibilities | Format-Table -AutoSize[/code]

Review Single VM

To review a single VM, the path to the VM can be passed as a pipeline object or directly in the Compare-VM cmdlet. In this case the former was used.

[code language="PowerShell" light="true"]Get-Item -Path '.\*.VMCX' | ForEach {(Compare-VM -Path ($_.FullName)).Incompatibilities}[/code]

Reviewing Hyper-V Import Issues on A Single VM

There is an issue locating a virtual switch called “E15-Cloud”.  That needs to be created on the host.

Now that we know that, we can create the required switch(es) and then successfully import the VM.

Review Multiple VMs

The above command can be easily modified to search through a directory of VMs.  Typically I have a folder which contains all of the VMs for a given lab.  There will be a parent differencing disk to minimise the disk space footprint.  Each VM has its own sub folder which contains the child differencing disk and other VM specific files.  This allows a recursive search from the parent directory to easily identify all of the required VMs.  The -Recurse parameter of Get-ChildItem searches for files of type .VMCX and passes them down the pipeline to the Compare-VM cmdlet, as sketched below.
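As a rough sketch of that pipeline (the '.\2013 HA' folder is just the lab parent folder used later in this post; substitute your own path):

[code language="PowerShell" light="true"]Get-ChildItem -Path '.\2013 HA' -Filter *.VMCX -Recurse | ForEach-Object { (Compare-VM -Path $_.FullName).Incompatibilities } | Format-Table -AutoSize[/code]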

Reviewing Hyper-V Import Issues From Multiple VMs

If you wanted to count the number of VMs returned, simply add the Measure-Object to the end. For example:

[code language="PowerShell" light="true"]Get-ChildItem -Recurse '.\2013 HA\*.vmcx' | Measure-Object[/code]

Reviewing Multiple VMs and Creating New Virtual Switches

The below is aimed at the VMs in most of my labs.  Each VM has a single network interface, and since I frequently need to move VMs between different hosts it can be tedious having to create all of the different Virtual Switches.

The below will loop through the list of VMs which is obtained by the recursive search for the .VMCX configuration files.  It checks whether there is an incompatibility for a missing switch and captures the switch name in a variable called $VMSwitch.  This is then used by the New-VMSwitch cmdlet to create the private virtual switch.  Again, this is sample code which is tailored to my purposes.  Feel free to modify it so that it meets your goals.  It is not intended to be robust and enterprise grade.  For example, it assumes that all VMs have one virtual network adapter attached, that you are running it with the correct credentials, and that the missing network is what caused the incompatibility message.  Feel free to buff it up and make it more robust.

Copy the following code into a .PS1 file to execute it.

[code language="PowerShell"]

# Find all VM configuration files below the lab's parent folder.
$VMs = Get-ChildItem -Path '.\2013 HA\*.VMCX' -Recurse

ForEach ($VM in $VMs) {
    # Compare-VM reports incompatibilities; the message quotes the name of the missing switch.
    $FullMessage = (Compare-VM -Path $VM.FullName).Incompatibilities.Message

    IF ($FullMessage) {
        $VMSwitch = $FullMessage.Split("'")[1]
        Write-Host "Network $VMSwitch not present" -ForegroundColor Magenta

        # Create the private switch only if it does not already exist.
        $Exists = Get-VMSwitch $VMSwitch -ErrorAction SilentlyContinue
        IF ($Exists) { Write-Host "$VMSwitch Exists" }
        ELSE { New-VMSwitch -Name $VMSwitch -SwitchType Private }
    }
}

[/code]

 

Importing Multiple VMs

Now that the required virtual networks have been created, we can import the VMs using something like the below:

[code language="PowerShell"]Get-ChildItem -Path '.\2013 HA\*.VMCX' -Recurse | ForEach { Import-VM -Path $_.FullName }[/code]

 

 

Remove All But Default VMSwitch

This was a line used in testing the post, and was included as it may be useful to some folks.  It will remove all VMSwitches that are NOT called “Default Switch”.

[code language="PowerShell" light="true"]Get-VMSwitch | Where {$_.Name -NE "Default Switch"} | Remove-VMSwitch -Confirm:$False[/code]

 

 

Cheers,

Rhoderick

Training Many Anomaly Detection Models using Azure Batch AI


This post is authored by Said Bleik, Senior Data Scientist at Microsoft.

In the IoT world, it's not uncommon that you'd want to monitor thousands of devices across different sites to ensure normal behavior. Devices can be as small as microcontrollers or as big as aircraft engines and might have sensors attached to them to collect various types of measurements that are of interest. These measurements often carry signals that indicate whether the devices are functioning as expected or not. Sensor data can be used to train predictive models that serve as alarm systems or device monitors that warn when a malfunction or failure is imminent.

In what follows, I will walk you through a simple scalable solution that can handle thousands or even millions of sensors in an IoT setting. I will show how you can train many anomaly detection models (one model for each sensor) in parallel using Azure's Batch AI. I've created a complete training pipeline that includes: a local data simulation app to generate data, an Azure Event Hubs data ingestion service, an Azure Stream Analytics service for real-time processing/aggregation of the readings, an Azure SQL Database to store the processed data, an Azure Batch AI service for training multiple models concurrently, and an Azure Blob container for storing inputs and outputs of the jobs. All these services were created through the Azure Portal, except for Batch AI, which was created using the Azure CLI. All source code is available on GitHub.


Solution architecture

Data Simulation and Ingestion

The data generator I use is a simple C# console app that sends random sensor readings from 3 imaginary devices, where each sensor is identified by a tag. There are 5 tags in total in this example. Each tag represents a measurement of interest, such as temperature, pressure, position, etc. The values are generated randomly within a specific range for each sensor. The app uses the Azure SDK to send JSON messages (see figure below) to Event Hubs, which is the first interface in the pipeline. The C# source file can be found here.


Messages sent to Event Hubs

Data Aggregation

Assuming the sensors send data at a very high frequency, one would want to apply filters or aggregation functions to the incoming streams. This preprocessing step is often necessary when working with time series data and allows extracting better views or features for training predictive models. In this example, I choose to read in the average measurement value of a sensor in tumbling window intervals of 5 seconds. For that, I've created an Azure Stream Analytics (ASA) job that takes in the Event Hub stream as input, applies the aggregation step, and saves the processed messages into a SQL Server database table. The aggregation step can be done in a SQL-like query within ASA as shown below.


Query definition in ASA (Event Hubs input and SQL output ports are also shown to the left)


Processed data in SQL Server

Training

One assumption I make here is that we have access to sufficiently large amounts of data for each sensor and that training one model for each sensor independently makes more sense than having a universal model trained on data from multiple sensors. The model I use is an unsupervised univariate One-class SVM anomaly detection model, which learns a decision function around normal data and can identify anomalous values that are significantly different from past normal sensor measurements. Of course, one could use other models that better fit the task and data, including supervised methods, if explicit labels of anomalies are available. I'm also keeping the model simple, as I don't do any hyperparameter tuning or data processing, besides normalizing the data before training.


One-class SVM (from scikit-learn.org)

The main purpose of this exercise is to show how multiple models can be trained concurrently in a scalable way, but before running parallel training jobs, it's important to create a parametrized training script that can run for a specific sensor (I'm using Python here). In my setup, the parameters for such a script are the sensor identifiers and timestamp range (device, tag, ts_from, ts_to). These are used to query the SQL Server database and retrieve a relevant chunk of training data for a specific sensor. The other set of static parameters are read from a JSON config file. These are common across all sensors and include the SQL query, the SQL Server connection string, and the connection details of the Azure Blob container into which the trained models are saved.

As you have noticed, we ended up with an embarrassingly parallel task to solve. For that part, I've created an Azure Batch AI service that allows submitting parallel training jobs on a cluster of virtual machines on Azure. Batch AI also allows running distributed training jobs for popular toolkits like CNTK and TensorFlow, where one model can be trained across multiple machines. In our scenario, however, I use custom jobs, where each job executes a Python script for a specific sensor (the sensor details are passed as arguments to the script). Batch AI jobs can be created using the Azure CLI or from the Azure portal, but for our case, it's easier to create them through a Python program using the Azure Python SDK, where you can enumerate all sensors by looping through the devices and tags and run a corresponding job for each sensor. If you look at the CLI commands I used to create the cluster, you would notice that the cluster is made up of two Ubuntu Data Science Virtual Machines (DSVMs) and that I specified the Azure Blob storage container to be used as a shared storage location for all nodes in the cluster. I use that as a central location for both inputs (training script and config file) and outputs (serialized trained models) of the Batch AI jobs. The Blob container is mounted on each node in the cluster and can be accessed just like any storage device on those machines.

Finally, after running the program, you can look into the Blob storage container and make sure that the serialized models are there. The models can be loaded and consumed afterwards in batch, using similar parallel jobs, or perhaps individually through some client app, in case prediction latency is not an issue.


Saved models (shown in Azure Storage Explorer)

That was a simple, yet scalable solution for training many models in the cloud. In practice, you can easily create a cluster with many more virtual machines and with greater compute and memory resources to handle more demanding training tasks.

Said

 

Storage Spaces Direct: 10,000 clusters and counting!


It’s been 18 months since we announced general availability of Windows Server 2016, the first release to include Storage Spaces Direct, software-defined storage for the modern hyper-converged datacenter. Today, we’re pleased to share an update on Storage Spaces Direct adoption.

We’ve reached an exciting milestone: there are now over 10,000 clusters worldwide running Storage Spaces Direct! Organizations of all sizes, from small businesses deploying just two nodes, to large enterprises and governments deploying hundreds of nodes, depend on Windows Server and Storage Spaces Direct for their critical applications and infrastructure.

Hyper-Converged Infrastructure is the fastest-growing segment of the on-premises server industry. By consolidating software-defined compute, storage, and networking into one cluster, customers benefit from the latest x86 hardware innovation and achieve cost-effective, high-performance, and easily-scalable virtualization.

We’re deeply humbled by the trust our customers place in Windows Server, and we’re committed to continuing to deliver new features and improve existing ones based on your feedback. Later this year, Windows Server 2019 will add deduplication and compression, support for persistent memory, improved reliability and scalability, an entirely new management experience, and much more for Storage Spaces Direct.

Looking to get started? We recommend these Windows Server Software-Defined offers from our partners. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly.

To our customers and our partners, thank you.

Here’s to the next 10,000!

Note on methodology: the figure cited is the number of currently active clusters reporting anonymized census-level telemetry, excluding internal Microsoft deployments and those that are obviously not production, such as clusters that exist for less than 7 days (e.g. demo environments) or single-node Azure Stack Development Kits. Clusters which cannot or do not report telemetry are also not included.

Windows 10 and File Recovery


There were a lot of features in Windows 10 that were "different" from previous versions; some changes were cosmetic, some were more useful. One of the most useful features I found was something called "File History". For the record, File History isn't exactly a new feature: it was introduced in Windows 8.1. Since many enterprises skipped that release (as they skipped Vista), most people didn't know the feature existed, just as many features credited to Windows 7 and later were actually introduced in Vista.

File History actually goes back a lot further than that. If anyone remembers, Server 2003 had a feature called "Previous Versions", which let users on network shares recover earlier versions of the files they had modified (thanks to the new creation at the time known as the Volume Shadow Copy Service). It also required a client on the workstation. A moment of silence for the history. Done.

Going back to my previous blog on Word troubleshooting, having good backups keeps you from ever needing to try to extract data out of damaged Word files. Believe me, it's a painful process.

File History is a great way to keep a running backup of your documents in a safe place on your local system. It's great for those "I can't believe I did that" moments where a document was changed and you can't remember what you did or you liked "yesterday's copy" better. This isn't quite the same as "Undo", as the snapshot has to occur for File History to work.

Another thing I find File History great for is my scripting and coding work. It almost acts as version control for my files, and if I write something that I need to revert, I can simply restore the old version back in place.

The Hardware and Software

  • Surface Pro
  • MicroSD card for backups
  • Bitlocker

Yes, really it is that simple. Now, it is best practice to use an external drive for backups, although as of the time of this writing, Windows 10 1709 still supports backing up to network drives.

Note that there are little hacks out there where you can create a share on your local system and back up to that, if you have a single drive. I believe this still works, but I'm not really fond of the idea. I also understand that it can be a pain having a bulky USB drive hanging out of your laptop (it could get crunched if you accidentally leave it in when you throw your laptop in a bag in a hurry). It can also be problematic if you have something like the Surface Pro that only offers one USB port. So, if you have a MicroSD slot (or something similar), this is a great use for it.

For safety, I recommend BitLocker on both your system and the external disk, with auto-unlock on boot. This will ensure that no matter what, your data will be protected if it's stolen. Of course, you won't have a backup, but they won't have your data either. Windows will also warn you when you are setting up File History if your drive is not encrypted.

What does it do?

By default, File History takes copies of files in your well-known folders, such as Documents, Pictures, etc. Now, what about a OneDrive account, Google Drive, or some other folder you want to back up, say C:\Temp? These can be added manually later.

The default settings may seem extreme, but they're not so bad. Keep in mind it only keeps copies of changed files, not a full backup every time.

By default, File History takes versions of changed files every hour and keeps those backups until the end of time. This, of course, is configurable. You can take backups as often as every 10 minutes or as seldom as once a day. Usually an hour is good for most people, but you can cut it to half an hour or so if you need to. It will also keep files forever, but you may want to change that to something more reasonable for you; maybe after a year, old copies no longer matter. You can keep files from as little as 1 month up to 2 years if you configure a time limit. If you would rather not run out of space, you can have it delete old copies of files when space is needed.

How do you get into it?
Open the Settings app in Windows 10. From there, open Update & Security and then click Backup. This is where you may be asked to set up a drive for File History if you wish to enable it.

— Easy link to my blog: http://aka.ms/leesteve
If you like my blogs, please share it on social media and/or leave a comment.

Field Learnings: Automate alerts for updates from the Surface Blog for IT Pros


An important practice for success with Surface in your organization is to stay up to date with firmware and Surface news for IT Pros. The Surface Blog for IT Pros is your main source for the latest updates and Surface firmware releases for managing Surface. To help make staying informed easier, in this blog we cover how you can automate receiving an email summary for each new post.

Introducing Microsoft Flow.

Flow is a service that can be used with both personal Microsoft accounts and Office 365 organizational accounts to make repetitive tasks, such as checking for blog updates, easy through workflow automation. Best of all, Microsoft Flow has a free tier; more details on pricing are available here.

Get automatic emails from the Surface IT Pro Blog in 4 clicks

  1. Go to the public templates for Microsoft Flow related to Surface and click Get email update from the Surface IT Pro Blog:
  2. Click Use this template.
  3. Follow the steps to sign up or sign in with your Microsoft or Office 365 account, if you have not already done so.
  4. Click Continue.
  5. Click Create flow.

Create a customized Flow for a specific Surface model in your organization

The first process will configure Flow to send you an email summary for each new blog post. This next section is only if you want to limit the emails to a specific model of Surface. Start with the Surface IT Pro Blog and click a tag from either the Tags section at the bottom of each post or the Popular Tags section on the blog homepage that relates to the posts you want to be notified about. In this example we will use the tag Surface Pro 4:

  1. Now that you have the tag URL, to set up automatic Flow updates begin with steps 1-3 as before. Once you're on that screen, click Edit under When a feed item is published instead:
  2. Replace the URL with the URL from the tag you picked, such as the example:
  3. Important: add the text /feed/ to the end of the URL, as in the example:
  4. Finally click Create flow.

Now you have an automatic workflow that will email you when the Surface Blog for IT Pros makes a post with the tag you selected.

 

This field learnings blog written by Dominic Williamson, Surface Technical Specialist in Australia.

New solar agreement accelerates the program for a cleaner cloud


By: Brad Smith, President and Chief Legal Officer

Two years ago, we set out to increase our use of renewable energy to power the datacenters and operations that support our growing cloud. Our goal was to use 50 percent renewable energy by 2018 and 60 percent by 2020, and to keep growing from there. Today we took another step forward on that journey by signing the largest corporate solar agreement in the United States, a 350 megawatt (MW) solar project in Virginia, Microsoft's largest energy purchase to date.

This announcement brings our total direct purchases of renewable energy to 1.2 gigawatts, enough energy to light 100 million LED bulbs. Combined with the six agreements we have made over the past two years, this signing puts us past our 50 percent target and moves us toward our 60 percent goal well ahead of schedule, even as we continue to grow.

Microsoft's commitment to bringing clean energy to our datacenters, while helping to accelerate the transition to a greener grid, now totals 1.2 gigawatts, with nine direct agreements for solar and wind energy in the United States, Ireland, the Netherlands and Singapore. Click on the image to see the full infographic.

 

This project is our second solar agreement in Virginia and allows our datacenters there to run entirely on solar energy. To give you a sense of the scale, our project will use 750,000 solar panels across 2,000 acres. It is also part of a larger 500 MW project that will double the current solar energy capacity in the Commonwealth, which will bring cleaner energy to our neighbors in the community.

But this project means more than just gigawatts, because our commitment goes beyond transforming our own operations; it is also about enabling others to benefit from and adopt clean energy. Our corporate energy goal is part of a broader objective of making grids greener around the world. To reach that goal, we must find new ways to accelerate the transition to a lower-carbon future. That is why we have structured our energy agreements to do things like offer our generators as backup power for the grid in Wyoming, and pilot integrated energy storage batteries with GE in Ireland.

Our most recent agreement in Virginia similarly takes a new approach. We anchored the project by taking the majority share, using 315 MW of its 500 MW. As the project developer, sPower, noted at the launch, this makes it possible to offer more competitive rates to other buyers for smaller portions of energy, even within this mega-deal. This kind of model could be used to allow buyers of any size or sophistication to participate in the clean energy economy, accelerating the pace of change and reducing carbon emissions at the same time.

Although we have made great progress, we still have a long way to go. As we move toward 2020, we have begun to explore new models and methods of procurement. We will continue to make progress on R&D, and to look for new ways to improve our energy efficiency and allow our datacenters to benefit the grid. And we are more active than ever on the policy side, working in Washington D.C., in state capitals and internationally to help shape policies that give everyone access to fair, competitive market rates and better access to clean energy.


Using the mS-DS-ConsistencyGuid attribute to fix sync issues to Office 365/AAD.


By Cesar Hara, Victor Santana and Caio Cesar

Greetings everyone!

Today we are going to cover a very interesting way to troubleshoot user synchronization issues to Office 365 (Azure Active Directory). The method consists of using the attribute: “mS-DS-ConsistencyGuid”.

We recommend going through this article for a better understanding of what is being discussed in this post.

As many of you know, historically AD Connect used "ObjectGUID" as the sourceAnchor attribute to sync objects to the cloud. Starting with version 1.1.524.0, we can use mS-DS-ConsistencyGuid instead.

What is the big deal about it? Unlike ObjectGUID, this attribute is writable, which means we can manipulate synced objects more easily.

Recently, customers started to raise tickets related to this matter. We were able to fix these incidents without causing service disruption to the end users. Scenarios vary, from "I have a synced multi-forest environment and the user was migrated between forests" to "A user was deleted from Active Directory by mistake while AD Connect was not working and we reinstalled it" to "I lost the only DC in my forest, so I had to rebuild my AD forest completely".
(PLEASE, ALWAYS PROMOTE AT LEAST TWO DOMAIN CONTROLLERS!!)

Pre-requisites: You will need to have access to AAD PowerShell connected to your tenant (with global admin permissions), AD Connect Server and access to the Domain Controller.

Scenario 1:

“A user was deleted from AD by mistake while AD Connect was not working. We created a new account since we cannot recover the old one and then we reinstalled AD Connect, but the changes performed on local Active Directory object are not syncing to the cloud account.”
NOTE: There are other possible scenarios, this is only one of the scenarios we have seen.

Understanding the problem:

• As AD Connect was down, the removal of the user was not picked up by AD Connect.
• As a result, this deletion did not occur in AAD. This means the account is orphaned in AAD.
• The new account has a new on-prem ImmutableID (derived from ObjectGUID). This attribute cannot be changed in the cloud for synced accounts.
To reproduce this behavior, we have uninstalled AD Connect, deleted the user account from Active Directory and re-installed AD Connect.

To confirm that the problem is with the different ImmutableID, you can follow these steps:
• Get the ImmutableID of the affected account in the cloud:

Get-MsolUser -UserPrincipalName john@cesarhara.com | select ImmutableID
ImmutableID: kKfL2wwI+0W+rN0kfeaboA==

• Perform a metaverse search for the new user created in AD (or convert the ObjectGUID taken from AD into base64 format with the GUID2ImmutableID tool, or with the PowerShell sketch below) to confirm the new ImmutableID:

• If you have attribute resiliency, AD Connect will not show any errors. You can go to the Portal or PowerShell to identify the error:
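If you prefer not to use the GUID2ImmutableID tool, the same conversion can be done with the ActiveDirectory module. A minimal sketch, assuming a hypothetical user named john:

[code language="PowerShell" light="true"][System.Convert]::ToBase64String((Get-ADUser -Identity john).ObjectGUID.ToByteArray())[/code]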

In summary, any changes we make to this on-prem object will not be reflected to the cloud.

Steps to resolve:

1- Confirm that the attribute being used in AD Connect as the “SourceAnchor” is the “mS-DS-ConsistencyGuid”, just run the AD Connect Wizard and choose “View Current Configuration”:

2- Make sure that the "UPN" and "ProxyAddresses" attributes are the same in the O365 and AD accounts - you can download the users report in the Admin Portal or use PowerShell to extract this information. These attributes are essential; if you are aware of any other one missing, add it to the account in your local AD accordingly.

3- Grab the ImmutableID value of the cloud/original account. Here is the cmdlet again:

Get-MsolUser -UserPrincipalName john@cesarhara.com | select ImmutableID
ImmutableID: kKfL2wwI+0W+rN0kfeaboA==

4- We have to convert the value of the “ImmutableID” to a HEX format so we can add it into the on-prem account:

([System.Convert]::FromBase64String("kKfL2wwI+0W+rN0kfeaboA==") | ForEach-Object { "{0:X2}" -f $_ }) -join " "

The output is:
90 A7 CB DB 0C 08 FB 45 BE AC DD 24 7D E6 9B A0

5- Now let's get back to our AD Connect server and disable the sync scheduler (it's good practice to do so, to avoid a sync cycle running while we are troubleshooting):

Set-ADSyncScheduler -SyncCycleEnabled $false

6- Move the account to a non-synced OU and run a delta sync (this is required so the match can occur later).

Start-ADSyncSyncCycle -PolicyType Delta

7- In the new account created in AD DS, edit its attributes. Do this by locating mS-DS-ConsistencyGuid and adding the HEX value from step 4 (if there is already a value, AD Connect added the current ObjectGUID value as its default behavior; replace it with the new one). A PowerShell alternative is sketched after these steps:

8- Move the account back to a synced OU and run a delta sync again; if everything ran as expected, the sync will complete successfully.

9- Add or change an attribute in AD to make sure the sync worked as expected. Finally, check in the Portal and the error will be gone:
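As an alternative to hand-editing the attribute in step 7, the same value can be written with the ActiveDirectory module. This is a minimal sketch, assuming the hypothetical user john and the ImmutableID used throughout this post; note that Set-ADUser takes the raw byte array rather than the HEX string:

[code language="PowerShell"]
# Decode the cloud ImmutableID back into its 16 raw bytes.
$bytes = [System.Convert]::FromBase64String("kKfL2wwI+0W+rN0kfeaboA==")

# Write those bytes into the on-premises account's mS-DS-ConsistencyGuid.
Set-ADUser -Identity john -Replace @{'mS-DS-ConsistencyGuid' = $bytes}
[/code]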

 

Conclusion:

With this new feature, it is possible to resolve problems that used to be hard to deal with when using "ObjectGUID". Regardless of the scenario, we must always take the original ImmutableID (already set in the cloud), convert it to a HEX value and add it into "mS-DS-ConsistencyGuid" so the match occurs.
If you are going to migrate accounts between forests, make sure to populate the target forest object's "mS-DS-ConsistencyGuid" with the value of the source object's "ObjectGUID".

The AI Factor


Logbook entry 180327:

55 percent of Germans, according to a recent study, believe they have never had anything to do with artificial intelligence. Only 15 percent are certain that they have benefited from AI services at least once in the past twelve months. Astonishing! One of the biggest revolutions not just in information technology, but in the entire history of technology, is taking place unnoticed by the public.

Yet according to the statistics portal Statista, revenue from AI solutions already reached 2.42 billion dollars last year, almost twice as much as the year before. And if the forecasts underlying Statista's figures are to be believed, this pace of growth will accelerate further. Four billion dollars will be generated in the current year, and by 2025 worldwide revenues will have increased more than tenfold, to just under 60 billion dollars.

One of the reasons this technology is developing so rapidly and yet so unnoticed is probably that AI is primarily offered as a service from the cloud. Contrary to public perception, AI does not appear in the shape of robots, but in friendly apps on the smartphone and in more automation of business processes. For the individual user there are usually few or no costs; they pay with their data or per transaction. The benefit, on the other hand, is enormous.

The costs, but also the profits, lie instead with the platform providers. And here the battle for market share is only just getting started. That is why Microsoft is expanding the Azure platform into a comprehensive AI foundation that spans Cortana, Bing, Office 365, Dynamics 365 and Microsoft 365. That is why CEO Satya Nadella has made market leadership in AI offerings one of the top priorities for the coming months. We are in tough competition with the leading internet providers from the US, Europe and, not least, China. But it is worth it: 50 billion dollars in revenue in 2025 is a pretty big pie.

And it is not the only pie we are reaching for with the Azure platform. The market for big data analytics, or smart data, is already more mature, but it is now additionally benefiting from the AI factor: according to Statista estimates it generated 33 billion dollars in 2017 and will reach 84 billion dollars in 2025. By comparison, the market for autonomous cars guided through traffic with the help of AI is still relatively small, with six billion dollars expected in 2025. The largest share of current AI revenue already goes to automated enterprise solutions: in 2016, companies spent 360 million dollars on the AI factor in ERP systems and financial applications, a quarter of all AI spending!

Microsoft, together with Audi partner EFS (Elektronische Fahrwerksysteme) in Gaimersheim, has just demonstrated how the AI factor can already change the world today. Based on NVIDIA's Tesla P100 GPU running on Azure NC-series VMs, we contributed deep learning functionality and storage systems from the Azure cloud to show that autonomous vehicles can optimize their driving behavior through self-learning. The vehicle's camera images are analyzed and translated into driving commands, which significantly reduces the effort required for digitized road maps.

The example shows that modern, complex AI applications bring together several AI factors: Azure as a high-performance VM platform, fast storage media, image recognition and deep learning. This interplay is becoming more and more typical of the projects we deliver together with partners and customers. That is why we are investing not only in the technology and its availability in new regional datacenters. We are also investing to enable our partners, and through them our customers, to use these AI offerings for new solutions, even if, once again, hardly anyone will notice the AI factor...

 

 

 

 

 

 

 

Security baseline for Windows 10 v1803 “Redstone 4” – DRAFT


Microsoft is pleased to announce the draft release of the security configuration baseline settings for the upcoming Windows 10 version 1803, codenamed “Redstone 4.” Please evaluate this proposed baseline and send us your feedback via blog comments below.

Download the content here: DRAFT-Windows-10-v1803-RS4

The downloadable attachment to this blog post includes importable GPOs, scripts for applying the GPOs to local policy, custom ADMX files for Group Policy settings, and all the recommended settings in spreadsheet form and as a Policy Analyzer file (DRAFT-MSFT-Win10-RS4.PolicyRules).

The differences between this baseline package and that for Windows 10 v1709 (a.k.a., “Fall Creators Update,” “Redstone 3”, RS3) include:

  • Two scripts to apply settings to local policy: one for domain-joined systems and a separate one that removes the prohibitions on remote access for local accounts, which is particularly helpful for non-domain-joined systems, and for remote administration using LAPS-managed accounts.
  • Increased alignment with the Advanced Auditing recommendations in the Windows 10 and Windows Server 2016 security auditing and monitoring reference document (also reflected here).
  • Updated Windows Defender Exploit Guard Exploit Protection settings (separate EP.xml file).
  • New Windows Defender Exploit Guard Attack Surface Reduction (ASR) mitigations.
  • Removed numerous settings that were determined no longer to provide mitigations against contemporary security threats. The GPO differences are listed in a spreadsheet in the package’s Documentation folder.

We’d like feedback regarding an Advanced Auditing setting that we have considered adding to the baseline but haven’t so far. The auditing and monitoring reference, mentioned above, recommends auditing failure events for Filtering Platform Connection. This is somewhat redundant because the Windows client baseline’s firewall configuration logs dropped packets. The Advanced Auditing setting collects richer data, but can add large numbers of events to the Security event log. The reference recommends against auditing successful connections. So, should the baseline:

  • Stay as it is, with firewall logging only and no Advanced Auditing for Filtering Platform Connection?
  • Keep firewall logging as it is, and add Failure auditing for Filtering Platform Connection? (This creates duplication between dropped packet logging and failure audit events.)
  • Keep firewall successful-connection logging only, and replace the recommendation for dropped-packet logging with Failure auditing for Filtering Platform Connection? (An obvious disadvantage is that admins have to look in two places for firewall-related events.)
  • Another alternative?
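If you want to experiment with the trade-off locally before sending feedback, here is a minimal sketch (not part of the baseline package) that enables Failure auditing for the Filtering Platform Connection subcategory alongside dropped-packet logging on the domain firewall profile; run it elevated on a test machine:

[code language="PowerShell"]
# Turn on Failure auditing for the Filtering Platform Connection subcategory.
auditpol.exe /set /subcategory:"Filtering Platform Connection" /failure:enable

# Keep the firewall logging dropped packets on the domain profile for comparison.
Set-NetFirewallProfile -Profile Domain -LogBlocked True
[/code]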

Please let us know in the comments below.

Thanks.

PowerPivot for SharePoint 2016 – Email notifications are not sent for scheduled data refresh failures


You may encounter an issue with email notifications not getting sent for scheduled data refresh failures of PowerPivot workbooks in PowerPivot for SharePoint 2016.

This error is caused by a bug in PowerPivot. The fix will be released in the sppowerpivot16.msi file in the Feature Pack for the upcoming release of SQL Server 2016 SP2.

Support-Info: (CONNECTORS): How to work around the "Replicate Directory Changes" permission when connecting to AD with the ADMA or GalSync MA


PRODUCTS INVOLVED

  • Forefront Identity Manager 2010, R2, R2 SP1
  • Microsoft Identity Manager 2016, SP1

COMPONENTS INVOLVED

  • Active Directory Management Agent
  • GalSync Management Agent

PROBLEM SCENARIO DESCRIPTION

  • Out of the box, the Active Directory Management Agent and/or GalSync Management Agent connects to Active Directory using the DirSync control. In doing so, it requires the "Replicate Directory Changes" permission to communicate with Active Directory. If we do not want to grant "Replicate Directory Changes", how can we access Active Directory?

RESOLUTION

Resolution Steps
      1. Open the Windows Registry on the Synchronization Service machine
      2. Navigate to HKLM\System\CurrentControlSet\Services\FIMSynchronizationService\Parameters
      3. Add a new DWORD value called ADMAUseACLSecurity
      4. Give it a value of 1 (the same change can be scripted, as sketched below)

0 = Use the DirSync control and the Replicate Directory Changes permission
1 = Use Active Directory ACLs for permissions
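A minimal PowerShell sketch of the same registry change, assuming it is run elevated on the Synchronization Service machine (a restart of the synchronization service is likely needed for the change to take effect):

[code language="PowerShell"]
# Create (or overwrite) the ADMAUseACLSecurity DWORD under the synchronization service parameters key.
New-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\FIMSynchronizationService\Parameters' `
    -Name 'ADMAUseACLSecurity' -PropertyType DWord -Value 1 -Force
[/code]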

 

ADDITIONAL INFORMATION

You may run into issues with permissions on the Deleted Objects container. Here are steps to resolve that issue if encountered.

Resolution Steps for Deleted Objects Container
To make this work, we had to explicitly grant the AD MA account list and read permissions to the Deleted Objects container in the domain.  This is done using the dsacls.exe utility to:

1. Change ownership of the Deleted Objects container to the currently logged in user

2. Grant the ADMA account list and read permissions
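A hedged example of those two dsacls.exe steps, using the contoso.com domain and the CONTOSO\ma_ADMA service account shown in the output below (substitute your own container DN and account):

[code language="PowerShell"]
# 1. Take ownership of the Deleted Objects container (run as a domain administrator).
dsacls.exe "CN=Deleted Objects,DC=contoso,DC=com" /takeownership

# 2. Grant the AD MA account List Contents (LC) and Read Property (RP) permissions.
dsacls.exe "CN=Deleted Objects,DC=contoso,DC=com" /G "CONTOSO\ma_ADMA:LCRP"
[/code]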

More information:

Use the dsacls.exe utility to explicitly grant the AD MA account list and read access to the Deleted Objects container in the domain.  Without this permission, we can't guarantee that the user will be able to read from the deleted objects container during delta import.

This utility will need to be run as a domain administrator from an administrative cmd.exe prompt.

https://support.microsoft.com/en-us/help/892806/how-to-let-non-administrators-view-the-active-directory-deleted-objects-container

One of the differences between the domain administrator and a standard user object is that the domain administrator automatically has access to the Deleted Objects container.  This list/read property access that domain administrators have can make the difference between being able to discover the object deletion during a delta import and not.

Please use the dsacls.exe utility to check the current permissions on the deleted objects container.  If the AD MA account doesn’t have list and read properties access, please use the dsacls.exe utility to add these permissions, and re-test.

Default permissions on Deleted Objects container

C:\Users\mimadmin>dsacls.exe "cn=deleted objects,DC=contoso,dc=com" /takeownership

Owner: CONTOSO\Domain Admins

Group: NT AUTHORITY\SYSTEM

Access list:

{This object is protected from inheriting permissions from the parent}

Allow BUILTIN\Administrators  SPECIAL ACCESS

LIST CONTENTS

READ PROPERTY

Allow NT AUTHORITY\SYSTEM     SPECIAL ACCESS

DELETE

READ PERMISSONS

WRITE PERMISSIONS

CHANGE OWNERSHIP

CREATE CHILD

DELETE CHILD

LIST CONTENTS

WRITE SELF

WRITE PROPERTY

READ PROPERTY

 

The command completed successfully

Updated permissions with my AD MA account added

C:\Users\mimadmin>dsacls.exe "cn=deleted objects,DC=contoso,dc=com" /takeownership

Owner: CONTOSO\Domain Admins

Group: NT AUTHORITY\SYSTEM

 

Access list:

{This object is protected from inheriting permissions from the parent}

Allow CONTOSO\ma_ADMA  SPECIAL ACCESS

LIST CONTENTS

READ PROPERTY

Allow BUILTIN\Administrators   SPECIAL ACCESS

LIST CONTENTS

READ PROPERTY

Allow NT AUTHORITY\SYSTEM      SPECIAL ACCESS

DELETE

READ PERMISSONS

WRITE PERMISSIONS

CHANGE OWNERSHIP

CREATE CHILD

DELETE CHILD

LIST CONTENTS

WRITE SELF

WRITE PROPERTY

READ PROPERTY

 

The command completed successfully

ADDITIONAL LINKS / INFORMATION

Oh no!!! That ninja’s going to slice open the sun!!!! (Wiki Life)


No! I'm serious! Someone stop that Wiki Ninja! He's about to slice open the sun:

I know what you're thinking...

You're thinking, "Hey, Ed. You know that Wiki Ninja who's about to slice open the sun?"

And I'm like, "Yeah?" You know, because I can read your thoughts and stuff.

And then you think, "Well, how do you know it's a dude? You can't see the Wiki Ninja very well; that could totally be a Wiki Ninjette!"

And then I'm like (yeah, still communicating telepathically), "Sorry. I mean that someone should stop that Wiki Ninjette before she slices open the sun! I mean, if the sun juices spill out, we're all done for!!!"

I know. I know. You're probably not really thinking that. You're probably thinking, "Why does that other Wiki Ninja/Ninjette over on the right have his/her back turned to the one who's about to slice open the sun? Quick! Turn around, raise your blade, and save the sun!!!!"

Or maybe you're not thinking that. I don't know.

Maybe you're thinking, "So, why can I see the souls of those Wiki Ninjas/Ninjettes? And why are their souls branded with Microsoft colors?"

Maybe that's just what I'm thinking.

Anyway...

I look at images like that... and it makes me ponder. It makes me wonder, who makes these amazing images, and what did we do to deserve the privilege of gazing upon such awesomeness?

Speaking of awesomeness...

Where did you get that boss contact lens!??? It looks like it hurts.

Or maybe it shoots a laser out of it or something.

And what if you got hit by that laser? Would you throw up in red, green, blue, and yellow?

Also...

I think a ninja just fell out of the moon! I was wondering where those ninjas came from!

Anyway, go read this blog if you haven't yet:

 

I can't believe we got 80 new banners! That's crazy!

 

Jump on in. The Wiki is warm! And apparently it's inside the moon. Oh, and the next chance you get, you should slice that sun open! Or just shoot it with your Microsoft laser eye.

I'm back! That's what you get! Mwahahahaha!

- Ninja Ed

PS: Special thanks to Kanwwar Manish for the top image, and to Kia Zhi Tang for the second and third images!

PPS: And in case you're wondering what the inside of the moon looks like (where the Wiki Ninjas run free), Gaurav Aroraa helps you envision it:

Together, AI and nature protect Earth's water systems for the future


By: Josh Henretig, Senior Director of Environmental Sustainability at Microsoft.

Every March 22, the world focuses its attention on water. This year's theme for World Water Day is "Nature for Water," which explores how we can use nature to overcome the water-related challenges of the 21st century. These challenges include everything from scarcity to quality to flooding. Although the root causes of these water challenges vary, two things are clear: environmental degradation makes all of them worse, and nature-based solutions offer incredible potential to solve many of them.

This is not just a hypothetical concept. Today, New Yorkers enjoy a pristine water supply served by nature. But in 1997, New York City faced declining water quality and a costly decision: invest 6 billion dollars in new water treatment and sewage facilities, with 250 million a year in maintenance, or invest in the Catskills watershed at an approximate cost of 1.5 billion dollars. They chose the second option.

Now, more than a billion gallons of water are delivered to and consumed by the people of New York every day, served by natural systems. Soil and tree roots filter the water, microorganisms break down contaminants, plants in streams absorb nitrogen from car emissions and fertilizer runoff, and wetland plants absorb nutrients while trapping sediments and heavy metals. These natural systems capture runoff and clean our water, add efficiency to existing water treatment facilities, and reduce the impact of storms on our communities.

To take better advantage of these natural systems, however, we also need technology. Guaranteeing the safety and quality of the more than one billion gallons of water flowing daily through an extensive network of lakes, reservoirs and aqueducts requires data: data gathered from a combination of sensor networks, sampling stations, and hundreds of thousands of tests performed across the watershed. This enormous monitoring apparatus is a critical part of New York City's drinking water supply, with nature and technology working hand in hand to enable clean water and the health of a city.

This could be replicated around the world. After all, every community on the planet relies on ecosystem services at some level. But right now, much of the world lacks the data and the technology infrastructure to monitor, model and manage these systems more effectively.

This is exactly the kind of gap we are trying to close through AI for Earth. We have begun to see the promise of AI to enable nature-based solutions, and we are working to accelerate a future in which everyone has access to clean, fresh water.

Some of our partners and grantees are among the leaders of this effort. The Chesapeake Conservancy uses the latest high-resolution datasets to support the precision conservation movement. Using advanced flow accumulation algorithms developed by leading scientists, they produce maps that represent concentrated surface flow at the parcel scale. When these drainage maps are combined with high-resolution land cover datasets, they can help identify the areas with the greatest potential to reduce sediment and nutrient loads into adjacent waterways.

Organizations such as WetDATA are developing data platforms that bring relevant water datasets together with analytics and visualization tools for informed policy and business decisions. WetDATA aggregates publicly available water data combined with data visualization tools so stakeholders can understand water supply and demand scenarios, along with interventions for managing water-related risks. They are also developing AI tools to better engage stakeholders on water-related risks and opportunities.

In Bangalore, India, the monsoon does not always bring rains that can be relied upon, so city planners must maximize water distribution from the source and prevent depletion of the water table. Working with the Indian Institute of Science (IISc), we have deployed an Internet of Things (IoT)-based sensor network on the IISc campus to efficiently monitor the flow of water from source to consumption. With these new data-enabled capabilities, the team can receive alerts about water quality incidents at specific locations through a mobile app, while data analytics helps ensure that the available water is pumped efficiently to every building on campus.

In Seattle, Microsoft is working with The Nature Conservancy to develop geospatial and machine learning solutions to address pollution in the stormwater drainage system. The interactive maps are used in numerous ways, including protecting shoreline areas that are home to endangered salmon or that are vulnerable to erosion and flooding, as well as serving as a tool that helps prioritize projects to clean up polluted waterways.

We have also taken steps to integrate our focus on technology with what we know nature can provide. In Silicon Valley, we are building the first technology campus certified for net-zero water consumption, which will rely in part on natural solutions and on rehabilitating the local watershed at Stevens Creek. We also work within our operations and with our partners to address local water challenges. This includes our partnership with Ecolab to develop the Water Risk Monetizer, a modeling tool that allows businesses to factor current and future water-related risks into their decision making. The tool is already helping Microsoft reduce water consumption in our datacenters.

If you have ideas for harnessing AI to transform how we conserve and protect freshwater resources, let us know by applying for one of our AI for Earth grants!


Deep dive into channel enablement [Updated 3/28]


(This article is a translation of Deep dive into channel enablement, posted on the Microsoft Partner Network blog on December 20, 2017. Please refer to the original page for the latest information.)

 

Channel enablement is a much bigger concept than "sales enablement," which aims to improve sales activities. To help improve channel operations, you need to look at several groups, including the marketing team, the sales team, the technical sales team, and the sales operations team, and you have to do so in parallel with ongoing account management. In this article we have gathered channel enablement tips from Scott Owen, Burke Fewel, and Aaron Burkhart of DocuSign (in English) for partners considering a partnership with a cloud provider. We hope it helps you strengthen your teams and optimize your channel.

Enabling the marketing team

Depending on the size of your company, you may not have a dedicated marketing team for channel marketing. At a small ISV, a single member of the marketing team may be wearing several hats. Start with the following items.

 

1. Plan the channel you intend to build: Will the channel sell directly to resellers and ISV partners, or use a two-tier distribution model (selling to resellers and ISV partners through distributors)? The answer determines which marketing materials you need to prepare, and whether your marketing targets partners, customers, or both.

 

2. Build a solid marketing foundation: Craft a core value proposition for both end customers and partners. Marketing content and campaigns are built on this value proposition.

 

3. Decide on the message for each audience: In DocuSign's case, messaging was needed for the following two scenarios.

  1. Messaging for partners: messaging that explains how selling your solution increases the partner's revenue
  2. Content for end users: content that partners use to sell and market your solution to customers

 

4. Measure channel marketing results with a flexible mindset: A marketing strategy includes concrete goals such as lead generation and revenue growth, and measuring the results of marketing activities is important. Keep in mind, however, that gathering these insights becomes harder once you release content, provide campaign materials, or appear on partner webcasts. Here, maintaining the partner relationship and sharing best practices are key.

 

 

Enabling the sales and technical sales teams

This section can be summed up by the formula "sales = product + technical sales + pre-sales + post-sales." The action items for enabling the sales and technical sales teams are as follows.

 

1. Review quotas and goals: When a new solution is introduced, review quotas and goals to motivate your sales reps.

 

2. Review how the sales team is organized: When you partner with a new channel partner, decide whether the whole sales team will cover it or a dedicated group will be set up.

 

3. Optimize your content: Prepare short, scenario-based sales enablement content, including objection-handling scenarios, an up-to-date FAQ document, videos, and mobile-friendly tutorials.

 

 

Enabling the sales operations team

In this process you prepare the sales reps or resellers and plan the sales process in detail from both the ISV's and the reseller's perspective. Because resellers and ISVs need to know how orders are processed, make sure the sales process is documented. Spending enough time on this during preparation saves time in the long run.

If the reseller has never sold third-party services before, you need to confirm that the reseller's internal teams understand processes such as purchasing, billing, and product delivery. Following the plan you created in advance ensures that every process meets the "best practice" standard.

For DocuSign, these changes took a considerable amount of time, but the effort paid off: they built excellent relationships with resellers, optimized for scale, and were able to move deals from start to finish much more smoothly.

 

 

 

Ongoing account management

Many companies want to scale. For an ISV like DocuSign, scaling up made it possible to differentiate in the market and grow revenue. Achieving those goals requires investing time and effort in building relationships with resellers and developing tools for them. Among DocuSign's partners, the best performers were those who used DocuSign products, jointly built detailed and effective business plans, and repeatedly reviewed and improved their processes.

When we started this effort with DocuSign, we never expected to be able to share content this valuable and practical. Channel enablement goes beyond sales enablement: your teams need to understand the fundamentals, prepare, and execute. Review the key points in each section above and energize your channel.

 

 

 

Microsoft partners, please visit the following site:

  • ISV Resource Hub: provides templates and other resources that help with IP development.

 

 

If you are not yet a Microsoft partner, please join the Microsoft Partner Network (in English) and take full advantage of the available resources and tools.

 

 

March 2018 Exchange Security Updates–Have You Updated?


Patch Tuesday this month featured updates to address security issues in Exchange 2010, 2013 and 2016.  Tuesday the 13th heralded the arrival of Update Rollup 20 (RU20) for Exchange Server 2010 Service Pack 3, along with updates for Exchange 2013 and 2016.

Exchange 2010 SP3 RU20 is the latest rollup of customer fixes currently available for Exchange Server 2010.  All updates, both security and product fixes, are delivered via an RU for Exchange 2010.  This means that if you want to install a security fix for Exchange 2010, you must install it via an RU.

Exchange 2013 and 2016 have a different servicing strategy, where security updates can be decoupled from the regular product updates.  Exchange 2013 and 2016 utilise Cumulative Updates (CUs) rather than the Rollup Updates (RU/UR) which were used previously.

For a reference point, Exchange 2013 CU18 and Exchange 2013 CU19 were released in September 2017 and December 2017 respectively.  Exchange 2016 CU7 and Exchange 2016 CU8 were released on the same timeline.

Security updates were released for Exchange 2010, 2013 and 2016.  The released updates are covered in KB 4073392.  In addition, the Microsoft Security Update Guide provides a mechanism to search and filter security updates.  Filtering on the March 2018 Exchange updates in the Microsoft Security Update Guide shows the following:

Security Update Guide - March 2018 Exchange Updates

Drilling into the table shows that updates are available for all supported versions of Exchange.  Exchange 2007 exited extended support in April 2017 and is therefore not listed in the table.

It is worth reviewing the different versions of Exchange to note how the security fixes are delivered and thus how they are to be applied.
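Before choosing the right package, it helps to confirm exactly which build each server is currently running. As a minimal sketch (run from the Exchange Management Shell, adjust as needed), the following lists every server together with its reported version so it can be matched against the servicing paths below:

# List each Exchange server and its reported build.
Get-ExchangeServer | Sort-Object Name | Format-Table Name, Edition, AdminDisplayVersion -AutoSize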

Exchange 2010

Exchange 2010 is serviced by releasing a new Rollup Update (RU).   These security fixes are delivered in Exchange 2010 SP3 RU20.

Download Exchange 2010 SP3 RU20

Please see the installation notes at the bottom of this post.  There are also known issues listed in KB 4073537.
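Note that AdminDisplayVersion does not surface the RU level on Exchange 2010, so a common way to verify the build after installing RU20 is to check the ExSetup.exe file version. A hedged sketch, assuming you run it in PowerShell on the Exchange 2010 server itself (where setup adds the Exchange Bin folder to the path):

# Check the ExSetup.exe file version to confirm the installed rollup build.
Get-Command ExSetup.exe | ForEach-Object { $_.FileVersionInfo }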

Exchange 2013

Separate security updates are available for Exchange 2013 SP1 (CU4), CU18 and CU19.  If you are running one of these CUs, you can download and install the security update from KB 4073392.  In reality, though, CU4 is a very dated release and you really should be on a current build of Exchange.

Exchange 2013 CU19 Security Update

Exchange 2013 CU20 already includes these security fixes.

For all other Exchange 2013 CUs, the security update is not available.  In order to apply the security update, you must first update to a current CU.

Exchange 2016

A separate security update is available for Exchange 2016 CU7 and CU8.  If you are running one of these CUs, then you can download and install the security update from KB 4073392.

Exchange 2016 CU9 Security Update

Exchange 2016 CU9 already includes these security fixes.

For all other Exchange 2016 CUs, the security update is not available.  In order to apply the update, you must first update to a current CU.

Cheers,

Rhoderick

Installing and configuring the Cloud Distribution Point – Part 1 – Certificates


 

In this two-post series we are going to explore the Cloud Distribution Point. The Cloud Distribution Point can store content and make it available to Configuration Manager clients on the local network with internet access, to internet-based clients, and to clients managed through the Cloud Management Gateway.
To use the Cloud Distribution Point you need an active Azure subscription. Like the Cloud Management Gateway, it does not require you to manage the underlying infrastructure, and it is easy to deploy and configure from the System Center Configuration Manager console itself.

To install and configure the Cloud Distribution Point, follow the steps below.

Step 1: Configure the Azure management certificate.
Step 2: Configure the service certificate.
Step 3: Configure the Cloud Distribution Point.
Step 4: Configure the Client Settings.
Step 5: Create the CNAME record in DNS for the Cloud Distribution Point.

Let's start by creating the Azure management certificate with the PowerShell script below. Remember to change the AzureSubs and SuaSenha (your password) values, and then export the certificate with the second command shown below.

# Command to create the self-signed certificate.
$cert = New-SelfSignedCertificate -DnsName AzureSubs.cloudapp.net -CertStoreLocation "Cert:\LocalMachine\My" -KeyLength 2048 -KeySpec "KeyExchange"
$password = ConvertTo-SecureString -String "SuaSenha" -Force -AsPlainText

 

# Command to export the PFX file.
Export-PfxCertificate -Cert $cert -FilePath ".\my-cert-file.pfx" -Password $password

01 Cert_MgmtAzure

Now export the .CER file with the PowerShell command below.

Export-Certificate -Type CERT -Cert $cert -FilePath .\my-cert-file.cer

02 Cert_MgmtAzure

In the Azure portal, go to Subscriptions, select the subscription you will use for the Cloud Distribution Point, click Management Certificates, and then click Upload.

03 Cert_AzureImport

Select the certificate (.cer) you created and click Open.

04 Cert_AzureImport

Now that the certificate has been uploaded, you can close the Azure portal.

05 Cert_AzureImport

On your domain's CA, open Certificate Templates (Run > MMC > Add Snap-ins > Certificate Templates) and select the Web Server template as shown below. Then right-click it and choose Duplicate Template.

07 Cert_Cloud_DP

On the General tab, give the certificate template a name and, if necessary, change the validity period. Go to the Request Handling tab and check the option Allow private key to be exported. On the Security tab, add a security group that the SCCM server is a member of (or create one), granting it the Enroll and Read permissions.
Click Apply and then OK to finish the template.

08 Cert_Cloud_DP 09 Cert_Cloud_DP

10 Cert_Cloud_DP

11 Cert_Cloud_DP

Now let's publish the configured template from the Certification Authority console (Run > certsrv.msc). Right-click Certificate Templates, then choose New and Certificate Template to Issue.

12 Cert_Cloud_DP

Select the template we configured and click OK.

13 Cert_Cloud_DP
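If you prefer to script this publication step, certutil can issue the template as well. A minimal sketch, assuming the duplicated template's internal name is ConfigMgrCloudDP (replace it with the name you actually used):

# Publish the duplicated template on the issuing CA (template name is only an example).
certutil -SetCAtemplates +ConfigMgrCloudDP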

Now, on the SCCM server, open Certificates (Run > certlm.msc), go to Personal, right-click, choose All Tasks, and then Request New Certificate as shown below. Click Next and select Active Directory Enrollment Policy.

15 Cert_Cloud_DP

Select the template we configured and click More Information. On the Subject tab, select the Common name type and enter the name you will give to the Cloud Distribution Point.
One note at this point: before typing the common name, go to the Azure portal, start adding a cloud service, and under DNS name check whether the name you want is available or already in use. If it is not in use, cancel the action and enter the common name NOMEDisponivel.cloudapp.net (that is, the available name).
Click Apply, OK, Next, and Finish.

16 Cert_Cloud_DP

17 Cert_Cloud_DP
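The same enrollment can also be requested from PowerShell instead of the MMC wizard. A minimal sketch, assuming the template is named ConfigMgrCloudDP and the common name verified above is NOMEDisponivel.cloudapp.net (both are placeholders):

# Request the service certificate against the Active Directory enrollment policy (names are placeholders).
Get-Certificate -Template "ConfigMgrCloudDP" -SubjectName "CN=NOMEDisponivel.cloudapp.net" -CertStoreLocation Cert:\LocalMachine\My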

Now that we have the certificate, select it and click Export as shown below.

19 Cert_Cloud_DP

In the window that opens, select the options shown below, set a password (write it down, you will need it later), choose a location to save the certificate, and click Finish.

20 Cert_Cloud_DP 21 Cert_Cloud_DP

22 Cert_Cloud_DP 24 Cert_Cloud_DP
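If you would rather script this export as well, a minimal sketch using the same cmdlets shown earlier (the thumbprint, file name, and password below are placeholders to replace with your own values):

# Placeholders: replace the thumbprint and password with your own values.
$svcCert = Get-Item Cert:\LocalMachine\My\<THUMBPRINT>
$svcPassword = ConvertTo-SecureString -String "SuaSenha" -Force -AsPlainText
Export-PfxCertificate -Cert $svcCert -FilePath .\clouddp-service.pfx -Password $svcPassword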

 

In the next post we will finish the installation of the Cloud Distribution Point.

 


Content created and published by:
Jeovan M Barbosa
Microsoft PFE
Configuration Manager


We can hardly get our arms back down again…


Editorial

By Morten Ovesen

A few weeks ago on DR2 Deadline, Jesper Balslev and Ole Sejer Iversen discussed Balslev's book, "Kritik af den digitale fornuft - i uddannelse" (Critique of Digital Reason - in Education). And although I do not entirely agree that the criticism should be aimed at the purchase of devices or portals, there is something worth taking from the dialogue that Balslev and Iversen unfolded: namely that the professional development of teachers, the rethinking of schools toward 21st-century skills, and the focus on creativity, innovation and co-creation have not been given sufficient priority. In all humility, that is what we at Microsoft are working hard to help you with at schools, educational institutions and in municipalities. That is also why it makes good sense to read on in this blog about which events are planned for the coming months.

We are here to make sure you succeed with what you have bought from us!

We have landed on our feet after a hectic and exciting Danmarks Læringsfestival. We were at stand 13 together with ATEA and the whole Minecraft setup, and although 13 can be an unlucky number for many, that was not the case for us. We talked with a great many of you and had good conversations about Minecraft: Education Edition, licensing and learning opportunities. We always ask ourselves whether it makes sense to be at the Læringsfestival, and once again we have to conclude that it makes a huge amount of sense. So thank you for that! Remember to find the relevant Danish lesson plans and worlds here: https://education.microsoft.com/Story/Lesson?token=76tOV Find more by searching for BLOKBY.

Also check out our latest update, which brings chemistry directly into Minecraft: Education Edition: https://education.minecraft.net/chemistry

I also hope you had the chance to visit the bus at HippoMini, where we had teamed up with them for a great Makerspace setup featuring our really cool Surface Studio, the best PC for FabLab production!

 

Would you like to realize your digital potential?


At Microsoft we want to help you, our customers, succeed with your core business: teaching and learning.

That is why I am pleased to be able to present a number of events here at Microsoft where some of this can be picked up. So here is the list of great events planned for the coming months.

 

 

  • We are continuing our Minecraft training sessions over the coming months; keep an eye on the blog and other social media, where you will be able to sign up for free.
  • We are restarting our OnsdagsSessioner (Wednesday sessions); the first one is on 11 April, and you can sign up right here… Come and get equipped to use Office 365 in teaching, specifically OneNote, Sway, Forms and Teams.
  • On 16 May we are holding an Office 365 roundtable in Viborg, and on 18 May at our own offices in Lyngby. We have invited municipalities to share their experiences with Office 365; Varde and Rudersdal will be there to share their knowledge. We will of course also give you a peek into the engine room and at what is coming in the future. This event is intended for primary schools, with a focus on networking and knowledge sharing. We look forward to seeing you. A registration link will appear here on this page very soon!