Channel: TechNet Blogs

Two Forms features you'll wish you'd discovered sooner – enhancing a form is easier than you think!


Want to make your forms friendlier and livelier? Want to receive more useful feedback? Today we'll show you two practical Forms features that not only increase respondents' willingness to complete your survey, but also make organizing the responses you receive much easier.

Text alone isn't enough: a small trick makes questions livelier

If you feel that a plain-text question might leave respondents unclear on what is being asked, you can choose "Insert media" on any question to make it friendlier and more lively. And when a question requires viewing a video or an image before it can be answered, inserting the media directly means respondents don't need to open a URL in a separate window.

Step 1: Click "Insert media".

Step 2: Choose whether to insert an "Image" or a "Video".

If you choose Image, you then pick the image source.

You can search Bing and insert an image directly, choose one from your OneDrive, or upload a file from your computer.

If you choose Video, paste a YouTube URL to insert it.

 

Arrange your survey flow well, and collecting useful feedback gets easier

Want to ask different questions of different kinds of respondents? Any question with answer options can jump straight to a different next question depending on the option the respondent chooses. All you have to do is "add branching". This makes filling in the survey flow more smoothly, and lets whoever reviews the responses receive more useful feedback.

Step 1: Click the button in the upper-right corner.

Step 2: Click "Branching".

Step 3: Select the question you want to add branching to.

Step 4: For each answer option, set which question the respondent should answer next.

 

Taking the figure above as an example: respondents who were very satisfied or satisfied with the career talk next answer "Which part of this training did you like best?", while those who were very dissatisfied or somewhat dissatisfied, as well as those who felt it was just okay, next answer "How could the career talk be improved?". Asking the respondents who didn't choose "satisfied" directly how things could be improved gives you input for the next time the career talk is held, helping raise satisfaction next time.
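Conceptually, the branching described above is just a mapping from each answer choice to the next question to show. A toy Python sketch (the question and answer strings are ours, not part of any Forms API):

```python
# A toy model of Forms branching: each answer choice maps to the next question.
# Strings are illustrative only; Forms stores this mapping for you in the UI.
branching = {
    "Very satisfied": "Which part of this training did you like best?",
    "Satisfied": "Which part of this training did you like best?",
    "Neutral": "How could the career talk be improved?",
    "Somewhat dissatisfied": "How could the career talk be improved?",
    "Very dissatisfied": "How could the career talk be improved?",
}

def next_question(answer: str) -> str:
    """Return the question a respondent sees next, given their answer."""
    return branching[answer]
```

The point of the sketch: satisfied respondents and unsatisfied respondents follow different paths through the same survey.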

 

Those are the two Forms features that can help you strengthen your forms: inserting media and setting up branching. If you want to give your surveys a fresh twist, or want to collect more valid responses and improve your survey's validity, give them a try!


Cloud Platform Release Announcements for May 21, 2018


Azure DevOps Tool Integrations | Disclosure—Terraform OSS Resource Provider in Azure Resource Manager

Aligned Event: 05/07/2018 — 05/09/2018 — Microsoft Build 2018

We're announcing a private preview of a new Azure Resource Provider, Microsoft.TerraformOSS. The goal of this Resource Provider is to allow customers to manage resources that have a Terraform provider through Azure Resource Manager in the same way as they can manage native Azure resources. Initial Terraform providers that will be supported are Kubernetes, Cloudflare, and Datadog.

To learn more, read the blog post.

Azure security and operations management | Log alerts for Application Insights—GA

We are happy to announce the general availability of log alerts for Application Insights, an extensible Application Performance Management (APM) service for web developers building and managing apps on multiple platforms. Customers use Application Insights to monitor live web applications, detect performance anomalies, diagnose issues, and understand usage patterns. In addition to alerts on application health based on different metrics, you can now also monitor log files by setting up a query to run at specified intervals and trigger alerts based on the result. For example, say you deployed a fix for a specific exception and want to make sure it doesn't surface again. You can now set up an alert to trigger if that exception appears in your Application Insights trace file.
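For instance, a log alert query for the scenario above might look like the following in the Application Insights Analytics (Kusto) language. The exception type and time window are placeholders, not taken from the announcement:

```
exceptions
| where timestamp > ago(5m)
| where type == "System.NullReferenceException"
| count
```

An alert rule would run this query on a schedule and fire when the count exceeds a threshold you choose.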

To learn more, read the full blog post.

SAP HANA certification for M-series VMs and Write Accelerator GA release | Disclosure

Announcing the SAP HANA certification for Azure M-series virtual machines (VMs). Customers can run their production SAP HANA workloads on Azure M-series VMs with this certification. M-series VMs offer up to 128 vCPUs and memory from 1 TB to 4 TB, enabling in-memory database workloads such as SAP HANA and high performance database workloads such as SQL Server.

We also recently announced the general availability of Write Accelerator for M-series VMs, to accelerate database writes such as log transactions to sub-millisecond performance. Write Accelerator is required for production SAP HANA workloads on M-series.

To learn more, read the full blog post.

Azure Cosmos DB | Async Cosmos DB Java SDK – public preview

New Async Java SDK for Azure Cosmos DB in preview

The new asynchronous Java SDK for the Azure Cosmos DB SQL API is now in preview and open sourced on GitHub. This SDK uses the popular RxJava library to add a new asynchronous API surface area for composing event-based programs with observable sequences.

To learn more, read the full blog post.

Azure SQL Database | Vulnerability Assessment now available

Vulnerability Assessment for Azure SQL Database now generally available

Now generally available, Vulnerability Assessment for Azure SQL provides a one-stop shop to discover, track, and remediate potential database vulnerabilities. It gives visibility into your security state and includes actionable steps to investigate, manage, and resolve security issues and enhance your database fortifications. The feature is also part of a new security package for Azure SQL Database, SQL Advanced Threat Protection, which provides a single go-to location for advanced SQL security capabilities, including Threat Detection, Vulnerability Assessment, and Information Protection.

Azure SQL Data Warehouse | Query performance improvement

SQL Data Warehouse replicated tables now generally available for boosting query performance

Azure SQL Data Warehouse now delivers the ability to replicate your data across the instance for faster query response times and increased data transformation throughput. Replicated tables allow a schema design in which dimension data is available on all compute nodes, reducing the need to move data at runtime, so queries run faster.
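As a sketch, marking a dimension table as replicated is a table-level option in SQL Data Warehouse DDL. The table and column names here are illustrative, not from the announcement:

```sql
CREATE TABLE dbo.DimProduct
(
    ProductKey  INT          NOT NULL,
    ProductName NVARCHAR(50) NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,       -- full copy of the table on every compute node
    CLUSTERED COLUMNSTORE INDEX
);
```

Replicated distribution suits small dimension tables that many queries join against; large fact tables still use hash or round-robin distribution.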

Find out more.

Power BI Embedded | May feature roundup

Power BI Embedded feature roundup—new features now GA

The following new features are now available in Power BI Embedded:

Custom report Tooltips and Q&A Explorer

New Azure Resource Metrics

In a recent blog post we announced the integration of Azure Monitor metrics for Power BI Embedded resources. This month, we're releasing three new metrics that, together with the existing metrics, give a good view of the status and performance of your resource:

  • Memory—Shows the usage of memory in your resource. The units are in gigabytes (GB) so you can track exactly how much of your RAM is being used every minute.
  • Memory thrashing—Shows the percentage of memory being thrashed out of your resource's RAM, relative to the total RAM in the resource. Note that this metric is relevant only for Import mode datasets, since they host data in memory. Datasets using DirectQuery or a Live connection to Analysis Services are not monitored by this metric.
  • QPU high utilization—Tracks the load on the query processing units on your resource. Every minute, the metric checks if your resource’s QPU usage has exceeded 80 percent utilization of the resource so you can check for a potential case of performance degradation.

Setting up Alerts on your Azure resource

Power BI Embedded in Azure, which allows you to use monitoring metrics to track your resource's performance and load, has been integrated with Azure Alerts. Alerts offer a method of monitoring in Azure that lets you configure conditions over data and be notified when the conditions match the latest monitoring data. When you have an active Power BI Embedded resource, you can use the metrics that monitor your resource and define rules over those metrics.

Read more at our Power BI developer blog.

Enterprise Deployment Tips for Azure Data Science Virtual Machine (DSVM)


This post is authored by Gopi Kumar, Principal Program Manager at Microsoft.

The Data Science Virtual Machine (DSVM), a popular VM image on the Azure marketplace, is a purpose-built cloud-based environment with a host of preconfigured data and AI tools. It enables data scientists and AI developers to iterate on developing high quality predictive models and deep learning architectures and helps them become much more productive when developing their AI applications. DSVM has been offered for over two years now and, during that time, it has seen a wide range of users, from small startups to enterprises with large data science teams who use DSVM as their core cloud development and experimentation environment for building production applications and models.


Deploying AI infrastructure at scale can be quite challenging for large enterprise teams. However, Azure provides several services supporting enterprise IT needs such as security, scaling, reliability, availability, performance, and collaboration. The Data Science VM can readily leverage these Azure services to support the deployment of large-scale, team-based Data Science and AI environments in the enterprise. We have assembled guidance for an initial list of common enterprise scenarios in a new DSVM documentation section dedicated to enterprise deployment guidance, and we will continue to iterate and add more scenarios over time. In this blog post, we summarize a few scenarios and refer you to the documentation and scripts in the DataScienceVM GitHub repository to help guide your enterprise deployments.

Scaling Your AI Environment with Data Science VM Pools

For large teams, a shared pool of Data Science VMs can be an effective way to manage AI infrastructure. The benefits of a shared pool are better resource utilization and availability, more potential for sharing and collaboration across the team, and lower cost, since IT can manage the DSVM resources more effectively. We provide guidance for creating a pool for batch-oriented workloads and one for interactive workloads.

DSVM batch pools can leverage the Azure Batch AI service or Azure Batch. A less well known feature of the DSVM is that its virtual machine images can be used with the Azure Batch AI and Azure Batch services. This lets the environment you use while interactively developing your AI model and application also serve in a batch processing environment where you periodically retrain your models. Batch and Batch AI provide horizontal scaling.

A pool of interactive DSVMs can be created using the Azure virtual machine scale set (VMSS) feature, giving you a farm of DSVM instances behind a single RDP, SSH, or Jupyter endpoint. The VMSS takes care of routing users to the appropriate DSVM instance in the pool. It is also common to mount a shared disk (such as Azure Files) on each node of the pool so that you have access to your data irrespective of the node you are working on. The documentation on DSVM pools provides more details, and we also provide Azure Resource Manager (ARM) templates and scripts to create a VM scale set with DSVM nodes on our GitHub repository.

Common Identity Using Active Directory Integration

By default, on Azure VMs, including the Data Science VM (DSVM), local user accounts are created while provisioning the VM, and users authenticate to the VM with these credentials. If you have multiple VMs to access, managing credentials can get cumbersome. Common user accounts, managed through a standards-based identity provider, let you use a single set of credentials to access multiple resources on Azure, including multiple DSVMs, databases, and data lakes. Active Directory (AD) is a popular identity provider, supported both on Azure as a service and on premises. You can use AD or Azure AD to authenticate users on a standalone DSVM or on a cluster of DSVMs in an Azure virtual machine scale set by joining the DSVM instances to an AD domain. If you already have an Active Directory to manage identities, you can use it as your common identity provider. If you do not, you can run a managed AD on Azure through a service called Azure Active Directory Domain Services (Azure AD DS). Details can be found in our article on Active Directory integration. With Active Directory integration, you can log in (RDP or SSH) to the DSVM or authenticate to applications such as JupyterHub using a common set of credentials.

Can You Keep Your Secrets Safely?

Yes, you can! A common challenge when building enterprise applications is managing credentials that may be needed in your code for authenticating to external data storage services, web services, and cloud-based services. Keeping these credentials secure is an important task: they should never appear on your developer workstations or be stored with source code or config files. Azure offers a set of nifty services – Managed Service Identity (MSI) and Azure Key Vault – that can be used to secure credentials to external data sources and cloud services. Managed Service Identity lets you give Azure services (including VM instances and VM scale sets) an automatically managed identity in Azure AD. You can use this identity to authenticate to any service that supports Azure AD authentication without having any credentials in your code. One common pattern to secure credentials is to use MSI in combination with Azure Key Vault, a managed Azure service that stores secrets and cryptographic keys securely. You can access Key Vault using the managed service identity and retrieve the authorized secrets and cryptographic keys from it. While the documentation on MSI and Key Vault is pretty comprehensive, we have provided a cheat sheet for some common AI development scenarios in the enterprise guidance for DSVM documentation, covering the basics of using these services.

These are just a few considerations when deploying DSVM in large enterprise configurations. Some other aspects you may need to consider include monitoring, management, role-based access control, policy setting and enforcement, anti-malware, and disk encryption, to name a few. As mentioned earlier, the DSVM gives you full control to leverage all these services within the Azure compute infrastructure while architecting your AI solution. The Azure architecture center is also a great resource, providing detailed end-to-end architectures and patterns for building and managing your cloud-based analytics infrastructure.

We would love to hear feedback on the new enterprise guidance documentation articles for DSVM.

Gopi
@zenlytix

(RDS) Tip of the Day: Azure DNS Private Zones in Public Preview


Today's tip...

Azure announces the Public Preview of Azure DNS Private Zones, a key feature addition to Azure DNS. This capability provides a reliable, secure DNS service to manage and resolve names in a VNet, without the need for you to create and manage a custom DNS solution.

This feature allows you to use your own company domain rather than the Azure-provided names available today, and provides name resolution for VMs within a VNet and across VNets. Additionally, you can configure zone names with a split-horizon view, allowing a private and a public DNS zone to share the same name.

Zone and record management is done using the Azure REST APIs, SDKs, PowerShell and CLI.

The feature has been available for a few months now in Managed Preview. Some of the previous limitations with the Managed Preview have been lifted, notably the region-wide availability. With this release the feature is now available in all Public Azure regions.


The property table (aka ShapeSheet) – a key feature of Microsoft Visio


Overview

When we try to describe some object to another person in real life, we mention its color, material, dimensions, and so on.

Describing computer-graphics objects requires a fuller description. In Microsoft Visio, the ShapeSheet property tables contain a standardized set of sections and properties for describing object parameters, including non-graphical ones. Each object type has its own set of sections and parameters.

Property tables are used to describe the parameters of shapes (the graphical objects themselves) and of the pages a document consists of.

The document also has its own property table, containing a section that holds the document's display parameters.

Where to find it

The commands that open the property table are available only when the Developer tab is enabled!

In one of the previous blog posts you can find a video describing how to enable this tab and what advantages doing so brings.

Interface

The property table is not localized into other languages: its section and property names are in English only!

The Design ribbon tab used to work with the property table is localized.

My English is at secondary-school level, and I had no serious trouble understanding the names of sections and properties.

Advantages

In other applications, object descriptions are often stored in a kind of database that cannot be accessed from the application's own interface. The data is also often stored in a form where each parameter is identified by a numeric ID rather than by an explicit, human-readable English name!

The most interesting part is how Microsoft Visio implements this kind of object parameter storage:

  1. Values in the property table's cells update dynamically. If the user moves a shape by dragging it with the mouse or via the Size & Position window, the corresponding cells of the property table update accordingly.
  2. Changing a value in a table cell changes the corresponding shape parameter.
  3. You can use parametric formulas and functions similar to those in Microsoft Excel. For example, a user can write a formula in the cell holding the shape's width stating that it must be twice the shape's height; from then on, enlarging the shape's height scales its width automatically!
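As a sketch of item 3: in the English-language ShapeSheet, you would enter a formula like the following into the shape's Width cell, making the width track twice the height (cell names follow Visio's ShapeSheet naming):

```
Width = Height*2
```

Once entered, Visio re-evaluates the formula whenever Height changes, just as Excel recalculates a dependent cell.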

Parametric formulas make it possible to build interactive shapes and documents whose appearance and content change when a single cell of the property table changes!

Video: What is the ShapeSheet

I also recommend watching this video, which briefly demonstrates additional ways of using the property table.

The digital workplace: raising employee efficiency and engagement

This article was contributed by Ай-Ти-Про.

What the Digital Workplace concept is, and which tools to choose to implement it.

How do small and midsize businesses differ from large corporations? You could name many factors, but let's focus on differences in how their people work.

Clearly, in small and midsize companies employees (rank-and-file and managers alike) have to handle a large number of varied business tasks, communicate actively with colleagues and counterparties, and work with large volumes of information. And the "new economic situation" only adds to this load.

That makes tasks related to automating collaboration, supporting business processes, managing project work, and managing corporate content (information) critically important for the business.

Business information systems: variety upon variety

Specialized information systems (IS) have long existed for every specific task (BPMS, ECM, PM, etc.). However, even a midsize business cannot always afford to deploy them. So in practice some tasks are handled with open-source software, some with freemium cloud applications (sometimes without the IT department's knowledge), and some run on add-ons written on top of accounting systems (typically 1C).

Types of IS

BPMS ECM PM EDMS CRM
BI HRM LMS SCM ITSM

 

As a result, deploying information systems can reduce staff efficiency instead of raising it. When every department uses its own application to automate its business tasks, and the admin clutches their head at having to administer and support this whole variety, there is no real collaboration or effective interaction to speak of. Add the lack of a unified user interface and the difficulty of adoption, and it becomes clear why employees try their hardest to avoid using corporate systems.

In this situation the main burden falls on email. And for a while this kind of integration works: right up until the number of notifications reaches critical mass and users stop reacting to them.

The digital workplace: a user-centered concept

Solving this problem required a complete change in the approach to automation. Where the main emphasis used to be on automating business processes, with user actions rigidly constrained by them, the focus has now shifted to the user and the tools that organize their work effectively. Thus arose the Digital Workplace concept, in which the employee is, above all, a consumer of the content and services they need, "at the right time in the right place".

Here is what Gartner says on the subject: "The digital workplace enables new, more effective ways of working; raises employee engagement and agility; and exploits consumer-oriented styles and technologies."

The digital workplace concept is based on the following ideas:

  • A unified company communication environment: integration of all types of communications, from email and chat to audio and video calls and corporate blogs, both personal and group.
  • A corporate information space: a single point of access to all types of corporate data (documents, directories, knowledge bases, graphics, and video), with enterprise search as the main tool.
  • Integration of corporate business applications and services: a single point of access to business services, aggregation of alerts and notifications, centralized collection of analytics, and visualization of reports and dashboards.
  • Mobility: no dependence on a dedicated workstation, access over the internet, and support for mobile platforms (smartphones and tablets).
  • Security and manageability: multi-level user access control, multi-factor authentication, and the ability to monitor user actions.
  • UX-driven design: design based on analysis of the user's behavior, goals, and values, integrated with the goals and values of the company.

Microsoft Teams and Office 365

The ideas of the digital workplace were built into the Microsoft Office 365 ecosystem from the start, and with the arrival of Microsoft Teams they took on a definite shape.

A digital workplace on the Microsoft Teams platform largely meets the requirements described above, and also offers several pleasant bonuses for users and administrators alike:

  • Easy integration with Microsoft applications and services, including on-premises services such as AD.
  • Support for popular office file formats (Microsoft Office, PDF).
  • Affordability for small and midsize businesses thanks to flexible licensing and attractive pricing.

We will cover the architecture and capabilities, as well as specific business scenarios implemented on the Teams platform, in more detail in upcoming articles and on the Ай-Ти-Про website.

We would be happy to run a free demonstration for you. Request a demo

Hyper-V Integration Services – Where Are We Today?


Hyper-V Integration Services provide critical functionality to Guests (virtual machines) running on Microsoft's virtualization platform (Hyper-V). For the most part, virtual machines run in an isolated environment on the Hyper-V host. However, there is a high-speed communications channel between the Guest and the Host that allows the Guest to take advantage of Host-side services. If you have been working with Hyper-V since its initial release, you may recognize this architecture diagram –


As seen in the diagram, the Virtualization Service Client (VSC) running in a Guest communicates with the Virtualization Service Provider (VSP) running in the Host over a communications channel called the Virtual Machine BUS (VMBUS). The Integration Services available to virtual machines today are shown here:


Integration Services are enabled in the Virtual Machine settings in Hyper-V Manager or by using the PowerShell cmdlet Enable-VMIntegrationService. These correspond to services running both in the virtual machine (VSC) itself and in the Host (VSP).

To ensure the communication flow between the Guest and the Host is as efficient as possible, Integration Services may need to be periodically updated. It has always been a Microsoft 'best practice' to keep Integration Services updated to ensure the functionality in the Guest is matched with that in the Host. There are several ways to accomplish this, including custom scripting, using System Center Configuration Manager (SCCM), using System Center Virtual Machine Manager (SCVMM), and mounting the vmguest.iso file on the Host in the virtual DVD drive in the Guest (Windows Guests only).


Linux Guests use a separate LIS (Linux Integration Services) package. After installing the latest package, you can verify the version for the communications channel (VMBUS):


You can also list out the Integration Services and other devices connecting over the communications channel:


Note: The versioning shown here for LIS is the result of installing LIS v4.2 in a CentOS 7 virtual machine.

More detailed information related to the capabilities of Linux Integration Services can be found here.

With the release of Windows Server 2016, updating Integration Services in Windows Guests has changed and is now primarily done by way of Windows Update (WU) unless otherwise stated here. Up until very recently, this process had not been working, and even now it has not been fully implemented for all Windows Guest operating systems. To date (as of the writing of this blog), the Integration Components for Guests running Windows Server 2012 R2 and Windows Server 2008 R2 SP1 are updated using Windows Update. The latest versions of Integration Components for the down-level Server SKUs, as well as their corresponding Windows Client SKUs, are shown here:


Note: Testing was conducted by deploying virtual machines, in Windows Server 2016 Hyper-V, using ISO media downloaded from a Visual Studio subscription. Each virtual machine was then stepped through the updating process using only Windows Update until it was fully patched. The latest Integration Services for Windows Server 2012 R2 and Windows Server 2008 R2 SP1 are included in KB 4072650.

Integration Services versioning (Windows) information can be obtained using a variety of scripting methods, but a quick way to do it from inside the virtual machine itself is to run one of these commands in PowerShell –


Revisiting the method for updating Integration Services on earlier versions of Hyper-V by mounting the vmguest.iso file from the Host in the virtual machines' DVD drive, if you open any of the *.xml files in the package, you can ascertain version information -


As of this writing, versioning information is older in a vmguest.iso file than what is registered in virtual machines updated by KB 4072650. This seems to indicate the vmguest.iso file on the Host (prior to Windows Server 2016/Windows 10) is no longer being updated. Instead, virtual machines are updating their Integration Services using Windows Update. Even if you run setup.exe in the ISO package, the result is an output of the version registered in the Guest.


Thanks for your attention and I hope this information was useful to you.

Charles Timon, Jr.
Senior Premier Field Engineer
Microsoft Corporation

New course released: "Deploying 1C:Enterprise on Microsoft Azure"


Dear friends, we invite you to watch a ready-made video course on deploying 1C:Enterprise on the Microsoft Azure platform.

The course covers hosting 1C applications on the Microsoft Azure cloud platform. Using practical examples, it demonstrates the full cycle of deploying 1C in the cloud: from selecting and configuring a virtual machine in the cloud to installing and configuring the 1C application itself, with both a file-based database and SQL. Various additional settings are also demonstrated.

The course is intended both for technical specialists who already have experience designing, programming, deploying, automating, and monitoring solutions on Microsoft Azure, and for those who have not yet worked with the platform but want to gain practical skills and master the technology hands-on. The course does not require deep knowledge of 1C.

The course consists of 4 video sessions. To watch them, fill in the registration forms:

  1. Deploying 1C:Enterprise on Microsoft Azure. Session 1
  2. Deploying 1C:Enterprise on Microsoft Azure. Session 2
  3. Deploying 1C:Enterprise on Microsoft Azure. Session 3
  4. Deploying 1C:Enterprise on Microsoft Azure. Session 4

The course is free. Share the links with your colleagues!


Using Query APIs to Unlock the Power of Azure Time Series Insights Part 2: The Aggregates APIs

$
0
0

By Basim Majeed, Cloud Solution Architect at Microsoft

This is the second part of the blog series which aims to clarify the query APIs of Azure Time Series Insights (TSI) by the use of examples. It is recommended that you visit the first part of this series in order to learn more about how to set up the TSI environment and how to use Postman to replicate and modify the examples we show here.

In the example we are using here, the message sent by the simulated device is of the following JSON format:

{"tag":"Pres_22","telemetry":"Pressure","value":12.02848,"factory":"Factory_2","station":"Station_2"}

We have two types of telemetry: Temperature and Pressure. The data represents a simple industrial environment with 2 factories, 2 stations per factory, and two sensors per station measuring Temperature and Pressure. For simplicity, all the data is channelled through one IoT device (iotDevice_1) connected to the IoT Hub, though you might want many devices connected directly to the Hub.
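For reference, the device message above is ordinary JSON and can be parsed with any JSON library; a minimal Python sketch using the sample message from the text:

```python
import json

# The sample message sent by the simulated device (copied from the text above).
raw = ('{"tag":"Pres_22","telemetry":"Pressure","value":12.02848,'
       '"factory":"Factory_2","station":"Station_2"}')

event = json.loads(raw)

# "value" is numeric, so TSI can aggregate it as a Double; the other
# properties are strings and can serve as grouping dimensions.
assert isinstance(event["value"], float)
print(event["tag"], event["telemetry"], event["value"])
```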

In this blog we focus our attention on the Aggregates API, which provides the capability to group events by a given dimension and to measure the values of other properties using aggregate expressions, which apply to the property types “Double” and “DateTime”. For the properties of “Double” we can use “min”, “max”, “avg” and “sum” expressions, while for “DateTime” we can only use the “min” and “max” expressions. The dimensions that we can use to group the events are "uniqueValues", "dateHistogram" and "numericHistogram".
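The allowed combinations of property type and aggregate expression described above can be captured in a small lookup table; the helper below is our own, not part of the TSI API:

```python
# Aggregate expressions allowed per property type, per the rules stated above:
# "Double" supports min/max/avg/sum, "DateTime" supports only min/max.
ALLOWED_MEASURES = {
    "Double":   {"min", "max", "avg", "sum"},
    "DateTime": {"min", "max"},
}

def is_valid_measure(property_type: str, measure: str) -> bool:
    """Return True if the aggregate expression is valid for the property type."""
    return measure in ALLOWED_MEASURES.get(property_type, set())
```

A quick way to validate a query body before sending it, rather than waiting for the service to reject it.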

 

Building the Aggregate query

You start an aggregate query by defining a “searchSpan” to determine the time period over which the data is collected. Then you need to build the “aggregates” clause of the query, within which you need to define the “uniqueValues” dimension that determines the grouping of data. For example, if you group by a “sensorID” then all the following calculations will be done for each unique value of “sensorID”. To limit the number of unique values returned you need to use a “take” or “sample” clause, as will be shown in the examples.

Next, still within the “aggregates” clause, you need to decide on the property you want to return an aggregate measure for. As an example, you could choose to calculate the minimum value of a property over the full “searchSpan”.

All the requests in the following examples are directed to the “aggregates” resource path as explained in part 1 of this blog series:

POST https://<environmentFqdn>/aggregates?api-version=<apiVersion>
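A request body of the shape used in the examples below can be assembled programmatically before POSTing it; a Python sketch (the helper name and the date values are ours, standing in for the {{startdate}}/{{enddate}} variables):

```python
def build_aggregate_query(start, end, group_property, take, measure_property, measures):
    """Assemble a TSI Aggregates API request body as a dict ready for json.dumps."""
    return {
        "searchSpan": {
            "from": {"dateTime": start},
            "to": {"dateTime": end},
        },
        "aggregates": [
            {
                "dimension": {
                    "uniqueValues": {
                        "input": {"property": group_property, "type": "String"},
                        "take": take,
                    }
                },
                "measures": [
                    # One clause per aggregate expression, e.g. {"avg": {...}}.
                    {m: {"input": {"property": measure_property, "type": "Double"}}}
                    for m in measures
                ],
            }
        ],
    }

body = build_aggregate_query(
    "2018-05-01T00:00:00Z", "2018-05-02T00:00:00Z",
    group_property="tag", take=10,
    measure_property="value", measures=["avg", "min", "max"],
)
```

The resulting dict matches the JSON request body shown in Example 1.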

 

Example 1: Return the minimum, maximum and average sensor readings

In this example we return the average, minimum and maximum readings for each unique sensor tag value over the time period defined by the "searchSpan" clause. We limit the number of sensors the aggregate calculation applies to with the "take" clause, here set to 10 sensors. Since we only have 8 sensor tags in the data sample, all the tags are returned in the response. If, for example, the "take" clause specified a value of 4, only 4 sensor tags would be used for the aggregation. Note that "take" returns property values in no particular order. The response is shown in Figure 1.

 

Request Body

{
    "searchSpan": {
        "from": { "dateTime":"{{startdate}}" },
        "to": { "dateTime":"{{enddate}}" }
    },
    "aggregates": [
        {
            "dimension": {
                "uniqueValues": {
                    "input": { "property": "tag", "type": "String" },
                    "take": 10
                }
            },
            "measures": [
                {
                    "avg": {
                        "input": { "property": "value", "type": "Double" }
                    }
                },
                {
                    "min": {
                        "input": { "property": "value", "type": "Double" }
                    }
                },
                {
                    "max": {
                        "input": { "property": "value", "type": "Double" }
                    }
                }
            ]
        }
    ]
}

 

Response Body

Figure 1: The average, minimum and maximum values per sensor tag
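For programmatic access, a request like Example 1 can be assembled and posted from code. The following is a minimal Python sketch using only the standard library; the environment FQDN, API version, and bearer token are placeholders you would supply yourself, and the helper names are ours, not part of the API:

```python
import json
import urllib.request

def build_unique_values_query(start, end, prop="tag", take=10):
    """Build the Example 1 payload: avg/min/max of 'value' grouped by
    the unique values of the given property."""
    measures = [{agg: {"input": {"property": "value", "type": "Double"}}}
                for agg in ("avg", "min", "max")]
    return {
        "searchSpan": {"from": {"dateTime": start}, "to": {"dateTime": end}},
        "aggregates": [{
            "dimension": {
                "uniqueValues": {
                    "input": {"property": prop, "type": "String"},
                    "take": take,
                },
            },
            "measures": measures,
        }],
    }

def post_aggregates(environment_fqdn, api_version, token, query):
    """POST the query to https://<environmentFqdn>/aggregates?api-version=<apiVersion>."""
    req = urllib.request.Request(
        "https://{}/aggregates?api-version={}".format(environment_fqdn, api_version),
        data=json.dumps(query).encode("utf-8"),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Building the payload from a helper like this makes it easy to vary the grouped property or the “take” limit without hand-editing JSON.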

 

Example 2: Using “predicate” to restrict the dimensions based on a condition

If we want to be more specific about the “dimension”, we can insert a “predicate” clause before the “aggregates” clause. We want to limit the sensors selected for the aggregation dimensions to only those that provide the temperature telemetry. This is done using the equality expression “eq”.  Note also that we are only choosing to return the results for 2 sensors as specified by the “take” clause. The response is shown in Figure 2.
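Since the predicate is just nested JSON, it is also easy to generate from code. A small sketch (the helper name is ours, not part of the API):

```python
def eq_predicate(prop, value, prop_type="String"):
    """Build an equality ("eq") predicate clause for an aggregates query."""
    return {
        "eq": {
            "left": {"property": prop, "type": prop_type},
            "right": value,
        }
    }
```

Setting `query["predicate"] = eq_predicate("telemetry", "Temperature")` reproduces the predicate clause used in this example.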

POST https://<environmentFqdn>/aggregates?api-version=<apiVersion>

 

Request Body

{
    "searchSpan": {
        "from": { "dateTime":"{{startdate}}" },
        "to": { "dateTime":"{{enddate}}" }
    },
    "predicate":{
        "eq": {
            "left": {
                "property": "telemetry",
                    "type": "String"
            },
            "right": "Temperature"
        }
    },
    "aggregates": [
        {
            "dimension": {
                "uniqueValues": {
                    "input": { "property": "tag", "type": "String" },
                    "take": 2
                }
            },
            "measures": [
                {
                    "avg": {
                        "input": { "property": "value", "type": "Double" }
                    }
                },
                {
                    "min": {
                        "input": { "property": "value", "type": "Double" }
                    }
                },
                {
                    "max": {
                        "input": { "property": "value", "type": "Double" }
                    }
                }
            ]
        }
    ]
}

Response Body

Figure 2: The effect of the “predicate” clause on the query response

 

Example 3: Using the date histogram

In this example we take a look at how to report a measure using a date histogram, i.e. by dividing the time axis into a number of fixed buckets and reporting the measure(s) over each individual bucket.

We start the query as usual by defining the “searchSpan”, and we also use the “predicate” clause to restrict our data to that coming from the temperature sensors. We then start the “aggregates” clause and choose the “tag” property as our “uniqueValues” dimension so the histogram calculations will be per “tag”. The “take” clause limits the number of tags considered here to two only. The “dateHistogram” is a dimension in itself and thus it needs to be defined inside an inner “aggregate” clause (note the difference between the outer “aggregates” and the inner “aggregate” clauses).

We define the “dateHistogram” by two properties. The first one is the “input” property, which defines the time axis for the histogram; in this case we use the built-in time variable “$ts” (the $ts property is generated by the TSI event ingestion process based on the information from the event source, e.g. IoT Hub, and can be replaced by another property that represents the time axis if required, as long as it is defined as part of the event). The second property is “breaks”, which defines the time duration of each of the buckets we need to report against. In this case we are dividing the time axis into 20-minute intervals.

We also need the “measures” definition inside the “aggregate” clause. Notice that we have included two measures in this case; the average value of the temperature (defined by the “avg” clause) and the number of values of temperature readings within each interval (defined by the “count” clause). So in this example we have combined two “dimension” properties together to produce the histogram; the “uniqueValues” and the “dateHistogram” dimensions. The results are shown in Figure 3.
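To make the effect of the 20-minute “breaks” size concrete, this small sketch shows how an individual timestamp maps to the start of its bucket. It illustrates the bucketing concept only, not TSI's internal implementation, and assumes a bucket size that divides an hour evenly:

```python
from datetime import datetime, timedelta

def bucket_start(ts, minutes=20):
    """Floor a timestamp to the start of its fixed-size time bucket."""
    return ts - timedelta(minutes=ts.minute % minutes,
                          seconds=ts.second,
                          microseconds=ts.microsecond)

# A reading at 10:47:33 falls into the bucket that starts at 10:40:00.
```

All events whose timestamps floor to the same bucket start are aggregated together, which is exactly what the “avg” and “count” measures report per interval.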

 

Request Body

{
    "searchSpan": {
        "from": { "dateTime":"{{startdate}}" },
        "to": { "dateTime":"{{enddate}}" }
    },
    "predicate":{
        "eq": {
            "left": {
                "property": "telemetry",
                    "type": "String"
            },
            "right": "Temperature"
        }
    },
    "aggregates": [
        {
            "dimension": {
                "uniqueValues": {
                    "input": { "property": "tag", "type": "String" },
                    "take": 2
                }
            },
            "aggregate": {
                "dimension": {
                    "dateHistogram": {
                        "input": { "builtInProperty": "$ts" },
                        "breaks": { "size": "20m" }
                    }
                },
                "measures": [
                    {
                        "avg": {
                            "input": { "property": "value", "type": "Double" }
                        }
                    },
                    {
                        "count": {}
                    }
                ]
            }
        }
    ]
}

 

Response Body

Figure 3: Results for the “dateHistogram”

 

Example 4: Using the numeric histogram

The numeric histogram is used to report the measure (e.g. the count) against value intervals. For example, if we have a sensor reporting some measurement in the range of 10 to 15, we can divide this range into a number of value buckets (10-11, 11-12, and so on). Then we can measure the number of events coming within each range. The way we construct the query is similar to the previous example apart from replacing “dateHistogram” with “numericHistogram”, and then defining how many value intervals we need in the “breaks” clause.

The TSI documentation states that “For numeric histogram, bucket boundaries are aligned to one of 10^n, 2x10^n or 5x10^n values”. So, if we have sensor data that ranges in value between 10 and 15, such as the Pressure sensor in this example, we can choose 5 buckets of size 1, or 3 buckets of size 2, or 50 buckets of size 0.1, and so on, but we cannot have buckets of size 0.25 or 0.3. We can only set the value of “breaks” in the query, not the bucket size itself; TSI will work out the size of the buckets accordingly.

For the range of 10-15 that we have here, if we choose “breaks” to be 5 then we will get 5 buckets of size 1 each; however, if we ask for 15 buckets then we will only get back 10 buckets of size 0.5 each. The results are shown in Figure 4 for “breaks” set to 5 and 15.
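Our reading of that alignment rule can be sketched as follows: take the raw bucket size (the value range divided by the requested “breaks”) and round it up to the nearest 1x10^n, 2x10^n or 5x10^n value. This is an interpretation on our part; TSI's exact rounding may differ:

```python
import math

def nice_bucket_size(value_span, breaks):
    """Smallest 1x10^n, 2x10^n or 5x10^n value >= value_span / breaks."""
    raw = value_span / breaks
    exponent = math.floor(math.log10(raw))
    for multiplier in (1, 2, 5, 10):  # 10 rounds up into the next decade
        size = multiplier * 10 ** exponent
        if size >= raw - 1e-12:
            return size

# For the 10-15 pressure range (a span of 5):
#   breaks=5  -> buckets of size 1   (5 buckets)
#   breaks=15 -> buckets of size 0.5 (10 buckets)
```

With this rule, requesting 3 breaks over the same range would give 3 buckets of size 2, matching the “3 buckets of size 2” option mentioned earlier.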

 

Request Body

{
    "searchSpan": {
        "from": { "dateTime":"{{startdate}}" },
        "to": { "dateTime":"{{enddate}}" }
    },
    "predicate":{
        "eq": {
            "left": {
                "property": "telemetry",
                    "type": "String"
            },
            "right": "Pressure"
        }
    },
    "aggregates": [
        {
            "dimension": {
                "uniqueValues": {
                    "input": { "property": "tag", "type": "String" },
                    "take": 1
                }
            },
            "aggregate": {
                "dimension": {
                    "numericHistogram": {
                        "input": {
                            "property": "value",
                            "type": "Double"
                        },
                        "breaks": {
                            "count": 5
                        }
                    }
                },
                "measures": [
                    {
                        "count": {}
                    }
                ]
            }
        }
    ]
}

 

Response Body

Figure 4: The “numericHistogram” results

 

Conclusions

The examples we explored here show the power of the aggregation APIs within Azure Time Series Insights in summarising events data and thus providing a way for users to build custom analytics that focus on their specific requirements. In the third part of this blog series we will look at more query capabilities within the TSI APIs.

O365 Groups Tidbit – Create/Delete/Upgrade O365 Groups


Hello All,

As O365 Groups become more important in managing SharePoint, I thought I would provide you with some information about them.

Who should be using O365 Groups?

Groups or people that work in the following manner:

  • Frequent email communication
  • Email distribution lists (Upgrade)
  • Sharing Office documents

Who can create groups?

By default, all users can create O365 Groups. This was done because groups are used in so many different places that group requests could be too much for the helpdesk to keep up with. However, there are times when companies need to restrict the ability to create groups for governance or other reasons; in that case I recommend you follow this article.

The article walks you through the following steps (with in-depth information):

  1. Get the ObjectId of the security group for all users that are allowed to create groups.  You can use the cmdlet Get-AzureADGroup to achieve this.
  2. Get the setting template for Unified Groups, by running the line

$Template = Get-AzureADDirectorySettingTemplate | where {$_.DisplayName -eq 'Group.Unified'}

  3. Then configure the new settings by running the lines

$Setting = $Template.CreateDirectorySetting()

New-AzureADDirectorySetting -DirectorySetting $Setting

$Setting = Get-AzureADDirectorySetting -Id (Get-AzureADDirectorySetting | where -Property DisplayName -Value "Group.Unified" -EQ).id

$Setting["EnableGroupCreation"] = $False

$Setting["GroupCreationAllowedGroupId"] = (Get-AzureADGroup -SearchString "<Name of your security group>").objectid

  4. Save the settings template by running this line

Set-AzureADDirectorySetting -Id (Get-AzureADDirectorySetting | where -Property DisplayName -Value "Group.Unified" -EQ).id -DirectorySetting $Setting

NOTE: You must use the AzureADPreview module to achieve these results, and Azure AD Premium is required.

How to create O365 Groups?

Once you open your environment to self-service creation by end users (or, if creation is restricted, then anybody who has permission to create groups), there are several ways to create O365 Groups:

  1. Outlook – When you create a group through Outlook you get the following objects: Shared Inbox, Shared Calendar, SharePoint Document Library, Shared OneNote Notebook, SharePoint Team Site, and Planner
  2. Teams – When you create a group through Teams you get the following objects: Chat-based workspace, Shared Inbox, Shared Calendar, SharePoint Document Library, Shared OneNote Notebook, SharePoint Team Site, and Planner
  3. Yammer – When you create a group through Yammer you get the following objects: Yammer Group, SharePoint Document Library, Shared OneNote Notebook, SharePoint Team Site, and Planner

Administrators can create groups in the following ways:

  1. PowerShell/API

To create O365 Groups with PowerShell you will first need to connect to Exchange Online and import its cmdlets; the following lines perform this:

$Creds = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection

Import-PSSession $Session

Now we can create a group using the cmdlet New-UnifiedGroup; an example of this would be:

New-UnifiedGroup -DisplayName "My New Group"

If you wanted, you could use several optional parameters like this:

New-UnifiedGroup -DisplayName "My New Group" -Alias "GroupAlias" -SubscriptionEnabled -AutoSubscribeNewMembers -AccessType Private

We can modify the group settings by using the cmdlet Set-UnifiedGroup

Set-UnifiedGroup -Identity "My New Group" -AccessType Public -AlwaysSubscribeMembersToCalendarEvents

We can add Member or Owners by using the cmdlet Add-UnifiedGroupLinks

Add-UnifiedGroupLinks -Identity "My New Group" -LinkType Owners -Links chris@contoso.com          #Adds owner

Add-UnifiedGroupLinks -Identity "My New Group" -LinkType Members -Links george@contoso.com,linda@contoso.com         #Adds members

Note: See Remove-UnifiedGroupLinks to remove Members/Owners from group

  2. You can manually create/modify O365 Groups using the following portals
    1. Azure Active Directory
    2. Office Admin Portal
    3. Exchange Admin Center

How to remove/cleanup O365 Groups?

  1. A great way to automate the cleanup of O365 Groups in your tenant is thru an Expiration Policy which is off by default.  If you configure it, then owners will get an email XX days before it is soft-deleted at which point owners will have XX days to recover it before it is permanently deleted.

Configuring the policy requires Global Admin permission and is done in the AAD portal; you can choose from 180 days, 365 days, or a custom value, which has to be greater than 30 days. In the portal go to Users and Groups -> Group Settings -> Expiration and set the desired policy.

Note: All objects attached to the group including the group itself can have a retention policy, and once the group is deleted those policies will be enforced (For more info see this article)

  2. PowerShell/API

To remove O365 Groups with PowerShell you will first need to connect to Exchange Online and import its cmdlets; the following lines perform this:

$Creds = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection

Import-PSSession $Session

To remove the O365 Group, run the cmdlet Remove-UnifiedGroup:

Remove-UnifiedGroup -Identity "My New Group" -Force

  3. You can manually remove O365 Groups using the following portals
    1. Azure Active Directory
    2. Office Admin Portal
    3. Exchange Admin Center

How to upgrade Distribution lists and which ones can I not upgrade?

There are two ways to upgrade a DL to an O365 Group:

  1. You can use the Exchange Admin center to upgrade all eligible DL’s, see this article for steps.
  2. You can use PowerShell to upgrade individual DLs or all eligible DLs. Cmdlets you will possibly use are Upgrade-DistributionGroup, Get-EligibleDistributionGroupForMigration, and Get-UnifiedGroup
    1. To upgrade a single DL you would run the following command: Upgrade-DistributionGroup -DlIdentities <DLName>
    2. To upgrade multiple DLs you have two choices
      • Upgrade all named DLs: Upgrade-DistributionGroup -DlIdentities <DLName1>,<DLName2>
      • Upgrade all eligible DLs: Get-EligibleDistributionGroupForMigration | Upgrade-DistributionGroup

NOTE: You need to be either an Exchange Admin or a Global admin to perform this task

Any DL that falls into these categories will not be eligible for upgrade:

  • Nested
  • Security groups
  • Dynamic distribution lists
  • On-premises owned

Watch for future posts that look at further management of O365 Groups.

Pax

Introducing the Dynamics 365 Blog [Updated 5/22]


Did you know about the "Dynamics 365 Blog," a useful resource for gathering Dynamics 365-related information?

Please check it regularly and make use of it in your work.

 

 

▼ Visit the Dynamics 365 Blog here

(Cloud) Tip of the Day: Updates to global subscription filtering


Today's tip...

We've made changes to the way subscription filtering works. You can filter subscriptions locally on browser resource blades without affecting the global subscription filter or subscription filtering settings on browser resource blades in other services.

If you need subscription filtering to persist across all browser resource blades, you should always set the subscription filter in the Settings or Directory + Subscriptions pane.

To set the global subscription filter:

  1. Open the Settings or Directory + Subscriptions pane.
  2. Select the subscriptions drop-down list.
  3. Select the subscriptions. You might need to use the filter option to see the desired subscriptions.

Adaptive Application Controls public preview is now available


By / Senior Product Manager, Azure Security Center

At Microsoft Ignite we announced new adaptive application controls, which protect your applications from malware by applying whitelisting rules. Today we are pleased to share the public preview of these capabilities in Azure Security Center.

Application controls such as whitelisting can limit the exposure of vulnerable applications to malware. Rather than trying to keep pace with rapidly evolving malware and new exploits, application whitelisting simply blocks everything except known good applications. For dedicated servers that typically run a fixed set of applications, whitelisting can provide significant additional protection. Application control solutions have existed for some time, but organizations often find them too complex and hard to manage, especially at scale when each server or group of servers needs unique rules.

Adaptive application controls use machine learning to analyze the behavior of your Azure virtual machines, create an application baseline, group the VMs, and recommend and automatically apply the appropriate whitelisting rules. You can view and modify these rules, and receive alerts on them, in Azure Security Center.

Adaptive application controls are currently available for Windows virtual machines running in Azure (all versions, Classic or Azure Resource Manager). To get started, open Security Center and select the Application whitelisting tile.

 

Enable adaptive application controls and apply a policy

From the Adaptive Application Controls blade, you can easily enable adaptive application controls for the VM groups of your choice. You can review the recommendations and create a policy that determines which applications will initially run in audit mode, receiving alerts when an application violates the rules. Creating and applying your own policies reduces management complexity while increasing application protection.

 

Monitor and edit adaptive application controls policies

From the Adaptive Application Controls blade, you can also manage and monitor the existing VM groups that have adaptive application controls policies configured. You can view and modify the whitelisting rules applied to the VMs within a specific group, and receive alerts when those rules are violated. In addition, you can change the mode in which a given policy is applied, and use enforce mode to start blocking unapproved applications. Visibility into your applications' security state helps you stay in control.

These new capabilities are available in the Standard pricing tier of Azure Security Center, which you can try free for 60 days.

See our documentation to learn more about Adaptive Application Controls.

Wish you were here? How to sell collaboration with scenarios


 

You might have seen it on TV. On Saturday mornings, tens of thousands of fans fill the arena. The green flag falls. Dozens of cars drive inches apart at 200 miles an hour, banking through hundreds of 30-degree left-hand turns. This is NASCAR, and it's two days of adrenaline-fuelled excitement, 36 weeks in a row.

Behind the scenes, though, things move even faster. More than 200 employees and vendors work together to produce some of the most popular sporting events in the world. To keep up with broadcast and race logistics, they need to stay in touch any time, on any device. So Microsoft SharePoint Online is their cloud-based digital cockpit.

This is a real world example of collaboration, powered by Microsoft. And it's the perfect way to tell the story, and sell what we have to offer.

Find their goal

What stands out in the scenario you just read? People, processes, and technology. It's not about any one of them, but how they all work together towards a single goal. Broadly speaking, this is what your customers want from collaboration. The first question you should be asking when building their scenario is: What's their goal?

If you read our security strategy blog post, you'll remember the conversation starters. These are top level talking points you can use to spot your customers' goals, and create the scenario that'll get them excited. If you didn't catch the post, or want a refresher, you can read it here.

For now, let's take a look at three conversation starters for selling collaboration scenarios in your customers' businesses.

Modern collaboration with effective teamwork

Connecting people to people. How do your customers' people like to work? With a strong foundation of security, they're free to share files, work on the move, and join virtual teams. So everyone's connected.

Employee engagement and empowerment

Connecting people to information. Do your customers get the most from their data? With a central hub, everyone can share their knowledge, brush up on their training, and streamline task management.

Business transformation

Connecting people to systems. What can collaboration do to put your customers in front? Business transformation finds new ways to use existing technology - and transform business processes.

The partner opportunity

Remember, these are just a few places you can start. Your customers will have their own ambitions - and it's up to you to find a scenario that fits. Make them wish they were here.

This is your opportunity. When you sell scenarios, not technology, there are lots of ways you can add value. Deployment, adoption, managed services, and developer and integration. You can use any and all of these to get your customers to their goal - and generate more profit for your business.

If you want to start using this collaboration strategy to sell scenarios to your customers, you'll want to get your hands on the playbook. Inside, you'll discover some more examples of successful scenarios our partners have sold. You'll also get more detail on the ways you can add value - and the Microsoft tech that can help - as you combine people, processes, and technology in your customers' businesses.

Send Authenticated SMTP with PowerShell


Today, while I was testing out some transport rules, I wanted to send a bunch of test messages to make sure they were firing correctly.  I wanted to create some custom messages and be able to automate them, and I wanted to use an outside relay service that requires SMTP authentication.

It took a good bit of tinkering, but here's what I cobbled together:

# Sender and Recipient Info
$MailFrom = "sender@senderdomain.com"
$MailTo = "recipient@recipientdomain.com"

# Sender Credentials
$Username = "SomeUsername@SomeDomain.com"
$Password = "SomePassword"

# Server Info
$SmtpServer = "smtp.domain.com"
$SmtpPort = "2525"

# Message stuff
$MessageSubject = "Live your best life now" 
$Message = New-Object System.Net.Mail.MailMessage $MailFrom,$MailTo
$Message.IsBodyHTML = $true
$Message.Subject = $MessageSubject
$Message.Body = @'
<!DOCTYPE html>
<html>
<head>
</head>
<body>
This is a test message to trigger an ETR.
</body>
</html>
'@

# Construct the SMTP client object, credentials, and send
$Smtp = New-Object Net.Mail.SmtpClient($SmtpServer,$SmtpPort)
$Smtp.EnableSsl = $true
$Smtp.Credentials = New-Object System.Net.NetworkCredential($Username,$Password)
$Smtp.Send($Message)
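If you would rather do the same thing outside PowerShell, here is a rough Python equivalent using the standard library's smtplib; the server, port, and credentials are the same kind of placeholders as in the script above:

```python
import smtplib
from email.mime.text import MIMEText

def build_message(mail_from, mail_to, subject, html_body):
    """Build an HTML message, mirroring the MailMessage object above."""
    msg = MIMEText(html_body, "html")
    msg["From"] = mail_from
    msg["To"] = mail_to
    msg["Subject"] = subject
    return msg

def send_authenticated(msg, server, port, username, password):
    """Send over an authenticated SMTP session, upgraded to TLS."""
    with smtplib.SMTP(server, port) as smtp:
        smtp.starttls()                  # equivalent of EnableSsl
        smtp.login(username, password)   # equivalent of NetworkCredential
        smtp.send_message(msg)

# msg = build_message("sender@senderdomain.com", "recipient@recipientdomain.com",
#                     "Live your best life now", "<html><body>Test</body></html>")
# send_authenticated(msg, "smtp.domain.com", 2525,
#                    "SomeUsername@SomeDomain.com", "SomePassword")
```

The send call is commented out since it needs a live relay; everything else runs as-is.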

There are a few other interesting properties on the message object and ways to get them in there.  As I am wont to do, I like to dig and poke around:

As you see, there are plenty of things you can do.  Want to add an attachment?  Easy as pie:

$Message.Attachments.Add("C:\Temp\pie.txt")
$Message.Attachments.Add("C:\Temp\pie2.txt")

Of course, maybe you didn't want to share both pie recipes.  I know how you are.  You can remove them (though it's not quite as intuitive, at least when I've tried to do it).

$Message.Attachments.RemoveAt(0) # Remove the attachment at index 0
$Message.Attachments.RemoveAt(0) # Indexes shift down, so the second attachment is now at index 0

One of the interesting properties that we have available is Headers.  The MSDN documentation doesn't have too much on it, but if you want to use it to add a custom header, you can use the ... wait for it ... Add method:

$Message.Headers.Add("X-My-Test-Header","SomeData")

In this example, I populated $Message.Body with a Here-String.  If you have a larger HTML message body, you can also import it using Get-Content:

$Message.Body = Get-Content .\htmlemail.html -Raw  # -Raw keeps it as a single string

Hopefully this is helpful to someone out there in the universe.  If not, just disregard as the ramblings of an increasingly older man.

 


Two Microsoft security advisories released (ADV180012 and ADV180013) on May 21, 2018


Microsoft released two new security advisories on May 21, 2018, to provide customers with guidance on new security vulnerabilities. These new security vulnerabilities are related to the previously disclosed speculative execution side-channel vulnerabilities. This alert provides an overview of the two new Microsoft security advisories, along with links to additional information.

Security Advisory ADV180012 | Microsoft Guidance for Speculative Store Bypass – overview

On May 21, 2018, Microsoft released Security Advisory ADV180012 | Microsoft Guidance for Speculative Store Bypass.  Here is an overview:

Microsoft Security Advisory ADV180012: Microsoft Guidance for Speculative Store Bypass
Executive summary: On January 3, 2018, Microsoft released an advisory and security updates related to a newly discovered class of hardware vulnerabilities (known as Spectre and Meltdown) involving speculative execution side channels that affect AMD, ARM, and Intel CPUs to varying degrees. On May 21, 2018, a new subclass of speculative execution side-channel vulnerabilities, known as Speculative Store Bypass (SSB), was announced and assigned CVE-2018-3639.

An attacker who successfully exploited this vulnerability could read privileged data across trust boundaries. Vulnerable code patterns in the operating system (OS) or in applications could allow an attacker to exploit it. In the case of Just-in-Time (JIT) compilers, such as the JavaScript JIT employed by modern web browsers, it may be possible for an attacker to supply JavaScript that produces native code that could give rise to an instance of CVE-2018-3639. However, Microsoft Edge, Internet Explorer, and other major browsers have taken steps to increase the difficulty of successfully creating a side channel.

At the time of publication, we are not aware of any exploitable code patterns of this vulnerability class in our software or cloud service infrastructure, but we continue to investigate.

Microsoft's mitigation strategy: Microsoft will apply the following strategy to mitigate Speculative Store Bypass:
  • If a vulnerable code pattern is found, we will address it with a security update.
  • Microsoft Windows and Azure will add support for Speculative Store Bypass Disable (SSBD), as documented by Intel and AMD. SSBD inhibits a Speculative Store Bypass from occurring and thus completely removes the security risk. Microsoft is working with AMD and Intel to evaluate the availability and readiness of these features, including microcode where required, and their performance impact.
  • Microsoft will continue to develop, release, and deploy defense-in-depth mitigations for speculative execution side-channel vulnerabilities, including Speculative Store Bypass.
  • Microsoft will continue to research speculative execution side-channel attacks, including engaging with researchers and the speculative execution bounty program.
Answers to anticipated questions: See Security Advisory ADV180012 for the full list of answers to anticipated/frequently asked questions. The list will be updated as new information becomes available.
Recommended actions:
  • To be notified when the content of this and other Microsoft security advisories changes, subscribe to Microsoft Technical Security Notifications – Comprehensive Edition, at this web page: https://technet.microsoft.com/pt-br/security/dd252948.aspx.
  • Familiarize yourself with the content of Security Advisory ADV180012, including technical details, recommended actions, answers to frequently asked questions, and links to additional information.
  • Evaluate the performance implication of SSBD in your environment when it becomes available.
  • Continue to deploy Spectre and Meltdown mitigations, including the processor microcode currently available.
Security Advisory ADV180012 URL: https://portal.msrc.microsoft.com/pt-br/security-guidance/advisory/ADV180012

Security Advisory ADV180013 | Microsoft Guidance for Rogue System Register Read – overview

On May 21, 2018, Microsoft released Security Advisory ADV180013 | Microsoft Guidance for Rogue System Register Read.  Here is an overview:

Microsoft Security Advisory ADV180013: Microsoft Guidance for Rogue System Register Read
Executive summary: On January 3, 2018, Microsoft released an advisory and security updates related to a newly discovered class of hardware vulnerabilities (known as Spectre and Meltdown) involving speculative execution side channels that affect AMD, ARM, and Intel CPUs to varying degrees. On May 21, 2018, Intel announced the Rogue System Register Read vulnerability, which was assigned CVE-2018-3640.

An attacker who successfully exploited this vulnerability could bypass Kernel Address Space Layout Randomization (KASLR) protections. To exploit it, the attacker would have to log on to the affected system and run a specially crafted application. Mitigation of this vulnerability is accomplished exclusively through a microcode/firmware update; there is no additional Microsoft Windows operating system update.

Answers to anticipated questions: See Security Advisory ADV180013 for the full list of answers to anticipated/frequently asked questions. The list will be updated as new information becomes available.
Recommended actions:
  • To be notified when the content of this and other Microsoft security advisories changes, subscribe to Microsoft Technical Security Notifications – Comprehensive Edition, at this web page: https://technet.microsoft.com/pt-br/security/dd252948.aspx.
  • Familiarize yourself with the content of Security Advisory ADV180013, including technical details, recommended actions, answers to frequently asked questions, and links to additional information.
  • Deploy the updated microcode when it becomes available. Surface customers will receive updated microcode in the form of a firmware update through Windows Update. For third-party OEM device hardware, we recommend that customers check with the device manufacturer for microcode/firmware updates.
Security Advisory ADV180013 URL: https://portal.msrc.microsoft.com/pt-br/security-guidance/advisory/ADV180013

The Microsoft Security Update Guide

The Security Update Guide is our recommended resource for information about security updates. You can customize your views and create affected-software spreadsheets, as well as download data through a RESTful API. As a reminder, the Security Update Guide has now formally replaced the traditional security bulletin pages.

Security Update Guide portal: https://aka.ms/securityupdateguide

Security Update Guide FAQ page: https://technet.microsoft.com/pt-br/security/mt791750

Regarding information consistency

We strive to provide you with accurate information through both static (this message) and dynamic (web-based) content. Microsoft security content posted on the web is occasionally updated to reflect new information. If this results in an inconsistency between the information here and the information in Microsoft's web-based security content, the web-based security content is authoritative.

If you have questions about this alert, please contact your Technical Account Manager (TAM) / Service Delivery Manager (SDM).

Thank you,

The Microsoft CSS Security Team

Publicación de dos avisos de seguridad de Microsoft (18012 y 18013) en 21 de mayo de 2018


On May 21, 2018, Microsoft released two new security advisories to provide customers with guidance on new security vulnerabilities. These new vulnerabilities are related to the previously disclosed speculative execution side-channel vulnerabilities. This alert provides an overview of these two new Microsoft security advisories, along with links to additional information.

Security Advisory 180012 | Microsoft Guidance for Speculative Store Bypass: overview

On May 21, 2018, Microsoft published Security Advisory 180012 | Microsoft Guidance for Speculative Store Bypass.  Here is an overview:

Microsoft Security Advisory 180012 Microsoft Guidance for Speculative Store Bypass
Executive summary On January 3, 2018, Microsoft released an advisory and security updates related to a newly discovered class of hardware vulnerabilities (known as Spectre and Meltdown) involving speculative execution side channels that affect AMD, ARM, and Intel CPUs to varying degrees. On May 21, 2018, a new subclass of speculative execution side-channel vulnerabilities known as Speculative Store Bypass (SSB) was announced and assigned CVE-2018-3639.

An attacker who successfully exploited this vulnerability could read sensitive data across trust boundaries. Vulnerable code patterns in the operating system (OS) or in applications could allow an attacker to exploit this vulnerability. In the case of just-in-time (JIT) compilers, such as the JavaScript JITs employed by modern web browsers, an attacker could supply JavaScript that produces native code that could give rise to an instance of CVE-2018-3639. However, Microsoft Edge, Internet Explorer, and other major browsers have taken steps to increase the difficulty of successfully creating a side channel.

At the time of publication, we are not aware of any exploitable code patterns of this vulnerability class in our software or cloud service infrastructure, but we are continuing to investigate.

Microsoft mitigation strategy Microsoft will deploy the following strategy to mitigate Speculative Store Bypass:
  • If a vulnerable code pattern is found, we will address it with a security update.
  • Microsoft Windows and Azure will add support for Speculative Store Bypass Disable (SSBD) as documented by Intel and AMD. SSBD inhibits a speculative store bypass from occurring, thereby completely removing the security risk. Microsoft is working with AMD and Intel to evaluate the availability and readiness of these features (for example, microcode where necessary) and their performance impact.
  • Microsoft will continue to develop, release, and deploy defense-in-depth mitigations for speculative execution side-channel vulnerabilities such as Speculative Store Bypass.
  • Microsoft will continue to research speculative execution side channels, including through engagement with researchers and the speculative execution bounty program.
Answers to anticipated questions Visit Security Advisory 180012 for the complete list of answers to frequently asked or anticipated questions. The list of answers will be updated as new information becomes available.
Recommended actions
  • To be notified when the content of this and other Microsoft security advisories changes, subscribe to Microsoft Technical Security Notifications – Comprehensive Edition, at this web page: https://technet.microsoft.com/es-es/security/dd252948.aspx.
  • Familiarize yourself with the content of Security Advisory 180012, including technical details, recommended actions, answers to frequently asked questions, and links to additional information.
  • Evaluate the performance implications of SSBD in your environment when it becomes available.
  • Continue to deploy the mitigations for Spectre and Meltdown, including the currently available processor microcode. 
Security Advisory 180012 URL: https://portal.msrc.microsoft.com/es-es/security-guidance/advisory/ADV180012

Security Advisory 180013 |
Microsoft Guidance for Rogue System Register Read: overview

On May 21, 2018, Microsoft published Security Advisory 180013 | Microsoft Guidance for Rogue System Register Read.  Here is an overview:

Microsoft Security Advisory 180013 Microsoft Guidance for Rogue System Register Read
Executive summary On January 3, 2018, Microsoft released an advisory and security updates related to a newly discovered class of hardware vulnerabilities (known as Spectre and Meltdown) involving speculative execution side channels that affect AMD, ARM, and Intel CPUs to varying degrees. On May 21, 2018, Intel announced the Rogue System Register Read vulnerability, which was assigned CVE-2018-3640.

An attacker who successfully exploited this vulnerability could bypass Kernel Address Space Layout Randomization (KASLR) protections. To exploit this vulnerability, an attacker would have to log on to an affected system and run a specially crafted application. The mitigation for this vulnerability is exclusively through a microcode/firmware update; there is no Microsoft Windows operating system update.

Answers to anticipated questions Visit Security Advisory 180013 for the complete list of answers to frequently asked or anticipated questions. The list of answers will be updated as new information becomes available.
Recommended actions
  • To be notified when the content of this and other Microsoft security advisories changes, subscribe to Microsoft Technical Security Notifications – Comprehensive Edition, at this web page: https://technet.microsoft.com/es-es/security/dd252948.aspx.
  • Familiarize yourself with the content of Security Advisory 180013, including technical details, recommended actions, answers to frequently asked questions, and links to additional information.
  • Deploy the updated microcode when it becomes available. Surface customers will receive updated microcode in the form of a firmware update through Windows Update. For third-party OEM device hardware, we recommend that customers check with the device manufacturer for microcode or firmware updates.
Security Advisory 180013 URL: https://portal.msrc.microsoft.com/es-es/security-guidance/advisory/ADV180013

The Microsoft Security Update Guide

The Security Update Guide is our recommended resource for information about security updates. You can customize your views and create affected-software spreadsheets, as well as download data through a RESTful API. As a reminder, the Security Update Guide has now formally replaced the traditional security bulletin pages.

Security Update Guide portal:  https://aka.ms/securityupdateguide

Security Update Guide frequently asked questions (FAQ) page: https://technet.microsoft.com/es-es/security/mt791750

Regarding information consistency

We strive to provide you with accurate information through static (this message) and dynamic (web-based) content. Microsoft security content posted on the web is occasionally updated to reflect the latest information. If this results in an inconsistency between the information here and the information in Microsoft's web-based security content, the web-based security content is authoritative.

If you have any questions about this alert, please contact your Technical Account Manager (TAM) or Service Delivery Manager (SDM).

Regards,

Microsoft CSS Security Team

Get answers to your presales and deployment questions by visiting the updated Top Partner Product Questions page!


Explore the recently updated Top Partner Product Questions page, designed as a self-help resource to guide you through the presales and deployment stages of customer projects. Quickly find answers to frequently asked questions as well as helpful documentation and resources, so you no longer have to contact Microsoft for help. Included in the latest updates is a consolidated list of questions by product areas as well as the latest trending product questions over the past few months.

Find answers to the trending questions and bookmark for future reference during the presales and deployment phases of your customer projects: http://aka.ms/TopProductQuestions.

Updated top 5 trending questions:

  1. How can I deploy Azure?
  2. How do I perform migration from IMAP/Staged/Cutover/Hybrid, on-premises system or third-party solution and data to Exchange Online?
  3. How do I migrate to Exchange hybrid and what are the best practices?
  4. How can Azure service be leveraged to fulfill my customers’ requirements to remove IT locally?
  5. What are the features of Dynamics 365?

In addition to this resource, don't forget to leverage the full suite of technical webinars, consultations, and chats available for presales and deployment guidance: http://aka.ms/TechnicalServices.

How to add a field to Item Card in Business Central


In a normal tenant, a user can only personalize pages (hide and move existing controls).

 

To add a new field or control, however, you need an extension.

As a first step, you must create a sandbox tenant from your normal tenant.

 

Sandbox tenant

 

To create the .app file, install Visual Studio Code with the AL Language extension and compile the text (.al) source files (AL: Package).
https://docs.microsoft.com/en-US/dynamics365/Business-Central/dev-ITPro/developer/devenv-get-started
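For the Item Card scenario in the title, the .al sources are typically a table extension plus a page extension. Here is a minimal sketch; the object IDs, names, and the field itself are hypothetical, not taken from any particular project:

```al
tableextension 50100 "Item Ext" extends Item
{
    fields
    {
        // New field stored on the Item table.
        field(50100; "My Custom Field"; Text[50])
        {
            Caption = 'My Custom Field';
            DataClassification = CustomerContent;
        }
    }
}

pageextension 50100 "Item Card Ext" extends "Item Card"
{
    layout
    {
        // Show the new field at the end of the General group on the Item Card.
        addlast(General)
        {
            field("My Custom Field"; "My Custom Field")
            {
                ApplicationArea = All;
            }
        }
    }
}
```

After AL: Package, both objects are compiled into the single .app file you deploy.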

 

AL: Download Symbols

 

AL: Package

 

As a result, you should see an .app file.

 

 

Normal tenant

 

If you need to add another control later, you do not have to start a brand-new extension: write the new .al file (there is no need to compile it separately in VS Code), add it to the existing extension's folder, and compile the package again (do not forget to increase the version number).

 

Event Logs CSV Collector: Created a Graphical User Interface around the Get-EventsFromEventLogs.ps1 script


Hi mates,

 

This is just a quick post to let you know that a graphical front end is available, in case you would prefer some mouse interaction and a more visual way of providing input to the Get-EventsFromEventLogs script.

Basically, this GUI lets you fill in the computers, the event IDs and/or the event sources, the event log types you wish to search in (Application and/or System and/or Security; note that collecting the "Security" event logs requires local admin permissions on whichever machine you want to collect them from), and the event levels you want to search or export.

As you fill in the boxes and check the options, you'll see the command line build itself as options are checked or unchecked, and as the input boxes (Servers, Event ID(s), Event Sources) are filled or cleared:

NOTE: the "auto-fill" part of the PowerShell script is nothing more than a function that is called each time a mouse "click" event is registered on the graphical interface, and each time the text inside a box changes. That function checks all the WPF form controls (a control is a check box, an input box, a button, etc.; an element of the form the user can interact with) and updates the text in the "Function Command Line" box each time a "click" or "text changed" event is detected on the WPF form.
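That event-driven "auto-fill" pattern can be sketched in a few lines of PowerShell. This is a minimal, hypothetical reconstruction, not the actual script: the control names and the -Computers / -EventLogName parameters are illustrative only.

```powershell
# Sketch: wire every WPF control to one "rebuild the command line" function.
Add-Type -AssemblyName PresentationFramework

[xml]$Xaml = @"
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        Title="Events Collector (sketch)" Height="170" Width="420">
    <StackPanel>
        <TextBox  Name="txtServers" Margin="5"/>
        <CheckBox Name="chkApplication" Content="Application log" Margin="5"/>
        <TextBox  Name="txtCommandLine" Margin="5" IsReadOnly="True"/>
    </StackPanel>
</Window>
"@

$Reader = [System.Xml.XmlNodeReader]::new($Xaml)
$Window = [Windows.Markup.XamlReader]::Load($Reader)
$txtServers     = $Window.FindName('txtServers')
$chkApplication = $Window.FindName('chkApplication')
$txtCommandLine = $Window.FindName('txtCommandLine')

function Update-CommandLine {
    # Inspect the form controls and rebuild the displayed command line.
    $cmd = '.\Get-EventsFromEventLogs.ps1'
    if ($txtServers.Text)          { $cmd += " -Computers $($txtServers.Text)" }
    if ($chkApplication.IsChecked) { $cmd += ' -EventLogName Application' }
    $txtCommandLine.Text = $cmd
}

# Re-run the builder on every "click" or "text changed" event.
$txtServers.Add_TextChanged({ Update-CommandLine })
$chkApplication.Add_Click({ Update-CommandLine })

Update-CommandLine
$Window.ShowDialog() | Out-Null
```

The real script does the same thing at a larger scale: one update function, subscribed to the click and text-changed events of every control on the form.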

Here's the whole interface:

NOTE: when the "[X] Save events to file" checkbox is checked, the program automatically saves your results to a file in the same directory where the script is located, and opens the file in Notepad at the end of the execution. If you leave it unchecked, the results are just printed in the underlying PowerShell window (that's great for getting an overview of the events, as well as a summary of the Error / Warning / Critical / Information events, before saving them to a file for further analysis, with the PowerBI template for example 🙂 )

 

NOTE2: by default, if you don't specify a computers list and don't check any log types (App / Sys), the program checks and displays the 30 most recent events from the Application and System event logs of the local machine where the script is executed.

 

NOTE3: the "Speech" section is in alpha for now, as 1) I didn't translate every message into both French and English, and 2) PowerShell waits until the computer finishes speaking before releasing the interface back to you, and I made it speak each time you check a box ... 

 

You can download the PowerShell Events Collector GUI here. Note that I also include the PowerBI template in the archive; it's optional to use, just in case you'd like to try PowerBI for event log analysis some time...

Have a great one,
Cheers
Sam


