
RPO-RTO: Backup and Site Recovery


RPO and RTO in Azure Backup and Azure Site Recovery

 

Hello.

 

Information keeps growing; it is estimated that the amount of data in companies doubles every year, and the biggest challenge is protecting that information reliably. This protection must cover everything from accidental deletions to natural disasters, and for that there are two solutions: backup and site recovery.

 

The capabilities of Backup and Site Recovery are often confused. Both capture data and provide recovery procedures, but their core purposes are different.

Azure Backup backs up data from servers and computers to the cloud. Azure Site Recovery coordinates the replication of virtual and physical machines, as well as failovers between your site and the cloud. You need both for a complete disaster recovery solution: your disaster recovery strategy needs to keep your data safe and recoverable (Backup) and keep your workloads available and accessible (Site Recovery) when incidents occur.

 

To understand the different roles of Backup and Site Recovery, keep these concepts in mind:

 

RPO – Recovery Point Objective

This term defines the time elapsed between the last replication or data recovery point and the moment the service outage occurs; it represents the potential data loss in the business continuity plan.

 

RTO – Recovery Time Objective

This term defines the time elapsed from the moment the service outage occurs until the systems are one hundred percent operational again for end users.
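
As a purely illustrative aid, both measures can be expressed as time spans between three moments: the last recovery point, the outage, and the moment service is fully restored. The timestamps below are made up for the example:

# Illustrative only: compute RPO and RTO from three example timestamps.
$lastRecoveryPoint = Get-Date '2018-07-01 00:00'   # last backup or replication point
$outage            = Get-Date '2018-07-01 20:00'   # moment the service was interrupted
$serviceRestored   = Get-Date '2018-07-03 08:00'   # moment users are fully operational again

$rpo = New-TimeSpan -Start $lastRecoveryPoint -End $outage    # potential data loss
$rto = New-TimeSpan -Start $outage -End $serviceRestored      # downtime until recovery

"RPO: {0:N1} hours of potential data loss" -f $rpo.TotalHours
"RTO: {0:N1} hours until systems are back" -f $rto.TotalHours

With nightly backups this is exactly the "up to 20 hours" RPO scenario described below, while a site recovery solution shrinks both spans to minutes or hours.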

 

 

BACKUPS

A backup solution takes copies of data such as files and databases to protect against data loss, human error, and hardware failures. In other words, it makes copies of the information at different points in time so it can be recovered when needed. Depending on business requirements, there are backup schedules (typically at midnight) and weekly, monthly, and yearly retention periods. That is why backups are ideal for recovering historical information.

 

So we typically have daily recovery points, which means that in a disaster the nearest recovery point is the previous night; this can mean an RPO of up to 20 hours, losing the data from all of the current day's operations.

 

We know that restoring from backups can take a long time, so in an emergency, recovering systems from backups can have a major impact on business continuity, taking up to days depending on the volume of data to recover. This results in a very high RTO and significant business downtime.

 

 

SITE RECOVERY

Site recovery solutions, on the other hand, aim to restore business operations as quickly as possible with minimal data loss. This is achieved by replicating data frequently to an alternate datacenter, either physical or in the cloud, so that if an event interrupts services in the original datacenter, you can resume operating in the shortest possible time and with the least possible data loss.

 

Azure Site Recovery lets you use Azure as the disaster recovery datacenter for your virtual machines. In a world where everyone expects uninterrupted connectivity, keeping infrastructure and applications up and running is more important than ever. The purpose of business continuity and disaster recovery (BCDR) is to restore failed components so the organization can quickly resume normal operations.

 

It is crucial for BCDR planning that the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) be defined as part of the disaster recovery plan. When a disaster strikes the datacenter, with Azure Site Recovery customers can quickly bring online (low RTO) their replicated virtual machines located in the secondary datacenter or in Microsoft Azure, with minimal data loss (low RPO).

 

The Site Recovery service contributes to a robust disaster recovery solution that protects servers and data by automating replication and failover to Azure or to a secondary datacenter.

 

 

The main differences between the goals of Backup and Site Recovery solutions can therefore be summarized as follows:

Concept: Recovery Point Objective (RPO)
Details: The amount of data loss that is acceptable if recovery is needed.
Backup: Backup solutions vary widely in their acceptable RPO. Backups usually have an RPO of one day (daily backups), while database backups can have an RPO as low as 15 minutes.
Disaster Recovery: Disaster recovery solutions have extremely low RPOs; the recovery copy can be only a few minutes old.

Concept: Recovery Time Objective (RTO)
Details: The amount of time it takes to complete recovery of the services.
Backup: Because of the RPOs inherent to backups, the amount of data a backup needs to process is typically very large, which leads to long RTOs. For example, it can take days to restore data from tapes, depending on how long it takes to transport the tapes to the recovery site.
Disaster Recovery: Disaster recovery solutions have a much lower RTO because they stay synchronized with the source servers, so fewer changes need to be processed.

Concept: Retention
Details: How long the data needs to be stored.
Backup: For operational recovery scenarios (data corruption, accidental deletion, OS failures), backups are typically retained for 30 days or less. For regulatory compliance, data may be stored for months or years; backups are ideal for these historical-data scenarios.
Disaster Recovery: Disaster recovery only needs to restore operational data, which typically covers a few hours up to a day. Because of the fine-grained data capture used by disaster recovery solutions, keeping recovery points for long periods is not recommended.

 

 

Thank you, and we hope you find this useful.

 

Regards,

 

Mariano Carro

Send mail to latampts

 

 

 

 

 


July 2018 Hot Sheet partner training schedule


Welcome to the US Partner Community Hot Sheet, a comprehensive schedule of partner training, webcasts, community calls, and office hours. This post is updated frequently as we learn about new offerings, so you can plan ahead. Looking for product-specific training? Try the links across the top of this blog.

Community call schedule

Community calls for the US Partner Community are led by experts from across the US Partner Team, and provide practice-building and business-building guidance.

Community name | July calls information | August calls information
Applications & Infrastructure | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Azure Government | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Business Applications | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Cloud Services Partner Incentives | July 26 | Call schedule will be available soon
Data & Artificial Intelligence (AI) | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Marketing SureStep Office Hours | Every Thursday | Every Thursday
Modern Workplace – Productivity | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Modern Workplace – Security | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Modern Workplace – Windows & Devices | No call in July; look for new schedule soon | No call in August; look for new schedule soon
MPN 101 | July 11: Know before you go to Microsoft Inspire | Call schedule will be available soon
Open Source Solutions | No call in July; look for new schedule soon | No call in August; look for new schedule soon
Partner Insider | No call in July; look for new schedule soon | Call schedule will be available soon

Week of June 25–29

Date | Location | Course, webcast or call | Who should attend
June 26 | Online | Creating apps for the Intelligent Cloud: Serverless and integration scenarios | Technical roles
June 26 | Online | Getting started with Azure Stack | Technical roles
June 27 | Community call | MPN 101: Know before you go to Microsoft Inspire 2018 | Business roles
June 27 | Online | Adopting Microsoft 365 Proactive Attack Detection and Prevention | Technical roles
June 27 | Online | Getting started with Partner Center CSP – Technical scenarios | Technical roles
June 27 | Online | Enhance your business with Dynamics 365 PowerApps and Flows | Business and technical roles
June 29 | Online | What's new in Azure Infrastructure as a service | Technical roles

Week of July 2–6

Date | Location | Course, webcast or call | Who should attend
July 5 | Online | Azure Stack architecture & deployment | Technical roles

Week of July 9–13

Date | Location | Course, webcast or call | Who should attend
July 10 | Online | Introduction to Skype for Business | Technical roles
July 10 | Online | What's new & highlights in Business Applications | Business and technical roles
July 11 | Community call | MPN 101: Know before you go to Microsoft Inspire 2018 | Business roles
July 11 | Online | What’s new in Office 365 | Business and technical roles
July 11 | Online | Introduction to Microsoft 365 Deployment | Technical roles
July 12 | Online | Enhance your business with Skype for Business Online Academy | Technical roles
July 12 | Online | Partner Center CSP – Application onboarding | Technical roles
July 13 | Online | Creating apps for the Intelligent Cloud: Architecting cloud apps for scale | Technical roles

Week of July 16–20

Date | Location | Course, webcast or call | Who should attend
July 15–19 | Las Vegas, NV | Microsoft Inspire | Business, sales, and technical roles
July 16 | Online | Introduction to Microsoft 365 Management | Technical roles
July 17 | Online | What’s new and highlights in Business Applications | Business and technical roles
July 17 | Online | Adopting Microsoft Teams | Technical roles
July 18 | Online | Adopting Microsoft 365 powered device: Deployment | Technical roles
July 19 | Online | Cortana Intelligence Suite: Big Data Analytics using Data Lake | Technical roles

Week of July 23–27

Date | Location | Course, webcast or call | Who should attend
July 22–24 | Seattle, WA | Microsoft Business Applications Summit | Analysts, business users, IT professionals, developers, and Microsoft Business Applications partners
July 24 | Online | Adopting Microsoft 365 powered device: Management | Technical roles
July 24 | Online | Introduction to Azure Site Recovery and Backup | Technical roles
July 24 | Online | Introduction to Dynamics 365 Customer Engagement: Technical onboarding | Technical roles
July 25 | Online | Introduction to Microsoft Azure IaaS | Technical roles
July 26 | Community call | Cloud Services Partner Incentives | Business roles
July 26 | Online | Migrating Applications to Microsoft Azure | Technical roles
July 26 | Online | What's new in Azure Infrastructure as a service | Technical roles
July 26 | Online | Introduction to Dynamics 365 Customer Engagement: Basics of customization | Technical roles

Week of July 30–August 3

Date | Location | Course, webcast or call | Who should attend
July 30 | Online | Introduction to Microsoft 365 Security and Compliance | Technical roles

Microsoft 2018 events

Microsoft Inspire 2018: July 15–19 in Las Vegas, Nevada

Microsoft Ignite 2018: September 24–28 in Orlando, FL

Virtual 2018 U.S. One Commercial Partner (OCP) Partner Briefing (on demand)

Watch the Windows Server Summit on demand


I'm not the type of person who sets an alarm at an unappealing time of the morning to watch an online event, especially knowing that I can watch the replay shortly afterwards. No, I'm not talking about the World Cup, I'm talking about the Windows Server Summit. I'm still working my way through the sessions that are most important to me, and over on the Storage at Microsoft blog, Cosmos Darwin has posted the five big announcements for Storage Spaces Direct (S2D) and Hyper-Converged Infrastructure (HCI). Note that Cosmos is focused on storage, so his top 5 list could be quite different to yours or mine. Once we've got an understanding of the SKU lineup inclusions, I'll put something similar together for features that are in the Standard edition.

Cosmos' top 5 listing is...

Go bigger, up to 4PB

The new maximum size per storage pool is 4 petabytes (PB), or 4,000 terabytes. All related capacity guidelines and/or limits are increasing as well: for example, Storage Spaces Direct in Windows Server 2019 supports twice as many volumes (64 instead of 32), each twice as large as before (64 TB instead of 32 TB).
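
For example, on a Windows Server 2019 S2D cluster one of those larger volumes could be created with a single New-Volume call. This is a hedged sketch only; the volume name is made up, and S2D* assumes the default pool naming, so check Get-StoragePool in your own cluster:

# Hedged sketch: create a 64 TB cluster-shared ReFS volume on the S2D pool.
New-Volume -StoragePoolFriendlyName S2D* `
           -FriendlyName "Volume01" `
           -FileSystem CSVFS_ReFS `
           -Size 64TB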

True two-node at the edge

Need to set up a two-node cluster in a branch or disconnected location? Want to use the USB drive capability of your router to act as the witness? Well, provided your router supports SMB2 (no, not SMB1), this is something that can now be done. New documentation is coming that lists the compatible hardware, and it's a gentle reminder to those with older routers that haven't received security updates for a while that it might be time to update or replace them.

Drive latency outlier detection

Drives with abnormal behavior, whether it’s their average or 99th percentile latency that stands out, are automatically detected and marked in PowerShell and Windows Admin Center as “Abnormal Latency” status. This gives Storage Spaces Direct administrators the most robust set of defenses against drive latency available on any major hyper-converged infrastructure platform.

Faster mirror-accelerated parity

In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled relative to Windows Server 2016! Mirror continues to offer the best absolute performance, but these improvements bring mirror-accelerated parity surprisingly close, unlocking the capacity savings of parity for more use cases.

Greater hardware choice

Since Ignite 2017, the number of available hardware SKUs has nearly doubled, to 33. To deploy Storage Spaces Direct in production, Microsoft recommends Windows Server Software-Defined hardware/software offers from our partners, which include deployment tools and procedures. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly.

Head on over to read the full post.

Dynamics 365 PSA Implementation Support Service


[Provided by: Avanade Inc.]

Extends Dynamics 365 for Project Service Automation (PSA) and optimizes it for your business

 

Supports the entire service delivery lifecycle (sales, operations, and closure).
It streamlines and optimizes project management and resource management, the profit drivers specific to service businesses, and, through integration with Microsoft technologies, provides business support across the entire service delivery lifecycle.

 

■ Challenges addressed

Configure projects to match each customer and centrally manage the people, costs, and materials involved in the project. Improve employee productivity while completing projects on schedule and within budget.

 

■ Pricing

Pricing is handled individually, so please contact us.

 

■ Target industries

Manufacturing, distribution, and others

 

■ Coverage area

Nationwide (Japan)

 

 

 

SharePoint 2016 | CORS | JavaScript/CSOM calls not working/loading in Edge or Chrome when accessing site through Reverse Proxy URL or Network Load Balancer. SharePoint throwing 403 forbidden error.


SYMPTOM
Symptom 1: SharePoint returns an unexpected response (403 error) in the Edge or Chrome browsers, but not in Internet Explorer, whenever a call to client.svc/ProcessQuery is sent to the server as an incoming request.

For example, after adding a people column to a document library and typing in a username such as "test".

Symptom 2: SharePoint returns an unexpected response (403 error) in the Edge or Chrome browsers, but not in Internet Explorer, when running JavaScript from a Content Editor web part.

CAUSE
SharePoint 2016 has a security feature that will compare the actual request URL with the request origin header. If they don't match, the request will be rejected with status 403.

To verify whether this is the problem, add a hosts file entry on your local client machine that resolves the SharePoint web site URL to a SharePoint web front-end server IP address, bypassing the Network Load Balancer or Reverse Proxy.
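
For example, from an elevated PowerShell prompt on the test client (the IP address and host name below are placeholders based on this post's example environment):

# Run elevated. Point the public URL straight at one SharePoint WFE,
# bypassing the reverse proxy / load balancer for testing only.
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.2.53 melissa.contoso.com"
ipconfig /flushdns   # flush the local DNS cache so the new entry takes effect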

RESOLUTION
Microsoft recommends configuring a rule in your Reverse Proxy or Network Load Balancer to adjust the origin to match the original request.

In case you don't have access to this, you can create a rewrite rule in IIS. Implement the following IIS inbound rewrite rules to overcome the 403 error for JavaScript/CSOM calls that fail to work or load when accessing the site through the Reverse Proxy or Network Load Balancer URL.

Before trying out anything you find on the internet, make sure you are in a testing environment and have known good backups.

1.  Make sure URL Rewrite is available
               Download and install the IIS rewrite module: https://www.iis.net/downloads/microsoft/url-rewrite
               Close and reopen IIS

2.  Configure Rewrite Rules and add Server Variables:
               Go to your SharePoint site.
               Click on URL Rewrite:



On the Right under Actions, click on View Server Variables
- Add this to allowed server variables:
HTTP_Origin
HTTP_HOST


Click on Back to Rules under Actions menu on the right. Then, click on Create an inbound rule:


- Create a new inbound rule
- Add this as regular expression filter:
.svc.+
- In Server Variables, click Add
- Use this information:
Name: HTTP_Origin
Value: http://{HTTP_HOST}
- For action choose 'None'
- Save the rule

- Create another new inbound rule to allow rewrite for the java scripts
- Add this as regular expression filter:
_api.+
- In Server Variables, click Add
- Use this information:
Name: HTTP_Origin
Value: http://{HTTP_HOST}
- For action choose 'None'
- Save the rule


In applicationHost.config you should see something like this (there may be other variables from other rules; leave them alone, but make sure these two are included):
<rewrite>

<allowedServerVariables>

<add name="HTTP_Origin" />

<add name="HTTP_HOST" />

</allowedServerVariables>

</rewrite>

In web.config, you should see this:

<rewrite>

<rules>

<clear />

                <rule name="Origin">

                    <match URL=".svc.+" />

                    <serverVariables>

                        <set name="HTTP_Origin" value="http://{HTTP_HOST}" />

                    </serverVariables>

                    <action type="None" />

                </rule>

<rule name="Origin2">

                    <match URL="api.+" />

                    <serverVariables>

                        <set name="HTTP_Origin" value="http://{HTTP_HOST}" />

                    </serverVariables>

                    <action type="None" />

                </rule>

</rules>

</rewrite>

MORE INFORMATION

https://support.microsoft.com/en-us/help/2818415/supportability-of-rewrites-and-redirects-in-sharepoint-2013-2010-and-2

DATA ANALYSIS:

From the client machine where you just configured Fiddler, browse to the site with the public URL for the zone.

From Fiddler: note the Origin and Host headers in the Headers tab in the upper right, and find the correlation ID to search for in your SharePoint logs; it is listed in the Miscellaneous section in the lower right as request-id or SPRequestGuid:

Symptom 1: Fiddler


05/29/2018 18:45:08.23    w3wp.exe (0x1DF4)    0x2618    SharePoint Foundation    Logging Correlation Data    xmnv    Medium    Name=Request (POST:http://sp.contoso.com/sites/corstest/_vti_bin/client.svc/ProcessQuery)    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    agw10    Medium    Begin CSOM Request ManagedThreadId=6, NativeThreadId=6148    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    azvn3    Medium    Request is a Cross-Origin request. Origin is : 'http://melissa.contoso.com'. Host is : http://sp.contoso.com/_vti_bin/client.svc/ProcessQuery    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    azvn4    Medium    Request is a Cross-Origin request for a user that was not authenticated using OAuth. Returning 403    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    aiv4g    Medium    OnBeginRequest returns false, do not need to continue process the request.    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x0934    SharePoint Foundation    Runtime    aoxsq    Medium    Sending HTTP response 403 for HTTP request POST to http://sp.contoso.com/_vti_bin/client.svc/ProcessQuery    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x0934    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope: (Request (POST:http://sp.contoso.com/sites/corstest/_vti_bin/client.svc/ProcessQuery)) Execution Time=14.3752; CPU Milliseconds=10; SQL Query Count=0;Parent=None    06046c9e-92a0-40c7-b2ae-2165c547d61c


Symptom 2:

Right-click the SPRequestGuid in the Miscellaneous section in the lower right, choose copy value only, then open the SharePoint ULS logs and search for the correlation ID:

Remember, we browsed to http://melissa.contoso.com. Here is an excerpt from the correlation id in this instance:

05/21/2018 18:49:04.38    w3wp.exe (0x163C)    0x1E10    SharePoint Foundation    Logging Correlation Data    xmnv    Medium    Name=Request (POST:http://sp.contoso.com/sites/corstest/_api/contextinfo)    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.38    w3wp.exe (0x163C)    0x1E10    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.38    w3wp.exe (0x163C)    0x1E10    SharePoint Foundation    Audience Validation    a9fy7    Medium    The audience uri loads a web application matches. AudienceUri: 'http://melissa.contoso.com/', InputWebApplicationId: '8e26ceaa-446b-45bc-ba30-4fc65baeec0f', InputURLZone: 'Default'.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x1D24    SharePoint Foundation    CSOM    agw10    Medium    Begin CSOM Request ManagedThreadId=42, NativeThreadId=7460    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x1D24    SharePoint Foundation    CSOM    azvn3    Medium    Request is a Cross-Origin request. Origin is : 'http://melissa.contoso.com'. Host is : http://sp.contoso.com/_vti_bin/client.svc/contextinfo    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x1D24    SharePoint Foundation    CSOM    azvn4    Medium    Request is a Cross-Origin request for a user that was not authenticated using OAuth. Returning 403    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    Runtime    aoxsq    Medium    Sending HTTP response 403 for HTTP request POST to http://sp.contoso.com/_vti_bin/client.svc/contextinfo    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    General    azrx9    Medium    LookupHostHeaderSite: Using site lookup provider Microsoft.SharePoint.Administration.SPConfigurationDatabaseSiteLookupProvider for host-header site-based multi-URL lookup string http://sp.contoso.com/sites/corstest for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope: (Request (POST:http://sp.contoso.com/sites/corstest/_api/contextinfo)) Execution Time=34.3893; CPU Milliseconds=21; SQL Query Count=14; Parent=None    1271699e-0247-40c7-b2ae-2e61ad704f51

Here in the network trace, we can see the request coming from 192.168.2.51, which is where the reverse proxy is running, and we can see SharePoint (192.168.2.53) reply with the 403 Forbidden error message. Note the host and origin are highlighted and do not match, resulting in the 403 error.


SETUP/SCENARIO

Symptom 1:

  1. Configure environment with a path based site collection.
  2. Create a document library
  3. Add a person column to the library
  4. In Chrome, browse the library with reverse proxy URL


Symptom 2:

  1. Configure environment with a path based site collection.
  2. Configure the site collection to run JavaScript from a content editor web part
    1. Have a content editor web part configured on a page, for example: http://sp.contoso.com/sites/corstest/SitePages/example.aspx
    2. Upon editing the content editor web part, there is a content link set to /sites/corstest/SiteAssets/example.js
  3. Find the example.js code at the end of the post.

CONFIGURATION

  1. Starting config - Alternate Access Mappings

    Prior to configuring the SharePoint to use a different URL and configure the reverse proxy:

    The web app URL: http://sp


    The alternate access mapping:


  2.  Modified config – AAMs. FYI, AAMs are deprecated in SP 2016.

    Configured AAM for the "new" URL:


    Which automatically updates the web application URL:


    Add a DNS entry or hosts file for melissa.contoso.com.

    Irrespective of browser, there is no issue or 403 error browsing to http://melissa.contoso.com or loading the example.aspx page referencing the JavaScript.

    *Please note if the public URL for the zone is added as https, then on the SP servers in IIS it will be necessary to add a binding for https port 443 and an SSL certificate.

  3. Configure Fiddler as a reverse proxy. There's lots of documentation and videos on this, but here is the short of it.

    Tools, Options, Connections tab: check "Allow remote computers to connect"


Then, back at the menu bar, select Rules, Customize Rules, and the Fiddler ScriptEditor window should open. From its menu, click Go, then Go to OnBeforeRequest.

Add the following (with your own URLs, of course) after the comments in that section:

static function OnBeforeRequest(oSession: Session) {

    if (oSession.HostnameIs("melissa.contoso.com"))

    {

        oSession.hostname="sp.contoso.com";

    }

}

JAVASCRIPT CODE EXAMPLE

The example.js contents are:

<html>

<head>

<title>Cross-domain sample</title>

</head>

<body>

<!-- This is the placeholder for the announcements -->

<div id="renderAnnouncements"></div>

<script

type="text/javascript"

src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js">

</script>

<script type="text/javascript" src="//ajax.aspnetcdn.com/ajax/4.0/1/MicrosoftAjax.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.js"></script>

<script type="text/javascript">

//var hostwebURL;

//var appwebURL;

// Load the required SharePoint libraries

$(document).ready(function () {

SP.SOD.executeFunc('sp.js', 'SP.ClientContext', getProjectURL);

// //Get the URI decoded URLs.

// hostwebURL =

// decodeURIComponent(

// getQueryStringParameter("SPHostURL")

// );

// appwebURL =

// decodeURIComponent(

// getQueryStringParameter("SPAppWebURL")

// );

// resources are in URLs in the form:

// web_URL/_layouts/15/resource

var scriptbase = getProjectURL() + "/_layouts/15/";

// Load the js files and continue to the successHandler

$.getScript(scriptbase + "SP.RequestExecutor.js", execCrossDomainRequestA);

});

// Function to prepare and issue the request to get

// SharePoint data

function execCrossDomainRequest() {

// executor: The RequestExecutor object

// Initialize the RequestExecutor with the add-in web URL.

var executor = new SP.RequestExecutor(appwebURL);

// Issue the call against the add-in web.

// To get the title using REST we can hit the endpoint:

// appwebURL/_api/web/lists/getbytitle('listname')/items

// The response formats the data in the JSON format.

// The functions successHandler and errorHandler attend the

// success and error events respectively.

executor.executeAsync(

{

URL:

appwebURL +

"/_api/web/lists/getbytitle('Announcements')/items",

method: "POST",

headers: { "Accept": "application/json; odata=verbose" },

success: successHandler,

error: errorHandler,

crossDomain: true

}

);

}

function successHandlerA(data, req) {

var announcementsHTML = "";

var enumerator = allAnnouncements.getEnumerator();

while (enumerator.moveNext()) {

var announcement = enumerator.get_current();

announcementsHTML = announcementsHTML +

"<p><h1>" + announcement.get_item("Title") +

"</h1>" + announcement.get_item("Body") +

"</p><hr>";

}

document.getElementById("renderAnnouncements").innerHTML =

announcementsHTML;

}

// Function to handle the success event.

// Prints the data to the page.

function successHandler(data) {

var jsonObject = JSON.parse(data.body);

var announcementsHTML = "";

var results = jsonObject.d.results;

for (var i = 0; i < results.length; i++) {

announcementsHTML = announcementsHTML +

"<p><h1>" + results[i].Title +

"</h1>" + results[i].Body +

"</p><hr>";

}

document.getElementById("renderAnnouncements").innerHTML =

announcementsHTML;

}

// Function to handle the error event.

// Prints the error message to the page.

function errorHandler(data, errorCode, errorMessage) {

document.getElementById("renderAnnouncements").innerText =

"Could not complete cross-domain call: " + errorMessage;

}

function execCrossDomainRequestA() {

// context: The ClientContext object provides access to

// the web and lists objects.

// factory: Initialize the factory object with the

// app web URL.

var addinwebURL = getProjectURL();

var context = new SP.ClientContext(addinwebURL);

var factory =

new SP.ProxyWebRequestExecutorFactory(

addinwebURL

);

context.set_webRequestExecutorFactory(factory);

//Get the web and list objects

// and prepare the query

var web = context.get_web();

var list = web.get_lists().getByTitle("Announcements");

var camlString =

"<View><ViewFields>" +

"<FieldRef Name='Title' />" +

"<FieldRef Name='Body' />" +

"</ViewFields></View>";

var camlQuery = new SP.CamlQuery();

camlQuery.set_viewXml(camlString);

allAnnouncements = list.getItems(camlQuery);

context.load(allAnnouncements, "Include(Title, Body)");

//Execute the query with all the previous

// options and parameters

context.executeQueryAsync(

successHandlerA, errorHandler

);

}

function getProjectURL() {

var URLToReturn = "";

var baseURL = document.URL.split("/");

//URLToReturn = baseURL[0] + "//" + baseURL[2] + _spPageContextInfo.siteServerRelativeURL + pageURL + "?" + queryStringKey + "=" + queryStringValue + "&" + categoryString + "&ViewMode=1";

URLToReturn = baseURL[0] + "//" + baseURL[2] + _spPageContextInfo.siteServerRelativeURL;

return (URLToReturn);

}

// Function to retrieve a query string value.

// For production purposes you may want to use

// a library to handle the query string.

function getQueryStringParameter(paramToRetrieve) {

var params =

document.URL.split("?")[1].split("&");

var strParams = "";

for (var i = 0; i < params.length; i = i + 1) {

var singleParam = params[i].split("=");

if (singleParam[0] == paramToRetrieve)

return singleParam[1];

}

}

</script>

</body>

</html>

The End 🙂
Thanks for reading, thank you, and thanks to all those who came before us. Share your experience and submit your suggestions for SharePoint here: https://sharepoint.uservoice.com

[Webinar] Learn the basics of migrating servers to Azure in 60 minutes [Updated 6/28]


<Date and time>

Friday, July 13, 2018, 12:00–13:00

 

<Overview>
This webinar explains the approach and tools for migrating on-premises servers to Azure.
If you are considering moving on-premises servers to the cloud, including as part of your 2008 end-of-support (EOS) planning, please join us.

 

<Agenda>
• The Azure virtual machine environment and points to note
A brief explanation of the Azure VM environment and considerations when migrating.
• Assessing your existing environment with Azure Migrate
An explanation of Azure Migrate, a tool for assessing on-premises virtual environments.
• Server migration to Azure using migration tools
An explanation of server migration centered on Azure Site Recovery, including other tools.

 

<Reference>
This session is intended for those who have experience proposing, designing, building, or operating on-premises Hyper-V or VMware virtual environments.
Azure IaaS will only be touched on briefly in the session; if you would like to understand Azure IaaS beforehand, please watch the on-demand session "Understand everything from the basics of Azure IaaS to VM size selection in one go!".

 

Register for the webinar here

 

 

Blocked attachment file types differ by Outlook version


Hello, this is the Microsoft Japan Outlook support team.

To help you use Outlook safely, Outlook blocks several attachment file types by default.
The blocked file types depend on the Outlook version and which updates have been applied.
As of June 28, 2018, they are as follows.

 

Outlook 2016 MSI version 16.0.4573.1000 or later (with the July 2017 update KB4011052 or later applied)
Outlook 2016 Click-to-Run version 16.0.8004.1000 or later
ade, adp, app, asp, bas, bat, cer, chm, cmd, cnt, com, cpl, crt, csh, der, diagcab, exe,
fxp, gadget, grp, hlp, hpj, hta, inf, ins, isp, its, jar, jnlp, js, jse, ksh, lnk, mad, maf, mag,
mam, maq, mar, mas, mat, mau, mav, maw, mcf, mda, mdb, mde, mdt, mdw, mdz,
msc, msh, msh1, msh2, msh1xml, msh2xml, mshxml, msi, msp, mst, msu, ops, osd,
pcd, pif, pl, plg, prf, prg, printerexport, ps1, ps2, ps1xml, ps2xml, psc1, psc2, psd1, psdm1, pst, reg, scf, scr, sct,
shb, shs, theme, tmp, url, vb, vbe, vbp, vbs, vsmacros, vsw, webpnp, website, ws, wsc, wsf, wsh, xbap, xll, xnk

 

Outlook 2010 14.0.7188.5000 or later (with the September 2017 update KB4011089 or later applied)
Outlook 2013 15.0.4963.1000 or later (with the September 2017 update KB4011090 or later applied)
ade, adp, app, asp, bas, bat, bgi, cer, chm, cmd, cnt, com, cpl, crt, csh, der, exe,
fxp, gadget, grp, hlp, hpj, hta, inf, ins, isp, its, jar, jnlp, js, jse, ksh, lnk, mad, maf, mag,
mam, maq, mar, mas, mat, mau, mav, maw, mcf, mda, mdb, mde, mdt, mdw, mdz,
msc, msh, msh1, msh2, msh1xml, msh2xml, mshxml, msi, msp, mst, ops, osd,
pcd, pif, pl, plg, prf, prg, ps1, ps2, ps1xml, ps2xml, psc1, psc2, pst, reg, scf, scr, sct,
shb, shs, tmp, url, vb, vbe, vbp, vbs, vsmacros, vsw, ws, wsc, wsf, wsh, xbap, xll, xnk
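
Since the list of blocked types depends on the build you are running, it can help to check the installed Outlook version first. This is only a rough sketch: the registry value applies to Click-to-Run installs, and the file path assumes a default 64-bit MSI install location.

# Click-to-Run (Office 2016 / Office 365): the reported build is in the registry.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration').VersionToReport

# MSI installs: read the file version of Outlook.exe (default path is an assumption).
(Get-Item 'C:\Program Files\Microsoft Office\Office16\OUTLOOK.EXE').VersionInfo.FileVersion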

 

Note
The following article describes the file types blocked by Outlook, split into "newer versions" and "Office 2007".
In that article, "newer versions" refers to an Outlook 2016 environment with the latest updates applied.

Blocked attachments in Outlook

________________________________________
The content of this article (including attachments and links) is current as of the date it was written and is subject to change without notice.

Update: Create-LabUsers Tool


Just when you thought it couldn't get more awesome.

It has.

By popular request, I have added a few new features (and fixed an annoyance).  First, the bug fix:

-Count 1

Yes, it's true. If you ran the Create-LabUsers script with -Count 1 and the -InflateMailboxes parameter, you'd run into an issue because of how I calculated the $MaxRecipients value.  Since I didn't want to totally crush the messaging system, I had elected to set $MaxRecipients to the maximum number of mailbox users / 3.  However, for a -Count parameter of 1, this would cause an error with the Get-Random cmdlet, since you can't exactly find a random integer between 1 and 1.  It was definitely an oversight on my part; I never imagined that someone would use a bulk user tool to create just one user.
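
The fix itself is conceptually simple; here's a rough sketch of the guard (variable names are illustrative and not necessarily the ones the script actually uses):

# Avoid calling Get-Random with Minimum equal to Maximum, which throws an error.
$MaxRecipients = [Math]::Max([Math]::Floor($Count / 3), 1)
if ($MaxRecipients -gt 1) {
    $RecipientCount = Get-Random -Minimum 1 -Maximum $MaxRecipients
}
else {
    $RecipientCount = 1   # -Count 1: only one possible value, so skip Get-Random
}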

So, fixed.

Now, on to the new stuff!

Middle Name support

Along with pointing out my -Count oops, Darryl also had an idea for populating the AD middle name.  I had originally just populated the middle initial.  This was easy enough, using the first names seed data ($MiddleName = $Names.First[(Get-Random -Minimum 0 -Maximum $Names.First.Count)]) and then setting $MiddleIntial = $MiddleName[0].

Easy peasy.

CreateResourceMailboxes

I might as well call this update the Darryl Chronicle, since this was also one of his requests.  As part of this update, I added a switch to allow you to create Exchange resource mailboxes:

  • Shared Mailboxes: Random number of shared mailboxes assigned per-department, per location
  • Equipment Mailboxes: Each location receives a fixed number (laptops and projectors)
  • Room Mailboxes: Each location receives a fixed number with varying room capacities
  • Room Lists: After creating the room mailboxes, the script will now create per-location room lists (special distribution lists that contain room objects for use with the Room Finder)
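
To give a feel for what that switch provisions, the objects in the list above map to standard Exchange cmdlets roughly like this (a hedged sketch with made-up names; the script's own logic and parameters will differ):

# Shared, equipment, and room mailboxes, plus a per-location room list.
New-Mailbox -Shared    -Name "Sales Shared - Redmond"
New-Mailbox -Equipment -Name "Projector 01 - Redmond"
New-Mailbox -Room      -Name "Conf Room 1 - Redmond"
Set-Mailbox "Conf Room 1 - Redmond" -ResourceCapacity 12   # vary the room capacity
New-DistributionGroup -Name "Redmond Rooms" -RoomList      # room list for the Room Finder
Add-DistributionGroupMember -Identity "Redmond Rooms" -Member "Conf Room 1 - Redmond"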

The latest version of the script is available on the Technet Gallery at http://aka.ms/createlabusers.


Update: Dynamics 365 Testing Tool


Earlier today, I was notified that the Dynamics 365 network URLs page was updated, so I updated my Dynamics test tool.

But then, I thought, what else could I put in it?

Never one to leave well enough alone, I started tinkering.  The result:

  • Updated network tests for crmdynint.com
  • Updated network tests for passport.net endpoints
  • Updated OS detection and reporting in log file.
  • Updated .NET Framework detection method.
  • Updated .NET Framework proxy detection.
  • Updated netsh proxy detection.
  • Updated TLS 1.2 configuration detection.
  • Added browser version detection for Internet Explorer, Edge, Chrome, and Firefox.
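
As an illustration of the kind of checks the last few items involve, here is a rough sketch (not the tool's actual code) that reads the standard registry locations for the .NET Framework release, the .NET strong-crypto/TLS 1.2 setting, and the Internet Explorer version:

# .NET Framework 4.x release number (maps to a specific framework version).
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release

# Whether .NET applications are allowed to use strong crypto (TLS 1.2) by default.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -ErrorAction SilentlyContinue).SchUseStrongCrypto
(Get-ItemProperty 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -ErrorAction SilentlyContinue).SchUseStrongCrypto

# Internet Explorer version reported by the OS.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Internet Explorer').svcVersion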

And, to boot, I gave it a shiny new URL: http://aka.ms/dynamicstest

4 months until retirement: Access Control Service


Author: Anna Barhudarian (Principal PM Manager, Cloud Identity)

This post is a translation of 4 month retirement notice: Access Control Service, published on June 25, 2018.

 

Access Control Service (ACS) is officially being retired. Existing customers can continue to use it until November 7, 2018; after that date the ACS service will be shut down and all requests will fail.

This post provides additional details that supplement the original announcement (in English) of the ACS retirement.
 

Who is affected

Customers who have created one or more ACS namespaces in their Azure subscription are affected. This includes, for example, Service Bus customers who indirectly created an ACS namespace when they created a Service Bus namespace. If your apps and services do not use ACS, no action is required.
 

What you need to do

If you use ACS, you need to plan a migration. The best migration path depends on how your existing apps and services use ACS. If you need help, use the migration guide. In most cases, migration requires code changes.

You can check whether your apps or services use ACS by using the method described below. After ACS was removed from the Azure portal in April 2018, you had to contact Azure support to get a list of your namespaces; that is no longer necessary.
 

Access Control Service PowerShell is now available

ACS PowerShell is a complete replacement for the ACS functionality of the Azure classic portal. For details, download it by following the instructions in the PowerShell Gallery (in English).
 

How to list and delete ACS namespaces

Once you have installed ACS PowerShell, you can identify and delete your ACS namespaces by following these simple steps.

1. Connect to ACS by using the Connect-AcsAccount cmdlet.

2. List your available Azure subscriptions by using the Get-AcsSubscription cmdlet.

3. List your ACS namespaces by using the Get-AcsNamespace cmdlet.

You are most likely to see ACS namespaces if you signed up for Azure Service Bus before 2014. These namespaces can be identified by the -sb suffix. The Service Bus team provides a migration guide and will continue to post updates on its blog (in English).

4. Disable an ACS namespace by using the Disable-AcsNamespace cmdlet.

This step is optional. If you believe your migration is complete, we recommend disabling a namespace before deleting it. Once disabled, requests to https://{namespace}.accesscontrol.windows.net return a 404 error. The namespace itself is not changed, and it can be restored by using the Enable-AcsNamespace cmdlet.

5. Delete an ACS namespace by using the Remove-AcsNamespace cmdlet.

This step permanently deletes the namespace; it cannot be restored.
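
Putting the five steps together, the overall flow looks roughly like this. This is only a sketch: the -Namespace parameter name and the sample namespace are assumptions, so check Get-Help for the module's actual syntax.

Connect-AcsAccount                              # 1. Sign in to ACS
Get-AcsSubscription                             # 2. List the available Azure subscriptions
Get-AcsNamespace                                # 3. List ACS namespaces (look for the -sb suffix)
Disable-AcsNamespace -Namespace 'contoso-sb'    # 4. Optional: disable before deleting (reversible)
Remove-AcsNamespace  -Namespace 'contoso-sb'    # 5. Permanently delete the namespace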
 

Contact us

For details about the ACS retirement, see the ACS migration guide. If none of the migration options fits your needs, or if you have questions or feedback about the ACS retirement, contact us at acsfeedback@microsoft.com.

 

Proxy settings used by the System Center Configuration Manager client


Hello everyone. This is Shinoki from the SCCM support team.

 

This article describes the proxy settings used by the System Center Configuration Manager Current Branch (SCCM) client.

In particular, we explain how this differs from environments where WSUS is used on its own.

In general, when an application is designed to go through a proxy server, one of two APIs is used: the proxy settings configured in Internet Explorer (WinINet), or WinHTTP.

 

WinINet (Windows Internet)

https://docs.microsoft.com/ja-jp/windows/desktop/WinInet/portal

The Microsoft Windows Internet (WinINet) application programming interface (API) enables applications to access standard Internet protocols, such as FTP and HTTP.

WinHTTP

https://docs.microsoft.com/ja-jp/windows/desktop/WinHttp/about-winhttp

Microsoft Windows HTTP Services (WinHTTP) provides developers with a server-supported, high-level interface to the HTTP/1.1 Internet protocol. WinHTTP is designed to be used primarily in server-based scenarios by server applications that communicate with HTTP servers.


上記
2 つの違いや、プロキシの設定についての詳細につきましては、以下弊社のブログでもご紹介しておりますので、ご確認ください。


ご参考)
IE からみるプロキシの設定について

https://blogs.technet.microsoft.com/jpieblog/2016/08/05/ie-%E3%81%8B%E3%82%89%E3%81%BF%E3%82%8B%E3%83%97%E3%83%AD%E3%82%AD%E3%82%B7%E3%81%AE%E8%A8%AD%E5%AE%9A%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6/


The SCCM client uses the latter of the two, WinHTTP, for its communication.

The SCCM client primarily uses HTTP/HTTPS to communicate with SCCM site servers.

One important point to note is that WinHTTP is also used when communicating with the software update point (SUP: the site system server where the WSUS role is installed, default ports 8530/8531).
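
As a reference, you can inspect (or, if necessary, set) the machine-wide WinHTTP proxy configuration that the SCCM client will use. The proxy server and bypass list below are placeholders:

# Show the current machine-wide WinHTTP proxy configuration
netsh winhttp show proxy

# Example only: set an explicit WinHTTP proxy with a bypass list (placeholder values)
netsh winhttp set proxy proxy-server="proxy.contoso.com:8080" bypass-list="*.contoso.com;<local>"

# Or import the proxy settings currently configured in Internet Explorer (WinINet)
netsh winhttp import proxy source=ie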

 

By default, the SCCM client scans the software update point for software updates every 7 days.

During this scan, the SCCM client calls the Windows Update Agent API to scan against WSUS on the software update point.

Note that this behavior differs from the scan performed by the Microsoft Windows Update client program when WSUS is used on its own.

 

(Reference) Proxy settings used by Windows Update

https://blogs.technet.microsoft.com/jpwsus/2017/03/02/proxy-settings-used-by-wu/

 

(Reference) How the Windows Update client determines which proxy server to use to connect to the Windows Update Web site

https://support.microsoft.com/ja-jp/help/900935/how-the-windows-update-client-determines-which-proxy-server-to-use-to-connect-to-the-windows-update-web-site

We hope this article helps you understand the proxy settings used by the SCCM client.

 

 

- Disclaimer

This document is provided "as is". The information and views expressed in this document, including URLs and other Internet website references, may change without notice. You bear the risk of using it.

Unable to access Crawl History from SharePoint Central Admin


Summary

Have you experienced an issue where "Crawl History" is inaccessible in your farm and throws the error, "Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'"?

If so, you may have come across this blog https://blogs.msdn.microsoft.com/sambetts/2014/12/10/sharepoint-2013-crawl-history-error/ which details how the stored procedure is created and notes that the issue could be that the timer job that completes the provisioning is disabled. In those cases, just enabling and starting the "Search Health Monitoring - Trace Events" timer job does the trick.

However, if this didn't work in your case, please keep reading for a possible workaround...

Problem Description

Unable to access "Crawl History" with error, "Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'"
and the "Search Health Monitoring - Trace Events" timer job is enabled.

 

Example:

 

Result:

Sorry, something went wrong

Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'.

Technical Details

Correlation ID: 9f8f759e-2620-a083-a46d-e8b0cda512ca

Date and Time: 6/28/2018 10:30:04 AM

Cause

The "Search Health Monitoring - Trace Events" timer job unexpectedly fails to execute the provisioning process and the SQL changes are rolled back.

Resolution

To help it along, you can force the provisioning process associated with the "Search Health Monitoring - Trace Events" timer job by executing the following PowerShell commands.

 

 # Load the SharePoint snap-in if it isn't already loaded
 Add-PSSnapin Microsoft.SharePoint.PowerShell -EA 0
 # Get the diagnostics provider and run its provisioning logic
 $diag = Get-SPDiagnosticsProvider -Identity "Search Health Monitoring - Trace Events"
 $diag.OnProvisioning()

 

This process should force the initialization of the missing tables and stored procedures within the Search DB according to the definition of the "Search Health Monitoring - Trace Events" diagnostic provider.

More Information

Get-SPDiagnosticsProvider
https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/get-spdiagnosticsprovider?view=sharepoint-ps

SPDiagnosticsProvider.OnProvisioning method
https://msdn.microsoft.com/en-us/library/office/microsoft.sharepoint.diagnostics.spdiagnosticsprovider.onprovisioning.aspx

Force Protected Apps or Devices | Conditional Access (3 of 4)


 

[Image: scenario summary]

 

We are back today for part three of our four-part series on conditional access scenarios for success. Today, we will discuss how to restrict resource access from mobile devices unless they are managed by Intune (and compliant) or using an approved application (like Outlook mobile). You may want to protect your corporate data, but also want to balance the experience that end users have while using these protected resources. To do this, customers can leverage Conditional Access rules to secure email access while giving end users a choice of which mail client they would like to use.

 

Many users love using the native applications on their mobile devices to access email, while others may be fine using Outlook mobile instead. We can allow users to access email in the application they want while staying secure. Regardless of the choice your end users make, IT can rest assured that they will be accessing mail in a secure way. In this scenario, your users have two choices:

  • Use the native mail client, but enroll my device in Microsoft Intune
  • Use Outlook mobile with Intune App Protection policies applied to secure the corporate data

This scenario enables users to securely access corporate data from their mobile device while giving them options; IT achieves the sweet spot of securing corporate resource access in a way that promotes positive end-user experiences.

 

Scenario Requirements

This scenario is simple to fulfill: all it requires is setting up a conditional access policy. That's it!

  • One Conditional Access policy
    • Policy: scoped to EXO/SPO, targets Mobile Apps and Desktop Clients for Modern Auth and Exchange ActiveSync, and requires devices be either Compliant or using the Approved Client App

With this single policy, we target both the modern auth and Exchange ActiveSync channel, ensuring that with either option a user chooses they will be protected (see Additional Options on how to secure Office 365 for how to protect third party apps).  This gives end users the flexibility they are looking for, while ensuring that corporate data remains secure.

 

Configuration Steps

  • Create the Conditional Access policy to require mobile devices either be enrolled or using an Approved App to access corporate data

 

[Image: Conditional Access policy configuration]

 

Once enabled, this policy will do everything that you need in regards to setting this scenario up.

 

End User Experience

Let's take a look at how these policies impact the end user experience.

When a user tries to set up the native mail client on an iOS 11 device, they will see this message prompting them to enroll in Intune:

 

[Image: block message shown to modern authentication clients]

 

Since this is a modern auth client, we stop them before they can even finish setting up the mailbox on the device. The experience differs a bit using the legacy EAS authentication channel, which allows the mailbox to be set up, but quarantines the device in EXO and prompts the user to enroll. That message looks like this:

 

[Image: email quarantine message]

 

These are the two prompts your end users may see when you set this scenario up, so make sure you communicate what to expect if they try to use the native mail client on their device.

If users are using Outlook mobile, they will be prompted to set up the "broker" app on their device based on their device platform. On iOS devices, they will need to install the Microsoft Authenticator app; on Android, they will need to install the Company Portal (so if you are using Intune App Protection for Android today, end users should already have this installed). Part of this process of installing the broker app is also registering the device with Azure AD. The broker app becomes the manager for connections to Azure AD/Office 365 and is in charge of determining that the application trying to connect to cloud services is indeed an approved application. You can read more about this here: https://docs.microsoft.com/en-us/intune/app-based-conditional-access-intune

 

Additional options to secure Office 365

We have also recently had the option arrive in Conditional Access to block legacy authentication. Until this was available, Conditional Access only worked with modern authentication and EAS clients. We can now block all traffic coming in to Office 365/Azure AD with Conditional Access (including Exchange Web Services, SMTP, POP, IMAP, etc). We strongly recommend you create a simple policy in Conditional Access to target "Other Clients" and block that traffic. This ensures that legacy mail clients using other connection options than modern auth or EAS will be blocked. You can read more about this new functionality here: https://cloudblogs.microsoft.com/enterprisemobility/2018/06/07/azure-ad-conditional-access-support-for-blocking-legacy-auth-is-in-public-preview/

 

In Review

Scenario Goal: Protect corporate data on mobile device while giving users a choice on how they want to use their mobile device

Scenario Scope: iOS/Android

Recommended when…

  • Customers are concerned about protecting mobile access to Office 365
  • There is an end-user population who uses the native applications today
  • Customers want to provide options to end-users in how they access Office 365 data

In the next post of this series, we will shift our focus to how we can ensure users are accessing web content via the Managed Browser instead of the native device browsers. Have more questions about securing mobile device access to Office 365? Have you tried out these conditional access scenarios? Let us know in the comments below!

 

-Josh and Sarah

Microsoft Developer Kits for Windows Mixed Reality and Project Kinect for Azure


Did you know that you can apply for the Windows Mixed Reality and Project Kinect for Azure development kits?

If you have a project you would like to build using Project Kinect for Azure or  Windows Mixed Reality, you can be selected to receive the development kit for the respective program.

Follow the links to apply to the programs:

For Mixed Reality: 

https://iwantmr

For Project Kinect for Azure:

https://aka.ms/iwantkinect

Please don't hesitate to contact us if you need anything.

Adopt faster using Play Sessions


I have always struggled with learning new topics from online videos. Videos are definitely helpful, and I know a lot of people that learn how to cook, earn IT certifications, or even fix a car just by watching videos, but not me.  I prefer classroom training because I get to learn from someone in person and I get the chance to practice, ask questions, and make mistakes.  The more mistakes I make, the more expertise I gain about the topic, because I know what can go wrong and how to troubleshoot when problems arise.  This is why I have always believed that, to achieve better adoption rates, employees should become experts in O365 workloads; we should train our people to become trainers and let them train stakeholders or champion groups with practice sessions.

 

Some time ago, I started an experiment with my colleague to create an adoption whiteboard session for the FastTrack Center in Las Colinas.  We had the idea of having FastTrack Managers and FastTrack Engineers meet in person in a conference room to learn new adoption topics, so we could better explain adoption trends to our customers.  I then mentioned the idea of what we now call Play Sessions.  The name might sound silly, but let me explain.

 

One of the first meetings to include a “Play Session” was for MS Teams. We decided to invite a Teams SME to do a demo and an FM to present a customer-facing deck for the first half of the session.  People loved it, and although they didn’t ask any questions, they seemed to understand. For the second half of the session we created a game, a Play Session, and that's when things got interesting.

I created a list of things to complete in 30 minutes, we created a test team in MS Teams, a scoreboard, and then grouped people in pairs.  They had to complete all the tasks from the Play Session, and every time they finished one they had to run to the scoreboard, and they would earn one point. The pair that finished first would win our first Play Session.

Then… something amazing happened!

Everybody started asking questions, talking amongst each other, and running to the scoreboard.  We discovered that even the FastTrack Engineers we thought understood the first half of the meeting didn’t understand how to do some things that the SME talked about in his demo. At the end of the session, everyone was excited to learn more about other workloads in future sessions.   In the survey, participants mentioned that they felt more prepared to talk about MS Teams to customers and do this same session in their demos.  People that were not using Teams started to use it often.   They all practiced, made mistakes, learned, and had fun at the same time.

We have many great resources to share for O365 learning, but some FMs are going the extra mile and creating play sessions with their customers because they want them to adopt more workloads and to adopt them faster.

Here is what we did in the play session:

Instructions:

  1. Navigate to MS Teams.
  2. You are already invited as an owner to our Play Session Team called “ Teams- Play Session.”

Follow these instructions.  Each time your team finishes a task, go to the whiteboard and check the task that you completed.

 

Task 1: Navigate to the Team “Teams- Play SESSION” and show how you feel right now with a GIF.
Task 2: Create a Channel under that Team and name it with an original name.
Task 3: Install these applications: Polly, Planner, OneNote, and Power BI in the Channel that you just created.
Task 4: Click the Files tab in the Team and edit the document called “TESTING 1 2 3.”  Share with us what you thought about the SME session.
Task 5: Navigate to the team and @mention someone.
Task 6: Navigate to the Store, install the “Growbot” app, and send a Kudo to your team member in your Channel.
Task 7: Send a Kudo with Growbot in the Team.
Task 8: Create a poll in the Team using “Polly,” and make sure to answer other teams' polls.
Task 9: Navigate to Outlook and send an e-mail to the Team's e-mail address.
Task 10: Create a Teams meeting from Outlook for today at 9 P.M. and send it to your other Team member.
Task 11: Find a GIF of your favorite movie and post it in your Team.

 

Why don’t you try doing an exercise like this for your group or team?  The adoption process will quicken, they will learn about the workload, and will be entertained at the same time.

Collaborators of the play session: Camille Jimenez (Relationship Manager), Priya Vanka (Relationship Manager), Alicia Sanchez (FastTrack Manager), Alejandro Lopez (FastTrack Engineer – Teams SME)

 


Configuration Manager – Setting up Cloud Services using Wildcard Certificates


Hello all,

I wanted to take a second to introduce a new contributor to the blog, Matt Toto.  Matt and I have known each other for about 5 years and have even teamed up on some customer engagements recently.  I asked Matt if he'd like to bring his expertise to this blog and he graciously agreed.  With that, take it away Matt....

===================================================================================================================

Hi Everyone!  This is Matt Toto, I'm a ConfigMgr PFE focused on Cloud Services.  In this article I'll be sharing how to use a wildcard certificate for setting up both the Cloud Management Gateway and Cloud Distribution Point.  Support for this capability was added to Configuration Manager in 1802.

Using a wildcard certificate to create cloud services in Configuration Manager (the CMG and CDP) has a lot of benefits.  It reduces the cost and maintenance of PKI.  A single wildcard cert can be used as a management certificate, if you're using the Classic deployment model, as well as to create a potentially unlimited number of CMGs and CDPs.  The process is quite simple.  Let's get started!

 

Step one is to obtain a wildcard cert for your domain, for example *.contoso.com, from either internal PKI or a public provider.  In my lab I use a public cert, provided by DigiCert, for an ARM CMG; this example is based on that configuration.

Next you'll need to create a Cloud Management Gateway in Configuration Manager.  On the General tab, sign in with an administrator account to give Configuration Manager access to your subscription information.

 

 

On the Settings page of the wizard, specify your wildcard certificate and enter its password.  You will receive the following prompt, informing you that the Common Name (CN) of the certificate contains a wildcard.  It's OK, we'll fix that in a bit.  For now, just click OK, ok?

 

 

Initially your screen will look something like this.  Note that it's telling you the name cannot contain special characters, which, for the moment, it does.

 

 

This is where you come in!  Notice the Service FQDN box?  Yes, it does look unhappy with that red splat.  But it also looks like you can type in that box, right?  Normally you cannot enter text here because it is auto-populated based on the CN field of the certificate.  In this case, because it’s a wildcard, you actually NEED to type a unique name here.

 

 

 

Go ahead, type something unique.  All you have to do is come up with a unique name, enter it in the box, then click out of the box.  Once you do that, the Service Name box will display the name you provided and you can continue with the setup.  Like so…

 

 

After finishing the wizard, the service will be provisioned with that value as its Service Name, and the Cloud Service Name will have .cloudapp.net appended.

 

 

Now that the service is provisioned, you'll need to update DNS.  Add a CNAME record that maps the Service Name (the name that your SCCM client will try to resolve) to the Cloud Service Name, which in this example is UniqueName.cloudapp.net.  A quick client-side check is sketched below.
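
If you want a quick sanity check from a client's point of view, something like the following works; the names cmg.contoso.com and UniqueName.cloudapp.net below are just placeholders for whatever you chose above.

------------

# Quick check that the CNAME is in place: the Service FQDN chosen in the wizard
# (placeholder: cmg.contoso.com) should resolve to the same address as the
# generated Cloud Service Name (placeholder: UniqueName.cloudapp.net).
import socket

service_fqdn = "cmg.contoso.com"           # the unique Service FQDN you typed in the wizard
cloud_service = "UniqueName.cloudapp.net"  # the generated Cloud Service Name

service_ip = socket.gethostbyname(service_fqdn)
cloud_ip = socket.gethostbyname(cloud_service)

print(f"{service_fqdn} -> {service_ip}")
print(f"{cloud_service} -> {cloud_ip}")
print("CNAME looks good" if service_ip == cloud_ip else "Names resolve to different addresses - check DNS")

------------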

 

 

 

That's it!  Support for the wildcard certificate is a game changer for setting up the Cloud Management Gateway and Cloud Distribution Point in Azure!

Protect Critical Data With OneDrive & Known Folder Move (KFM)


Despite best intentions, people don't always follow instructions.

When it comes to saving files, we can all be somewhat guilty of saving to the Desktop or Documents folder when we're in a hurry, with the best intention of moving it into a cloud-synced folder structure "when we have time". The reality is that many never get around to it and run the risk of losing critical data.

This is particularly true in a shared device environment like schools, where many IT admins have policies set to reset devices when a user logs out - any content not saved to the cloud or a network file share is going to be lost.

Enter Known Folder Move with OneDrive

Announced yesterday, this feature has the potential to save a lot of heartache for users by ensuring their most likely "dumping grounds" for files and folders are automatically saved into OneDrive:

Known folders are global pointers in Windows representing a location on the user’s drive. They help users to organize their most important files and access them across different applications. KFM helps you move your docs, desktop, and pictures into OneDrive. Even the Screenshots and Camera Roll folders are included when the Pictures folder has opted into KFM.

So how does this look for the end user? Here is a typical Win10 File Explorer view:

KFM1

With KFM configured you can see the folders now redirect to OneDrive (circled in red), while the Downloads folder is not moved to OneDrive. Furthermore, this is leveraging OneDrive Files On-Demand, as you can see from the icons beside each folder: the cloud icon shows a folder is only in OneDrive, and the green tick shows it also remains on the device itself (as well as in OneDrive).

KFM2

There are some cool features for IT admins to enable this further including:

  • GPO to allow either guided or silent opt-in on users' devices (a registry-level sketch of the values involved follows this list) - here's a screenshot of the guided prompt that pops up for users (e.g. students/teachers):

KFM3

  • There is support coming for Intune later this year (phew!)
  • Windows 7/8/10 are all supported with KFM
  • There is a difference between Folder Redirection and Folder Migration, with Migration recommended over Redirection:
    • Folder Redirection redirects a local Windows folder to an equivalent folder in OneDrive but does not migrate any content from the local folder to OneDrive. That’s why folder redirection should only be used on brand new machines that don’t have existing content.
    • Folder Migration redirects a local Windows folder to an equivalent folder in OneDrive and does migrate the content from the local folder to OneDrive. Folder migration can be used on brand-new or existing devices, with or without content.
  • Be aware some file types are not supported in OneDrive.
  • If you've saved OneNote files locally, there is some advice on how to move them to OneDrive here.
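
For admins who want to see what the KFM GPO actually sets under the covers, here is a minimal sketch that reads the OneDrive policy values involved. The value names (KFMOptInWithWizard for the guided experience, KFMSilentOptIn for the silent one) and the expectation that they hold your Azure AD tenant ID are assumptions based on the KFM announcement, so confirm them against the official documentation for your build.

------------

# Minimal sketch: inspect the OneDrive policy values a KFM GPO would populate.
# Value names and the tenant-ID expectation are assumptions from the announcement.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\OneDrive"
KFM_VALUES = ["KFMOptInWithWizard", "KFMSilentOptIn"]

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
        for name in KFM_VALUES:
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(f"{name} = {value}")  # typically your Azure AD tenant ID
            except FileNotFoundError:
                print(f"{name} not configured")
except FileNotFoundError:
    print("OneDrive policy key not found - no KFM policy applied on this device")

------------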

My Point of View:

This is a great feature to add to Windows, because data loss remains a significant risk for end users if they're only keeping files locally on their devices. With the increased cloud storage available to users now, the ability to easily redirect key "dumping areas" to automatically save to OneDrive will no doubt save a lot of users from blushes and heartache.

If you're an IT Admin in a school this is definitely worth checking out when it is released for you to access.

Cloud Platform Release Announcements for June 27, 2018


Azure Data Lake Storage Gen2 in preview

Azure Data Lake Storage Gen2 is a highly scalable, performant, and cost-effective data lake solution for big data analytics. Azure Data Lake Storage Gen2 combines the power of a high-performance file system with massive scale and economy to help you speed your time to insight. It extends Azure Blob Storage capabilities and is optimized for analytics workloads. Store data once and access via existing Blob Storage and HDFS compliant file system interfaces with no programming changes or data copying. Azure Data Lake Storage is compliant with regional data management requirements.

Azure Data Lake Storage Gen2 adds a Hadoop compatible file system endpoint to Azure Blob Storage and delivers the following capabilities:

  • Limitless storage capacity.
  • Support for atomic directory transactions during analytic job execution. This means that analytics jobs will run faster and require fewer individual transactions, thus leading to lower costs for Big Data Analytics workloads.
  • Fine grained, POSIX compliant ACL support to enable granular permission assignments for Data Lake directories and files.
  • Availability in all Azure regions when it becomes generally available.
  • Full integration with Azure Blob Storage.

Azure Data Lake Storage Gen2 will support all Blob tiers (hot, cool, and archive), as well as lifecycle policies, Storage Service Encryption, and Azure Active Directory integration. You can write data to Blob Storage once using familiar tools and APIs and access it concurrently in both Blob and Data Lake contexts.

To learn more about Azure Data Lake Storage, please visit our product page.
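
As a rough illustration of the "store once, access through a file system interface" idea, here is a minimal sketch using the azure-storage-file-datalake Python package (which shipped after this preview announcement); the account name, key, file system, and paths are placeholders.

------------

# Minimal sketch: write a file through the Data Lake (DFS) endpoint of a
# Gen2-enabled storage account. Account name/key and paths are placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalakeacct.dfs.core.windows.net",
    credential="<storage-account-key>",
)

fs = service.create_file_system(file_system="analytics")  # behaves like a Blob container
directory = fs.create_directory("raw/2018/06")            # real directories with POSIX ACL support
file_client = directory.create_file("events.csv")

data = b"id,value\n1,42\n"
file_client.append_data(data, offset=0, length=len(data))
file_client.flush_data(len(data))

# The same object is also reachable through the Blob endpoint, e.g.
# https://mydatalakeacct.blob.core.windows.net/analytics/raw/2018/06/events.csv

------------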

Azure IoT Edge | GA

Announcing the general availability of Azure IoT Edge, a fully managed service that delivers cloud intelligence locally by deploying and running artificial intelligence (AI), Azure services, and custom logic directly on cross-platform IoT devices. With general availability (GA), we are introducing several new features and capabilities, including:

  • Open source release of IoT Edge runtime.
  • Support for Moby container management system.
  • Zero-touch provisioning of edge devices with the Device Provisioning Service.
  • Security Manager with support for a hardware-based root of trust, allowing secure bootstrapping and operation of Edge.
  • Scaled deployment and configuration of Edge devices using Automatic Device Configuration Service.
  • Support for SDKs in multiple languages, including C, C#, Node, Python and Java (coming soon).
  • Tooling for module development including coding, testing, debugging, deployment—all from VSCode.
  • CI/CD pipeline using Visual Studio Team Services.

A number of Azure services are supported on IoT Edge.

To learn more, read the announcement blog.

Azure App Service | Managed Service Identity—GA

Managed Service Identity gives Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, eliminating the need to manage credentials on your own.
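
To make that concrete, here is a minimal sketch of how an App Service app with Managed Service Identity enabled can request a Key Vault token from the local MSI endpoint. The MSI_ENDPOINT/MSI_SECRET environment variables and the 2017-09-01 api-version reflect the App Service MSI protocol of this era; check the current documentation before relying on them.

------------

# Minimal sketch: request an access token for Key Vault from the local App Service
# MSI endpoint. Assumes MSI is enabled on the app; App Service injects
# MSI_ENDPOINT and MSI_SECRET at runtime.
import os
import requests

resp = requests.get(
    os.environ["MSI_ENDPOINT"],
    params={"resource": "https://vault.azure.net", "api-version": "2017-09-01"},
    headers={"Secret": os.environ["MSI_SECRET"]},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The token is then sent as a Bearer token to Key Vault, e.g.
# requests.get(f"{vault_url}/secrets/mysecret?api-version=2016-10-01",
#              headers={"Authorization": f"Bearer {access_token}"})

------------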

Learn more.

Azure Logic Apps | Generally available in China

Azure Logic Apps is now generally available in China.

Logic Apps delivers process automation and integrates applications and data across on-premises, public, or private cloud environments.

Logic Apps enhances productivity with business process automation, EAI, B2B/EDI, and integration of services and applications, using the most common out-of-the-box connectors for Azure services, Office 365, Dynamics CRM, and other services.

Learn more about Logic Apps.

Azure Search | Auto complete and synonyms in preview

New query features in Azure Search

Azure Search has two new features, now in preview. The autocomplete API searches an existing index to suggest terms that complete a partial query. Synonyms allow Azure Search to return not only results that match the query terms typed into the search box, but also results that match synonyms you have defined for those terms.
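
For a sense of what the autocomplete call looks like, here is a minimal REST sketch; the service name, index, suggester name, key, and the preview api-version are placeholders and assumptions, so substitute your own values.

------------

# Minimal sketch of the Azure Search autocomplete REST call.
# Service/index/suggester names, key, and api-version are placeholders/assumptions.
import requests

service = "myservice"
index = "hotels"
api_key = "<query-or-admin-key>"

resp = requests.post(
    f"https://{service}.search.windows.net/indexes/{index}/docs/autocomplete",
    params={"api-version": "2017-11-11-Preview"},
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={
        "search": "sea",               # partial term typed by the user
        "suggesterName": "sg",         # suggester defined on the index
        "autocompleteMode": "oneTerm", # complete the current term only
    },
)
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item["text"], "->", item["queryPlusText"])

------------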

Learn more about Azure Search.

Azure SQL Database | Data Sync—GA

Azure SQL Data Sync general availability

Azure SQL Data Sync provides unidirectional and bidirectional data synchronization capabilities between Azure SQL Database and SQL Server endpoints deployed anywhere in the world. Manage your data sync topology and schema, and monitor sync progress, centrally from the Azure portal. Azure SQL Data Sync also provides a stable, efficient, and secure way to share data across multiple Azure SQL Database or SQL Server databases.

For more information, visit the Azure blog.

Azure SQL Database | Storage add-ons now available

Storage add-ons now generally available in Azure SQL Database

Now generally available, storage add-ons allow the purchase of extra storage without having to increase DTUs or eDTUs. Purchase extra storage for performance levels S3–S12 and P1–P6 databases up to 1 TB, for smaller eDTU premium elastic pools up to 1 TB, and for standard elastic pools up to 4 TB.
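
If you prefer to script this rather than use the portal, the database max size can also be set with T-SQL. Below is a minimal sketch driven from pyodbc; the server, credentials, database, and the 1024 GB figure are placeholders, and the sizes actually allowed depend on the service tier.

------------

# Minimal sketch: raise the max size of an Azure SQL Database with
# ALTER DATABASE ... MODIFY (MAXSIZE = ...). Connection details and the target
# size are placeholders; allowed values depend on the service tier.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=sqladmin;PWD=<password>",
    autocommit=True,
)
conn.cursor().execute("ALTER DATABASE [mydb] MODIFY (MAXSIZE = 1024 GB);")

------------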

Learn more about these add-on storage options on the Azure blog.

Azure SQL Database | Zone Redundancy—GA

Zone redundant configuration for premium service tier of Azure SQL Database now generally available.

Announcing the general availability of zone redundant premium databases and elastic pools in select regions. The built-in support of Availability Zones further enhances business continuity of Azure SQL Database applications and makes them resilient to a much larger set of unplanned events, including catastrophic datacenter outages. The supported regions include Central US and France Central with more regions to be added over time.

Learn more.

Azure Event Hubs | Availability Zones support in preview

Availability Zones support for Event Hubs now in preview

With Azure Availability Zones support for Event Hubs, you can build mission-critical applications with higher availability and fault tolerance by using cloud messaging between applications and services.

Azure Availability Zones support for Event Hubs provides an industry-leading, financially-backed SLA with fault-isolated locations within an Azure region, providing redundant power, cooling, and networking. The preview begins with Central US and France Central, and is available to all Event Hubs customers at no additional cost.

Learn how to explore Azure Availability Zones support for Event Hubs.

Azure Database for MySQL and Azure Database for PostgreSQL (open source database services) | Gen 5 new regions—GA

Azure Database for MySQL and PostgreSQL: Extended regional availability and memory optimized pricing tier

Azure Database for MySQL and Azure Database for PostgreSQL availability has been extended to the following regions: Central US (Gen4), North Central US (Gen5), France Central (Gen5), East Asia (Gen5), India Central (Gen5), India West (Gen5), and Korea Central (Gen5). You can now create and switch to the new memory optimized pricing tier, which is designed for high-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency.

Azure SQL Database | Elastic Jobs in preview

Elastic Database Jobs preview now available for Azure SQL Databases

Now available in preview, Azure Elastic Database Jobs is a fully Azure-hosted service that's easy to use for executing T-SQL based jobs against groups of databases. Elastic jobs can now target databases in one or more Azure SQL Database servers, Azure SQL elastic pools, or across multiple subscriptions. Elastic jobs can be composed of multiple steps and can dynamically enumerate the list of targeted databases as databases are added to or removed from the service.
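
As a rough sketch of what defining a job looks like, the snippet below calls the jobs.* stored procedures in the job database over pyodbc. The stored procedure and parameter names follow the Elastic Jobs documentation as best I recall, and the connection string, credential, target group, and T-SQL command are placeholders, so verify everything against the current docs.

------------

# Rough sketch: define and start an elastic job via the jobs.* stored procedures
# in the job database. All names and the connection string are placeholders;
# verify procedure/parameter names against current documentation.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myjobserver.database.windows.net;DATABASE=jobdb;"
    "UID=jobadmin;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()

cur.execute("EXEC jobs.sp_add_target_group @target_group_name = N'AllAppDbs'")
cur.execute("""EXEC jobs.sp_add_target_group_member
                   @target_group_name = N'AllAppDbs',
                   @target_type = N'SqlServer',
                   @server_name = N'myappserver.database.windows.net'""")
cur.execute("EXEC jobs.sp_add_job @job_name = N'NightlyMaintenance'")
cur.execute("""EXEC jobs.sp_add_jobstep
                   @job_name = N'NightlyMaintenance',
                   @command = N'EXEC sp_updatestats;',
                   @credential_name = N'job_run_credential',
                   @target_group_name = N'AllAppDbs'""")
cur.execute("EXEC jobs.sp_start_job @job_name = N'NightlyMaintenance'")

------------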

Learn more on the Azure blog.

Azure SQL Database | Resumable index creation in preview

Resumable online index create feature of Azure SQL Database in preview

The resumable online index create feature (in preview) lets you pause an index create operation and resume it later from where it was paused or failed. With this release, the resumable functionality that was already available for online index rebuild is extended to index creation as well.
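
For reference, here is a minimal sketch of the T-SQL involved, driven from pyodbc; the table, index, and connection details are placeholders, and the exact option set should be checked against the current preview documentation.

------------

# Minimal sketch: create an index as a resumable online operation. Table, index,
# and connection details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=sqladmin;PWD=<password>",
    autocommit=True,
)
conn.cursor().execute("""
    CREATE INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);
""")

# From another session, the in-flight operation can be paused and resumed:
#   ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;
#   ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;
# Progress is visible in sys.index_resumable_operations.

------------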

Learn more.

Azure Dev Spaces | Preview

Imagine you are a new employee trying to fix a bug in a complex microservices application consisting of dozens of components, each with their own configuration and backing services. To get started, you must configure your local development environment so that it can mimic production, then set up your IDE, build tool chain, containerized service dependencies, a local Kubernetes environment, mocks for backing services, and more. With all the time involved setting up your development environment, fixing that first bug could take days. With Azure Dev Spaces, a feature of Azure Kubernetes Service (AKS) now in preview, the process can be drastically simplified.

Using Azure Dev Spaces, all a developer needs is their IDE and the Azure CLI. Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal machine setup, developers can iteratively run and debug containers directly in AKS, even in complex environments. Teams can share an AKS cluster to collaboratively work together, with each developer able to test end-to-end with other components without replicating or mocking up dependencies. They can also use Dev Spaces to develop on the OS of their choice—Windows, Mac, or Linux—using familiar tools like Visual Studio, Visual Studio Code, or just the command line.

Learn more.

Improved user experience for navigation in Visual Studio Team Services

Announcing the preview of an improved navigation user experience (UX) for Visual Studio Team Services. The goal of this new experience is to give users clean, modern, task-focused navigation while enabling more functionality. It also allows customers to decide how much complexity they would like to expose to their users by enabling or disabling parts of Visual Studio Team Services, such as version control or build, and it includes other improvements to notifications and the homepage. Additional improvements will be coming soon.

For all the details on what’s new with this release and to learn how to turn it on for testing, see our detailed blog post.

Azure Active Directory (Azure AD) | Password protection in preview

One weak password is all a hacker needs to get access to a corporation’s resources. With Azure AD password protection, you can now secure against this vulnerability. This security feature within Azure AD has capabilities such as banned passwords and smart lockout, and delivers on a hybrid promise by extending the protection to identities in the cloud and on-premises.

The banned passwords capability enables you both to restrict users from setting common passwords such as “password123”, and to define a custom set of banned passwords such as “companyname123”.

Additionally, you can set policies to define the password complexity you want to enforce from a security or compliance standpoint. Also part of password protection, smart lockout enables you to set policies for the number of times a user can fail authentication before being locked out.

With Azure AD password protection, you can bring together the power of cloud-powered protection and flexible policy definition, as well as protect against password spray attacks on your corporate resources.

To learn more, view the full blog post.

Get started today by trying out this preview for yourself.

Azure AD conditional access VPN connectivity | GA

Announcing the general availability of the support of Azure AD conditional access for Windows 10 VPN clients. With this feature, the VPN client is now able to integrate with the cloud-based Conditional Access Platform to provide a device compliance option for remote clients. This allows conditional access to be used to restrict access to your VPN in addition to your other policies that control access based on conditions of user, location, device, apps and data.

Get started today and learn more by visiting our documentation website.

Azure AD conditional access | What If GA

Announcing the general availability of the Azure AD conditional access What If tool. As you continue to create multiple policies within conditional access, the What If tool allows you to understand the impact of your conditional access policies on your environment and users. Instead of test-driving your policies by performing multiple sign-ins manually, this tool enables you to evaluate a simulated sign-in of a user. The simulation estimates the impact this sign-in has on your policies and generates a simulation report. The report lists not only the applied conditional access policies, but also classic policies if they exist. This tool is also handy for troubleshooting when a particular user will be affected by a policy.

Get started today with this tool and visit our documentation site to learn more.

Configuration Manager – Addressing ReportServer Database Transaction Log Bloat


Hello, this is the System Center support team.

In this post we introduce how to deal with transaction log bloat in the Configuration Manager ReportServer database.

 

In environments that use the Configuration Manager reporting feature, we frequently receive inquiries where the transaction log file of the ReportServer database has grown large and is filling up the disk.

This happens when the SQL Server Reporting Services (SSRS) ReportServer database is run in the full recovery model and transaction log truncation is not being performed.

 

The SSRS database used by Configuration Manager can basically be recovered by rebuilding the reporting services point role even if it becomes corrupted, so for everyday operation backups are rarely needed and the simple recovery model is fine. (If you use custom reports, backups are important; see the reference information at the end of this post.)

 

If the ReportServer database has bloated and is squeezing disk space in your environment, use the procedure below in SQL Server Management Studio to change the recovery model to simple and shrink the transaction log.

 

=====================================================

- Procedure: changing the database recovery model and shrinking the transaction log

=====================================================

 

1) Connect to the target SQL Server with SQL Server Management Studio.

 

2) Click the "New Query" button at the top left to open a query window.

 

3) Check the current transaction log size and usage.

------------

dbcc sqlperf(logspace);

go

------------

* If the log has bloated, you will see that the target database has a large transaction log file with a high space-used percentage.

 

4) Change the database recovery model to simple.

------------

use master

go

alter database <database name> set recovery simple;

------------

 

Example: when the database name is ReportServer

------------

use master;

go

alter database ReportServer set recovery simple;

go

------------

* Running the above changes the recovery model and issues a checkpoint. Once the database is in the simple recovery model and a checkpoint runs, the transaction log records are truncated and become free space.

The recovery model can be changed while transactions are running against the target database and while users are connected; no restart is required.

If you run the dbcc sqlperf(logspace) command from step 3) again at this point, the file size will be the same, but the space-used percentage should have gone down.

 

5) Check the logical name and size of the transaction log file.

 

------------

Use <database name>

Go

select file_id, name [LogicalName], CAST(size as BIGINT)*8192/1024/1024 [LogSize(MB)], physical_name from sys.database_files where type_desc = 'log';

------------

 

The logical name column will show ReportServer_log.

 

6) Shrink the transaction log.

Specify the logical name of the transaction log file and the target size after shrinking, then run the shrink.

------------

Use <database name>;

Go

dbcc shrinkfile('<logical name>', <target size in MB>);

go

------------

 

Example: when the logical name is ReportServer_log and the target size is 1 GB

------------

use ReportServer;

go

dbcc shrinkfile('ReportServer_log', 1024);

go

------------

 

* The log may not shrink down to the specified target size even when there is free space in the transaction log.

This is because the transaction log file is internally divided into virtual log files; if a virtual log file near the end of the file is currently in use, the file can only be shrunk back to that point, so it may not shrink enough.

In that case, wait a while, run a checkpoint against the target database, and then run the shrink again.

 

Running a checkpoint

--------------------------

use <database name>;

go

checkpoint;

go

--------------------------

 

That completes the procedure.

================
Reference information
================

For more details on recovery models and transaction log management, see the following articles.

 

Backup under the simple recovery model

http://msdn.microsoft.com/ja-jp/library/ms191164(v=sql.100).aspx

 

Backup under the full recovery model

http://msdn.microsoft.com/ja-jp/library/ms190217(v=sql.100).aspx

 

Managing the transaction log

http://msdn.microsoft.com/ja-jp/library/ms345583(v=sql.100).aspx

 

Transaction log backups

http://msdn.microsoft.com/ja-jp/library/ms190440(v=sql.100).aspx

 

HowTo: How to reduce the size of a transaction log file (ldf) using Management Studio
https://blogs.msdn.microsoft.com/jpsql/2017/09/29/howto-management-studio-ldf/

 

If you have modified the predefined Reporting Services reports or created custom reports, it is important to back up the report server database files. See the following page as well.

 

Backing up custom Reporting Services reports
https://docs.microsoft.com/ja-jp/sccm/protect/understand/backup-and-recovery#back-up-custom-reporting-services-reports

 

 

