How to call a Microsoft Graph API from Azure Logic App

Microsoft Graph is the gateway to data and intelligence in Microsoft 365.

You can use the Microsoft Graph API to build apps for organizations and consumers that interact with the data of millions of users. With Microsoft Graph, you can connect to a wealth of resources, relationships, and intelligence, all through a single endpoint: https://graph.microsoft.com.

For additional information, refer to https://docs.microsoft.com/en-us/graph/overview.

Before calling the APIs from an Azure Logic App, some configuration must be done in Azure Active Directory.

Azure Active Directory Setup

Sign in to the Azure portal.

In the left-hand navigation pane, select the Azure Active Directory service.

Select “App registrations” (in preview at this moment) and then select “New registration”.

Enter an application name, select the account type you want to enable, and click on “Register”.

When finished, take note of “Application (client) Id” and “Directory (tenant) Id”.

To enable your Logic App to call the Microsoft Graph API, you have to grant the API permissions.

Click on “Add a permission”.

Select “Microsoft Graph”.

Now you have to choose the permission type, Delegated or Application.

In our case, we need to call the APIs listed below. The Microsoft Graph API documentation specifies the permissions required for each method.

List All Teams

https://docs.microsoft.com/en-us/graph/teams-list-all-teams?toc=./ref/toc.json&view=graph-rest-1.0

Clone a Team

https://docs.microsoft.com/en-us/graph/api/team-clone?view=graph-rest-1.0

Add Group Owner

https://docs.microsoft.com/en-us/graph/api/group-post-owners?view=graph-rest-1.0

Add Member

https://docs.microsoft.com/en-us/graph/api/group-post-members?view=graph-rest-1.0

So, we have granted the following permissions.

Permission                    Type
Directory.AccessAsUser.All    Delegated
Directory.ReadWrite.All       Delegated
Directory.ReadWrite.All       Application
Group.ReadWrite.All           Delegated
Group.ReadWrite.All           Application
User.Read                     Delegated
User.Read.All                 Delegated
User.Read.All                 Application
User.ReadWrite.All            Delegated
User.ReadWrite.All            Application

Once the necessary permissions have been added, you will see the following result. Now you have to grant the permissions: click on “Grant admin consent for…”

Click on “Yes” to confirm.

As a last step, you need to generate a secret key. Select “Certificates & secrets” and then “New client secret”.

Specify a description and choose the expiration time.

Once the secret has been generated, it is very important that you take note of the secret value. You won’t be able to retrieve it after you leave the page.

Conclusion

To successfully call an API from a Logic App, you need:

  • Application (client) Id
  • Directory (tenant) Id
  • Client secret key

Logic App Development

Once the Azure Active Directory setup is complete, you can use the HTTP connector to make calls to the Graph API.

My requirement was to clone a team, so I first had to look up the source MS Teams id and then proceed with the cloning operation.

The following picture shows the search operation.

This API returns a JSON payload inside which you can find the MS Teams id; you can then call the cloning operation.

The URI is the following:

@concat('https://graph.microsoft.com/v1.0/teams/', body('Get_Teams_Template_Id'), '/clone')
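If you want to prototype the same sequence outside the Logic App designer, it can be sketched in C#. The following is a minimal sketch, not production code: it assumes the client credentials flow with the ids and secret collected earlier, and the displayName and partsToClone values in the body are illustrative (check the clone documentation linked above for the exact payload).

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class GraphCloneSketch
{
    // Values collected during the Azure Active Directory setup (placeholders).
    const string TenantId = "<Directory (tenant) Id>";
    const string ClientId = "<Application (client) Id>";
    const string ClientSecret = "<Client secret value>";

    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // 1. Get an app-only token with the client credentials flow.
            var tokenRequest = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = ClientId,
                ["client_secret"] = ClientSecret,
                ["scope"] = "https://graph.microsoft.com/.default"
            });
            var tokenResponse = await http.PostAsync(
                "https://login.microsoftonline.com/" + TenantId + "/oauth2/v2.0/token", tokenRequest);
            var token = (string)JObject.Parse(await tokenResponse.Content.ReadAsStringAsync())["access_token"];
            http.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);

            // 2. Call the clone operation, as the Logic App HTTP action does.
            var body = new StringContent(
                "{ \"displayName\": \"Cloned team\", \"partsToClone\": \"apps,tabs,settings,channels,members\" }",
                Encoding.UTF8, "application/json");
            var response = await http.PostAsync("https://graph.microsoft.com/v1.0/teams/<teamId>/clone", body);
            Console.WriteLine(response.StatusCode); // the clone operation answers 202 Accepted
        }
    }
}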

Additional notes

Keep in mind that some Graph API operations are not synchronous and may complete with a delay. In particular, when working with application permissions, you may not see the right permissions in Teams until the permission synchronization has occurred (https://docs.microsoft.com/en-us/graph/api/group-post-owners?view=graph-rest-1.0).
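In particular, the clone call above answers 202 Accepted and, as far as I understand from the Graph documentation, returns a Location header pointing to an operation resource that can be polled until it completes. Continuing the C# sketch above (the status values are an assumption based on the documented operation states):

// Poll the asynchronous operation returned by the clone call above.
// "response" is the 202 HttpResponseMessage and "http" still carries the Authorization header.
var operationUrl = new Uri(new Uri("https://graph.microsoft.com"), response.Headers.Location);
var status = "inProgress";
while (status == "notStarted" || status == "inProgress")
{
    await Task.Delay(TimeSpan.FromSeconds(5));
    var operationJson = await http.GetStringAsync(operationUrl);
    status = (string)JObject.Parse(operationJson)["status"];
}
Console.WriteLine("Operation completed with status: " + status);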

Upload Flat File on Azure SQL Database using Azure Logic App

In this article, I’m going to describe how to upload on-premises flat files on Azure SQL Database using Azure Logic Apps.

Logic Apps provides integration and workflow capabilities in the cloud. It provides a visual designer to model and automate processes as a series of steps.

In this sample, I also use Azure Service Bus as the entry point to the cloud because the file size is less than 1024 KB. Consider using Azure Storage if you plan to upload files larger than that.
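As a rough illustration of that decision, a client could route by size before sending; a minimal C# sketch with a placeholder path:

using System.IO;

// Sketch: route around the 1024 KB limit mentioned above.
var file = new FileInfo(@"C:\data\order.txt"); // placeholder path
if (file.Length < 1024 * 1024)
{
    // Small enough: send the file content through the Service Bus queue
    // (see the client application code at the end of this article).
}
else
{
    // Too big for a single message: upload the file to Azure Storage instead
    // and send only a reference (for example the blob URI) through the queue.
}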

Initial Consideration

Before you start, consider installing Microsoft Azure Logic Apps Enterprise Integration Tools for Visual Studio 2015 (https://www.microsoft.com/en-us/download/details.aspx?id=53016) in order to facilitate the creation of schemas and maps.

In addition, you can install Azure Logic Apps Tools for Visual Studio (https://visualstudiogallery.msdn.microsoft.com/e25ad307-46cf-412e-8ba5-5b555d53d2d9) to develop Logic App integrations using Visual Studio.

At this link (https://azure.microsoft.com/en-us/tools/), you can find several tools for the development of solutions based on Microsoft Azure.

The following picture shows the data flow that I am going to describe.

[figure: data flow]

Following is the checklist:

  • Create Resource Group
  • Create Service Bus Namespace
  • Create Service Bus Queue
  • Create SQL Server
  • Create SQL Database
  • Create SQL Objects (Table, Schema and Procedures)
  • Configure API Connection to Service Bus
  • Configure API Connection to SQL Database
  • Create an Integration Account
  • Upload Schemas and Maps on Integration Account
  • Configure Transform Function (this step should become unnecessary in the near future)
  • Create and Test Logic App integration flow

To complete each step, you must first access the Microsoft Azure portal (https://portal.azure.com).

I think it is unnecessary to describe each step in detail, so I will just provide some screenshots.

Create Resource Group

[screenshot]

Create Service Bus Namespace

[screenshot]

Create Service Bus Queue

[screenshot]

Create SQL Server (Logical Server)

[screenshot]

Connect to SQL Server and create database objects

Before you can connect to the SQL Database virtual server, you have to configure the Azure firewall to allow the communication.

[screenshot]

Use SQL Server Management Studio to create database and tables

[screenshot]

Create database
/*
For additional information refer to https://msdn.microsoft.com/en-us/library/dn268335.aspx
*/
CREATE DATABASE [int-fftosql-sqldb]
(
    EDITION = 'basic',
    SERVICE_OBJECTIVE = 'basic'
);
GO
Create table and procedures

 

CREATE XML SCHEMA COLLECTION [dbo].[OrderXml] AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="Root">
    <xsd:complexType mixed="true">
      <xsd:complexContent mixed="true">
        <xsd:restriction base="xsd:anyType">
          <xsd:sequence>
            <xsd:element name="Order" maxOccurs="unbounded">
              <xsd:complexType mixed="true">
                <xsd:complexContent mixed="true">
                  <xsd:restriction base="xsd:anyType">
                    <xsd:sequence>
                      <xsd:element name="Lines" type="OrderDetailType" maxOccurs="unbounded" />
                    </xsd:sequence>
                    <xsd:attribute name="orderDate" type="xsd:string" use="required" />
                    <xsd:attribute name="orderNumber" type="xsd:string" use="required" />
                    <xsd:attribute name="channel" type="xsd:string" use="required" />
                  </xsd:restriction>
                </xsd:complexContent>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:restriction>
      </xsd:complexContent>
    </xsd:complexType>
  </xsd:element>
  <xsd:complexType name="OrderDetailType" mixed="true">
    <xsd:complexContent mixed="true">
      <xsd:restriction base="xsd:anyType">
        <xsd:choice maxOccurs="unbounded">
          <xsd:element name="productName" type="xsd:string" />
          <xsd:element name="quantity" type="xsd:string" />
          <xsd:element name="value" type="xsd:string" />
        </xsd:choice>
      </xsd:restriction>
    </xsd:complexContent>
  </xsd:complexType>
</xsd:schema>'
GO

CREATE TABLE [dbo].[OrderHeader]
(
    [orderId] [uniqueidentifier] NOT NULL,
    [orderNumber] [varchar](20) NOT NULL,
    [orderDate] [datetime] NOT NULL,
    [channel] [varchar](50) NULL,
    CONSTRAINT [PK_OrderHeader] PRIMARY KEY CLUSTERED
    (
        [orderId] ASC
    )
)
GO

CREATE TABLE [dbo].[OrderLine]
(
    [orderLineId] [int] IDENTITY(1,1) NOT NULL,
    [orderId] [uniqueidentifier] NOT NULL,
    [productName] [varchar](100) NOT NULL,
    [quantity] [int] NOT NULL,
    [value] [numeric](18, 3) NOT NULL,
    CONSTRAINT [PK_OrderLine] PRIMARY KEY CLUSTERED
    (
        [orderLineId] ASC
    )
)
GO

/*
TEST STATEMENT:
-------------------------------
EXECUTE OrderInsert N'<Root>
  <Order orderDate="20160101" orderNumber="000001" channel="Internet">
    <Lines>
      <productName>Portable Radio</productName>
      <quantity>10</quantity>
      <value>23.5</value>
    </Lines>
    <Lines>
      <productName>6 red wine bottles</productName>
      <quantity>20</quantity>
      <value>30.9</value>
    </Lines>
  </Order>
</Root>'
-------------------------------
*/
CREATE PROCEDURE [dbo].[OrderInsert]
    @orderStream VARCHAR(MAX)
AS
BEGIN

    -- Fix incoming XML: the Logic App adds a "?" char at the beginning of the string
    SET @orderStream = REPLACE(@orderStream, '?<', '<');

    DECLARE @orderXml XML(OrderXml) = CAST(@orderStream AS XML(OrderXml));
    DECLARE @orderId UNIQUEIDENTIFIER = NEWID();

    INSERT INTO OrderHeader
    (
        orderId,
        orderNumber,
        orderDate,
        channel
    )
    SELECT
        @orderId,
        orderNumber = OrderObject.Header.value('./@orderNumber', 'varchar(20)'),
        orderDate = OrderObject.Header.value('./@orderDate', 'datetime'),
        channel = OrderObject.Header.value('./@channel', 'varchar(50)')
    FROM
        @orderXml.nodes('/Root/Order') AS OrderObject(Header);

    INSERT INTO OrderLine
    (
        orderId,
        productName,
        quantity,
        value
    )
    SELECT
        @orderId,
        productName = OrderObject.Lines.value('./productName[1]', 'varchar(100)'),
        quantity = OrderObject.Lines.value('./quantity[1]', 'int'),
        value = OrderObject.Lines.value('./value[1]', 'numeric(18,3)')
    FROM
        @orderXml.nodes('/Root/Order/Lines') AS OrderObject(Lines);

END
GO

You should see something like this.

[screenshot]

Create API Connection to Service Bus queue

Before configuring the connection, you need to know the Service Bus connection string.

[screenshot]

[screenshot]

Create API Connection to SQL Server

[screenshot]

Create Integration Account

Integration accounts allow you to manage artifacts such as schemas, maps, certificates, partners and agreements in order to build cloud-based integration applications.

For additional information, refer to https://azure.microsoft.com/en-us/documentation/articles/app-service-logic-enterprise-integration-create-integration-account/.

[screenshot]

Configure Integration Account

[screenshot]

Add source schema (PurchaseOrderFF.xsd)

[screenshot]

<?xml version="1.0" encoding="utf-16"?>
<xs:schema xmlns="http://int1demoa1project.Order" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" targetNamespace="http://int1demoa1project.Order" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:annotation>
    <xs:appinfo>
      <schemaEditorExtension:schemaInfo namespaceAlias="b" extensionClass="Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension" standardName="Flat File" xmlns:schemaEditorExtension="http://schemas.microsoft.com/BizTalk/2003/SchemaEditorExtensions" />
      <b:schemaInfo standard="Flat File" codepage="1250" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" root_reference="Order" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="Order">
    <xs:annotation>
      <xs:appinfo>
        <b:recordInfo structure="delimited" child_delimiter_type="hex" child_delimiter="0xD 0xA" child_order="postfix" />
      </xs:appinfo>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Header">
          <xs:annotation>
            <xs:appinfo>
              <b:recordInfo tag_name="H0" structure="delimited" child_delimiter_type="char" child_delimiter=";" child_order="prefix" />
            </xs:appinfo>
          </xs:annotation>
          <xs:complexType>
            <xs:sequence>
              <xs:element name="orderDate" type="xs:string">
                <xs:annotation>
                  <xs:appinfo>
                    <b:fieldInfo justification="left" />
                  </xs:appinfo>
                </xs:annotation>
              </xs:element>
              <xs:element name="orderNumber" type="xs:string">
                <xs:annotation>
                  <xs:appinfo>
                    <b:fieldInfo justification="left" />
                  </xs:appinfo>
                </xs:annotation>
              </xs:element>
              <xs:element name="channel" type="xs:string">
                <xs:annotation>
                  <xs:appinfo>
                    <b:fieldInfo justification="left" />
                  </xs:appinfo>
                </xs:annotation>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
        <xs:element maxOccurs="unbounded" name="Detail">
          <xs:annotation>
            <xs:appinfo>
              <b:recordInfo tag_name="D0" structure="delimited" child_delimiter_type="char" child_delimiter=";" child_order="prefix" />
            </xs:appinfo>
          </xs:annotation>
          <xs:complexType>
            <xs:sequence>
              <xs:element name="productName" type="xs:string">
                <xs:annotation>
                  <xs:appinfo>
                    <b:fieldInfo justification="left" />
                  </xs:appinfo>
                </xs:annotation>
              </xs:element>
              <xs:element name="quantity" type="xs:string">
                <xs:annotation>
                  <xs:appinfo>
                    <b:fieldInfo justification="left" />
                  </xs:appinfo>
                </xs:annotation>
              </xs:element>
              <xs:element name="value" type="xs:string">
                <xs:annotation>
                  <xs:appinfo>
                    <b:fieldInfo justification="left" />
                  </xs:appinfo>
                </xs:annotation>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

Add SQL Server stored procedure schema (PurchaseOrderDB.xsd)

[screenshot]

<?xml version="1.0" encoding="utf-16"?>
<xsd:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003" attributeFormDefault="unqualified" elementFormDefault="unqualified" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:complexType name="OrderDetailType" mixed="true">
    <xsd:choice minOccurs="1" maxOccurs="unbounded">
      <xsd:element minOccurs="1" maxOccurs="1" name="productName" type="xsd:string" />
      <xsd:element minOccurs="1" maxOccurs="1" name="quantity" type="xsd:string" />
      <xsd:element minOccurs="1" maxOccurs="1" name="value" type="xsd:string" />
    </xsd:choice>
  </xsd:complexType>
  <xsd:element name="Root">
    <xsd:complexType mixed="true">
      <xsd:sequence>
        <xsd:element minOccurs="1" maxOccurs="unbounded" name="Order">
          <xsd:complexType mixed="true">
            <xsd:sequence>
              <xsd:element minOccurs="1" maxOccurs="unbounded" name="Lines" type="OrderDetailType" />
            </xsd:sequence>
            <xsd:attribute name="orderDate" type="xsd:string" use="required" />
            <xsd:attribute name="orderNumber" type="xsd:string" use="required" />
            <xsd:attribute name="channel" type="xsd:string" use="required" />
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

Add XML Transformation Map

[screenshot]

[screenshot]

<?xml version="1.0" encoding="utf-16"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:var="http://schemas.microsoft.com/BizTalk/2003/var" exclude-result-prefixes="msxsl var s0" version="1.0" xmlns:s0="http://int1demoa1project.Order">
  <xsl:output omit-xml-declaration="yes" encoding="windows-1250" version="1.0" indent="no" method="xml" />
  <xsl:template match="/">
    <xsl:apply-templates select="/s0:Order" />
  </xsl:template>
  <xsl:template match="/s0:Order">
    <Root>
      <Order>
        <xsl:attribute name="orderDate">
          <xsl:value-of select="Header/orderDate/text()" />
        </xsl:attribute>
        <xsl:attribute name="orderNumber">
          <xsl:value-of select="Header/orderNumber/text()" />
        </xsl:attribute>
        <xsl:attribute name="channel">
          <xsl:value-of select="Header/channel/text()" />
        </xsl:attribute>
        <xsl:for-each select="Detail">
          <Lines>
            <productName>
              <xsl:value-of select="productName/text()" />
            </productName>
            <quantity>
              <xsl:value-of select="quantity/text()" />
            </quantity>
            <value>
              <xsl:value-of select="value/text()" />
            </value>
            <xsl:value-of select="./text()" />
          </Lines>
        </xsl:for-each>
      </Order>
    </Root>
  </xsl:template>
</xsl:stylesheet>

 

Temporary operations

Create Storage Account

[screenshot]

Create App Service Plan

[screenshot]

Create Function App

[screenshot]

Add the transformation function

To add the transformation function for the Logic App, open the following link:

https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-logic-app-transform-function%2Fazuredeploy.json

[screenshot]

Create Logic App

[screenshot]

Associate Integration Account to the Logic App

[screenshot]

Create a blank solution

[screenshot]

Configure trigger

[screenshot]

[screenshot]

Modify the polling frequency if you need to process the file in near real time.

[screenshot]

Add a Flat File Decoding action to parse the flat file into an XML message based on the XSD schema.

[screenshot]

[screenshot]

Add the map to transform the flat-file message into the SQL Database XML schema type.

[screenshot]

Configure the call to the stored procedure.

[screenshot]

[screenshot]

Write and run client application

Now you are able to run the client application. To create a sample application, follow the steps below.

1 – Install WindowsAzure.ServiceBus NuGet Package

[screenshot]

2 – Get service bus Connection String

[screenshot]

3 – Write your code

using System.IO;
using Microsoft.ServiceBus.Messaging;

...
using (FileStream file = File.Open(<FilePath>, FileMode.Open, FileAccess.Read, FileShare.Delete))
{
    // Send
    file.Seek(0, SeekOrigin.Begin);

    var queueClient = QueueClient.CreateFromConnectionString(<ConnectionString>, <QueueName>);
    BrokeredMessage serviceBusMessage;

    serviceBusMessage = new BrokeredMessage(file);

    queueClient.Send(serviceBusMessage);
}

Upload Dataset on Azure SQL Database using Azure Event Hub and Azure Stream Analytics

There are several ways to upload data to SQL Database, including SSIS, BCP, and APIs.

For those who work with SQL Server, it is natural to use tools that historically belong to the SQL platform, so I wanted to write this article to present an alternative and to talk about different techniques that apply to different scenarios.

When designing an architecture, you may find yourself at a crossroads, and often you do not know what to expect once a decision is made, or how requirements will change over time.

Azure Event Hubs allows us to decouple who collects the data (the client) from who receives it, allowing you to change the kind of receivers or add additional targets.

For this reason, I consider it the entry point to Microsoft Azure.

The following picture shows the data flow that I am going to describe.

Following is the checklist:

  • Create Resource Group
  • Create Event Hubs Namespace
  • Create Event Hub
  • Create SQL Server
  • Create SQL Database
  • Create SQL Table
  • Create Stream Analytics Job
  • Configure Input
  • Configure Output
  • Configure Query
  • Start Stream Analytics Job

To complete each step, you must first access the Microsoft Azure portal (https://portal.azure.com).

I think it is unnecessary to describe each step in detail, so I will just provide some screenshots.

Create Resource Group

Create Event Hubs Namespace

Create Event Hub

Create SQL Server (logical server)

Connect to SQL Server and create database objects

Before you can connect to the SQL Database virtual server, you have to configure the Azure firewall to allow the communication.

Use SQL Server Management Studio to create database and tables

Create database

/*
For additional information refer to https://msdn.microsoft.com/en-us/library/dn268335.aspx
*/
CREATE DATABASE [int-dstosql-sqldb]
(
    EDITION = 'basic',
    SERVICE_OBJECTIVE = 'basic'
);
GO

Create table

CREATE TABLE OrdersByCity
(
    entryId INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
    city VARCHAR(50) NOT NULL,
    orderValue NUMERIC(10,2) NOT NULL,
    orderDate DATETIME NOT NULL
);

Create Stream Analytics Job

Create Stream Analytics Job Input

Create Stream Analytics Job output

Define Stream Analytics Job Query

Refer to the link https://msdn.microsoft.com/en-us/library/azure/dn834998.aspx for additional information about Azure Stream Analytics query language.

Save and start the Stream Analytics job.

Write and run client application

Now you are able to run the client application. To create a sample application, follow the steps below.

1 – Install WindowsAzure.ServiceBus NuGet Package

2 – Get Event Hubs Connection String

3 – Write your code

using System.Data;
using System.Text;
using System.Threading;
using Microsoft.ServiceBus.Messaging;
using Newtonsoft.Json;

DataTable dataTable = <Put here your code to retrieve data>;

var data = JsonConvert.SerializeObject(dataTable);

var eventHubClient = EventHubClient.CreateFromConnectionString(<Event Hub Conn String>, <Queue Name>);

eventHubClient.Send(new EventData(Encoding.UTF8.GetBytes(data)));

eventHubClient.Close();

Refer to https://azure.microsoft.com/en-us/documentation/articles/event-hubs-csharp-ephcs-getstarted/ for additional information.

BizTalk CRM Lookup functoid is now ready for CRM Online

In recent years I have often integrated Microsoft Dynamics CRM by using BizTalk Server, and as many know, BizTalk Server does not have a native connector for MS CRM.

For this reason, we must work hard to generate canonical schemas, to resolve lookup-type and Guid-type fields, and generally to develop maps.

To increase the productivity of my team, I have always tried to develop tools or helpers that automate long and repetitive development tasks, reducing the possibility of generating errors and standardizing approaches to problems.

In recent months, with the cooperation of Nino Crudele and Sandro Pereira, we published several functoids that simplify integration with Dynamics CRM, and in this article I want to describe the work that has been done to also support CRM Online.

When you integrate MS CRM using BizTalk Server, typically you have at least two problems:

  • Transcoding the Lookup and Guid type fields
  • Mapping the input information to the generic schema exposed by CRM

The BizTalk Mapper Extension Utility pack simplifies both tasks by providing specialized functoids.

Today we released a new version where you can find the CRM Lookup functoid with two new features:

  • The ability to define more than one configuration in the SSO. This allows you to integrate more than one CRM instance with a single BizTalk farm.
  • Support for CRM Online. For performance reasons, CRM Lookup performs lookup operations by accessing the CRM database in read-only mode. However, CRM Online does not expose its database, so we added the ability to do lookups using the native CRM web services, which are present in both the online and on-premises versions.

The functoid now takes five mandatory input parameters:

  1. SSO Application Name: the connection parameters are stored inside SSO. Here you have to specify the application name where you decided to store the settings. This allows you to have multiple CRM environments that you can integrate;
  2. CRM Guid Field Name: the name of the key field whose value you want to get;
  3. CRM Entity Name: the CRM entity on which the resolution will be performed;
  4. CRM Input Field Name: imagining you have a “code” as input and want to know the CRM physical key, in this attribute you should specify the name of the “code” field;
  5. CRM Input Value: the value from the source system to resolve.


The output of the functoid is a string, for example: 6erg5r-sdrf56-4dfgt5-6ty5r4-456trt
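Conceptually, the lookup that the functoid performs through the CRM web services can be sketched with the CRM SDK as follows. This is a simplified C# illustration of the idea, not the functoid's actual code; connecting to the organization service and reading the SSO parameters are omitted.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

static string ResolveCrmGuid(IOrganizationService service, string entityName,
    string guidFieldName, string inputFieldName, string inputValue)
{
    // Find the record of <CRM Entity Name> whose <CRM Input Field Name>
    // equals <CRM Input Value> and return its key field as a string.
    var query = new QueryExpression(entityName)
    {
        ColumnSet = new ColumnSet(guidFieldName)
    };
    query.Criteria.AddCondition(inputFieldName, ConditionOperator.Equal, inputValue);

    EntityCollection result = service.RetrieveMultiple(query);
    return result.Entities.Count > 0 ? result.Entities[0].Id.ToString() : string.Empty;
}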

Installation

Download and install BizTalk Mapper Extensions UtilityPack from https://btsmapextutilitypack.codeplex.com/.

Download the CRM SDK and register the following assemblies in the GAC:

  • Microsoft.Xrm.Sdk.dll (version 5.0.9910.15)

Create an SSO application by using the SSO Configuration Application MMC Snap-In available at https://www.microsoft.com/en-us/download/details.aspx?id=14524.

SQL Server Lookup

Create the following SSO configuration parameters

  • CrmConnectionType: “sql”
  • CrmDatabaseConnectionString: specify the CRM database connection string

Open the SQL Server management console, connect to the CRM database and execute the script in the CRMLookup.sql file in the project folder.

Local CRM web services lookup

Create the following SSO configuration parameters

CRM Online web services lookup

Create the following SSO configuration parameters

How-to create an Azure Service Bus namespace in a specific Resource Group

At this moment, the Azure portal does not allow the creation of a Service Bus namespace in a specific Resource Group. You can do it via REST or PowerShell but not from the portal.

In this short article, I will show how to do it without writing lines of code.

  • Create the resource group from the portal (for example “sampleResources”)
  • From the https://resources.azure.com app, navigate to “subscriptions”, “your subscription name”, “resourceGroups” and then choose the resource group previously created.
  • Expand “providers” and click on “Show all”
  • Expand “Microsoft.ServiceBus” and then select “namespaces”
  • On the right panel, click on the “Read/Write” button and then on the “Create” button
  • Specify the name of your Service Bus Namespace (for example “sampleServiceBus”)
  • Paste the following JSON code:

 

{
  "id": "/subscriptions/SUBSCRIPTION_GUID/resourceGroups/sampleResources/providers/Microsoft.ServiceBus/namespaces/sampleServiceBus",
  "name": "sampleServiceBus",
  "type": "Microsoft.ServiceBus/namespaces",
  "location": "West US",
  "kind": "Messaging",
  "tags": {},
  "properties": {
    "provisioningState": "Succeeded",
    "status": "Active",
    "createdAt": "2016-05-10T12:00:00.000Z",
    "serviceBusEndpoint": "https://sampleServiceBus.servicebus.windows.net:443/",
    "enabled": true,
    "critical": false,
    "updatedAt": "2016-05-10T12:00:00.000Z",
    "eventHubEnabled": true,
    "namespaceType": "Messaging",
    "messagingSku": 2
  }
}

You should see something like the following.

  • Click on the “PUT” button and that’s it!

Integrate SQL Server with Power BI

One of the most common scenarios for those who use Microsoft technologies and intend to publish their data to Power BI is using SQL Server as the data source.

Power BI provides several ways to access SQL Server information; some of them need simple configuration and others require development. Below we summarize the different possibilities you have available.

Power BI has a wide range of connectors to access several types of data sources; these include a connector for Azure SQL Database and one for Azure SQL Data Warehouse.

To access a SQL Server instance, whether it runs on Azure IaaS or on-premises, you can use Power BI gateways. There is a Power BI Enterprise Gateway for enterprise deployments and a Power BI Personal Gateway for personal use. You can download them directly from the Power BI portal and install them on a server near your SQL Server.

For additional information about Power BI gateways, refer to https://powerbi.microsoft.com/en-us/gateway/.

Both of these features support the direct query option, where SQL Server is queried when the user accesses the report or dashboard. Typically, however, you prefer to schedule the frequency of the update; you can schedule updates every few minutes.

If you want a dashboard that receives continuous updates, achieving the effect of a live dashboard, you need to do some simple development.

To update a Power BI dataset, you can use different Microsoft Azure functionalities:

Windows Azure Event Hubs + Windows Azure Stream Analytics

Power BI REST API

Both techniques need configuration on the Microsoft Azure side and a Windows service, residing in the same network as your SQL Server, that is responsible for reading the information and sending it to the cloud.

This approach allows you to make updates at a frequency close to one second.

Using Windows Azure Event Hubs and Microsoft Azure Stream Analytics is simpler, and the Power BI dataset is created automatically, but with this technique you can only insert new information.

The Power BI APIs are a bit more complex but allow several types of operations. For example, it is possible to empty the dataset before inserting new information.

However, you must manually create the dataset.
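As a hedged sketch of those two REST calls (the dataset id, table name, and row values below are placeholders, and the Azure AD token acquisition is omitted), the empty-then-insert sequence could look like this in C#:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static async Task RefreshPowerBiTableAsync(string accessToken, string datasetId)
{
    // Push-dataset endpoint: .../datasets/{id}/tables/{name}/rows
    var rowsUrl = "https://api.powerbi.com/v1.0/myorg/datasets/" + datasetId + "/tables/SampleTable/rows";
    using (var http = new HttpClient())
    {
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // Empty the dataset table before inserting new information.
        await http.DeleteAsync(rowsUrl);

        // Push the new rows (column names must match the dataset definition).
        var rows = "{ \"rows\": [ { \"city\": \"Rome\", \"orderValue\": 23.5 } ] }";
        await http.PostAsync(rowsUrl, new StringContent(rows, Encoding.UTF8, "application/json"));
    }
}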

You can download a Windows service sample at http://db2powerbi.codeplex.com/. This simple solution is able to use both techniques. It is enough to apply a configuration to the QueryConfig.json file and write a query file. The project includes a console (DbToPSvc.PowerBi.UI.exe) that allows you to create the dataset and retrieve its id (required in the configuration file).

Below is an example of the QueryConfig.json file.

As I said, before starting the service you need to apply some Microsoft Azure side configurations.

You can follow the article https://azure.microsoft.com/en-us/documentation/articles/stream-analytics-power-bi-dashboard/ to configure Windows Azure Event Hubs and a Stream Analytics Job.

You have to follow the article https://powerbi.microsoft.com/en-us/documentation/powerbi-developer-register-a-client-app/ before using the Power BI REST API.

Integrate Azure SQL Database and OneDrive with Logic Apps

This sample shows how to read information from an Azure SQL Database and send it to a OneDrive folder using a simple Logic App.

I will use a polling statement in order to intercept new data available on the SQL Database, transform it to a new information schema, and then write an XML-formatted file to a OneDrive folder.

In the following, I will demonstrate how to set up the sample environment, how to configure the connection to Azure SQL Database, and then how to create the integration flow using Azure Logic Apps.

Setup Environment

First of all, you need to create a resource group that will contain the SQL Database instance and all the other items needed for the Logic App.

To create a resource group, navigate to the new Azure portal, click on Resource groups and click on Add.

Specify a Resource group name and then click on Create.

Now you are able to create a SQL Database instance. To do that select SQL Databases from the portal and select Add.

You can choose to add the database to an existing server or create a new server. In this demonstration, I created a new server.

Once you have specified the server name, the user name and password you will use to connect to the database, the resource group previously created, and the database name, click on the Create button to complete the operation.

Now you need to open the Azure firewall in order to connect to the database using Microsoft SQL Server Management Studio. So click on the server name link.

Then click on the Show firewall settings link.

The interface will show you the current IP address of your machine. So click on Add client IP.

Save the setting.

After a few seconds, the firewall will be updated.

Now you are ready to connect to the SQL Database instance. Open the SQL Server Management Studio console and connect to the database using SQL Server Authentication. Provide the credentials you chose in the previous step.

Now you are connected to the Azure database.

Now we have to create the table that contains the information that the Logic App will intercept and send to OneDrive.

Click on New Query and run the following statements.

-- Source Table
CREATE TABLE [dbo].[SourceTable]
(
    [sourceTableId] [int] NOT NULL,
    [transactionStatus] [varchar](10) NOT NULL,
    [transactionId] [uniqueidentifier] NULL,
    [sampleContent] [varchar](100) NOT NULL,
    CONSTRAINT [PK_SourceTable] PRIMARY KEY CLUSTERED
    (
        [sourceTableId] ASC
    )
)
GO

-- Available Statement
CREATE PROCEDURE [dbo].[AvailableStmt]
AS
BEGIN
    SELECT
        COUNT(*)
    FROM
        SourceTable
    WHERE
        transactionStatus = 'To Process';
END
GO

-- Polling Statement
CREATE PROCEDURE [dbo].[PollingStmt]
AS
BEGIN
    DECLARE @transactionId UNIQUEIDENTIFIER = NEWID();

    UPDATE SourceTable
    SET
        transactionStatus = 'Processed',
        transactionId = @transactionId
    WHERE
        transactionStatus = 'To Process';

    SELECT
        sourceTableId,
        sampleContent
    FROM
        SourceTable WITH (NOLOCK)
    WHERE
        transactionId = @transactionId;
END
GO

Create the Microsoft SQL Connector

Once you have created all you need on SQL Server, you can define the database connection. From the portal home page, click on Marketplace.

Search for the Microsoft SQL Connector.

Select the connector and click on Create.

Specify a name, a Logic App Plan (or create a new one) and a resource group, and then click on Create.

Now you have to specify the server name, the user name and password, and the database name.

Scrolling down, you have to specify the most important things: the Available Statement and the Polling Statement. The first one is the statement that checks for new information in the database (in this sample, EXECUTE AvailableStmt). The second one gets the information and makes it available to the process (EXECUTE PollingStmt).

Create the Artifact

In this demonstration, we will transform the information coming from the SQL database to a different information schema. To do this, we will need a map.

You can define a map using Visual Studio and the Microsoft Azure BizTalk Services SDK available at http://www.microsoft.com/en-us/download/details.aspx?id=39087.

First, we need to download the schema that will act as the source schema.

To do that, navigate in the portal to the all resources list and select the connection to SQL Server previously created. Then select the Download Schema link.

After installing the Microsoft Azure BizTalk SDK, open Visual Studio and create a new project of type BizTalk Service Artifact.

Delete all unneeded objects created by the project template.

Then import the schema downloaded in the previous step.

Now, create a new schema that will act as the target schema. The following is just a simple schema.

Finally, you can create a new map to transform the source schema.

Define the mapping rules.

Close and save the project.

Now we will define a transformation service on Azure that will contain the map.

From the Azure Marketplace, search for BizTalk Transform Service.

Select it, specify a name and an App Service Plan (this must be the same used by the SQL Connector), and click on Create.

Navigate to the Transform Service and select Maps.

In the right panel, a new form with a list of maps will appear. Click on Add.

Select the “trfm” file created by the Visual Studio solution.

Create the Logic App

Now you are ready to create the Logic App.

Navigate to the Azure portal, select Logic Apps and click on Add.

Specify the Logic App name and the resource group, and click on Create.

Once the Logic App is created, click on Edit.

There are many Logic App templates. In this demonstration, I chose a blank Logic App by selecting “Create from Scratch”.

On the right panel select the “Microsoft SQL Connector” created previously.

Specify “Poll Data (XML)”.

Specify the polling interval. Depending on the Pricing Tier you can specify different measures as polling interval.

Confirm your selection and select “BizTalk Transform Service”.

Click on the arrow to start the configuration.

Then click on Transform.

Choose the output of the Microsoft SQL Connector as the map transform input.

Then specify (manually) the map name.

Finally, select the OneDrive connector.

You have to authorize access to your OneDrive subscription.

Then I chose “Upload a file”.

Specify a file path and the Transform output as the file content.

In this case, no content transfer encoding is needed, so specify “none”.

Once the configuration is completed, you should see the following integration flow.

Test the sample

To test the sample, it is enough to insert a row into the SQL database table. You can use the following statement to do that.

INSERT INTO SourceTable
(
    sourceTableId,
    transactionStatus,
    sampleContent
)
VALUES
(
    1,
    'To Process',
    'Content to transfer'
);

After a few seconds, you should see the file into your OneDrive subscription.

In the following picture, you can see the content of the result file.

Integration Monday – High available BizTalk infrastructure on Azure IaaS

I’m really pleased to announce that on 4 May, Massimo Crippa and I will present my work about BizTalk HA on Microsoft Azure at Integration Monday (http://www.eventbrite.com/e/integration-monday-2015-05-04-massimo-salvatore-tickets-16540234286?aff=estw).

I am very honored that Michael Stephenson gave me this opportunity.

I hope to meet his expectations.

It is a pleasure to work with Massimo, who was a colleague of mine at Microsys for 10 years and is now having a great experience working and living at Codit.

Testing BizTalk Server Infrastructure in HA on Azure IaaS

After writing the article Configuring BizTalk Server Infrastructure in HA on Azure IaaS, I wanted to conduct some tests to check communication between servers in the BizTalk farm.

In particular, I wanted to be certain that the infrastructure of Distributed Transaction Coordinator (MSDTC) responds correctly even in case of stress.

To generate as many transactions as possible, I created a flow that receives as input an XML file with a high number of elements, each of which triggers the execution of an orchestration instance that performs a transaction on SQL Server.

The following is the schema that I used:

 

 

To create multiple instances of a single orchestration, I configured debatching.

 

 

Below is a simple orchestration that transforms the incoming message into a call to a stored procedure that inserts the information into a table.

 

 

The first test was performed by submitting a file with 1,000 elements, which generated 1,000 orchestration instances with their distributed transactions.

 

<ns0:Transactions xmlns:ns0="http://MsDtcTest.MsDtcTestInputFile">
  <Transaction rowNumber="0001" rowDescription="Row number 0001" />
  <Transaction rowNumber="0002" rowDescription="Row number 0002" />
  <Transaction rowNumber="0003" rowDescription="Row number 0003" />
  <Transaction rowNumber="0004" rowDescription="Row number 0004" />

  <Transaction rowNumber="0998" rowDescription="Row number 0998" />
  <Transaction rowNumber="0999" rowDescription="Row number 0999" />
  <Transaction rowNumber="1000" rowDescription="Row number 1000" />
</ns0:Transactions>

The second test was performed by submitting a file with 10,000 elements, which generated 10,000 orchestration instances with their distributed transactions.

<ns0:Transactions xmlns:ns0="http://MsDtcTest.MsDtcTestInputFile">
  <Transaction rowNumber="0001" rowDescription="Row number 0001" />
  <Transaction rowNumber="0002" rowDescription="Row number 0002" />
  <Transaction rowNumber="0003" rowDescription="Row number 0003" />
  <Transaction rowNumber="0004" rowDescription="Row number 0004" />

  <Transaction rowNumber="9998" rowDescription="Row number 9998" />
  <Transaction rowNumber="9999" rowDescription="Row number 9999" />
</ns0:Transactions>
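Input files like the two above can be generated with a small helper; the following is a minimal C# sketch (the output path in the usage line is a placeholder):

using System.IO;
using System.Text;

static void WriteTestFile(string path, int rows)
{
    // Build an input file with the requested number of Transaction elements.
    var sb = new StringBuilder();
    sb.AppendLine("<ns0:Transactions xmlns:ns0=\"http://MsDtcTest.MsDtcTestInputFile\">");
    for (int i = 1; i <= rows; i++)
    {
        sb.AppendLine(string.Format(
            "  <Transaction rowNumber=\"{0:D4}\" rowDescription=\"Row number {0:D4}\" />", i));
    }
    sb.AppendLine("</ns0:Transactions>");
    File.WriteAllText(path, sb.ToString());
}

// Usage: WriteTestFile(@"C:\drop\MsDtcTest_1000.xml", 1000);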

 

To increase the number of active transactions, I configured the database connection pooling to 250 on the send port that invokes the stored procedure.

Also, I reduced the latency time to 250 milliseconds for both message and orchestration polling operations.

 

The host instance used by the flow was configured on both BizTalk nodes. Each BizTalk Server and SQL Server has 7 GB of RAM and 4 cores.

 

Test Results

With 1,000 simultaneous orchestration instances, I was able to generate an average of 28.2 distributed transactions per second and about 313 active distributed transactions.

 

Following are some additional details.

 

Counter                         Average   Minimum   Maximum   Duration (sec)
Active Transactions             313.1     136.0     458.0     29
Committed Transactions / sec    28.2      0.0       38.0      29

 

With 10,000 simultaneous orchestration instances, I was able to generate an average of 26.7 distributed transactions per second and about 462 active distributed transactions.

 

Following are some additional details.

 

Counter                         Average   Minimum   Maximum   Duration
Active Transactions             462.1     208.0     500.0     5.56
Committed Transactions / sec    26.7      0.0       75.0      5.56

 

10,000 active instances generate intensive dehydration activity, which causes quite high average execution times.

 

 

Conclusion

The configuration on the TCP layer that limited the number of ports RPC can open on the SQL Server did not create any side effects. We generated up to 500 active transactions; this limit was established by the configuration of the BizTalk server-side connection pooling.

The modest number of transactions is due to the structure of the flow, which does not allow high scalability but merely stresses the throttling system and the BizTalk infrastructure in general.

 

 

Configuring BizTalk Server infrastructure in High Availability on Microsoft Azure IaaS

 

The big difficulty in implementing a BizTalk infrastructure in high availability on Microsoft Azure has always concerned both SQL Server and the Distributed Transaction Coordinator (MSDTC).

In this article, I will show the results of some tests using a SAN-less solution that enables the creation of SQL Server in a failover cluster configuration.

In this context, we can identify two HA scenarios. The first involves a single cluster where both SQL Server and BizTalk Server services are running.

 

 

The second one consists of a cluster running SQL Server and SSO services, plus two additional servers for the BizTalk Server services.

 

 

The first scenario is undoubtedly easier to implement, especially because it requires very little configuration on the network layer. However, the second scenario is certainly the most interesting, and for that reason it is the one I will illustrate.

This document contains instructions for setting up a Windows Server 2012 R2 test lab by deploying five server computers running Windows Server 2012 R2 as Azure virtual machines, in order to implement high availability for a Microsoft BizTalk infrastructure in the cloud.

The Windows Server 2012 R2 configuration in the Azure test lab consists of a subnet in a cloud-only Azure Virtual Network that simulates a simplified intranet.

 

 

Because Microsoft Azure does not have a clustered storage option, I will use a third-party solution called DataKeeper Cluster Edition for the cluster storage.

To write this post, I collected the official documentation and several other posts, which I have referenced in the following.

 

Step 1: Create the Azure Virtual Network

First, you create the BtLabNet Azure Virtual Network that will host the BtLabSubnet subnet of the base configuration. Then, you create an Azure cloud service. Lastly, you configure a storage account that will contain the disks for your virtual machines and extra data disks.

 

Create a virtual network

  1. In the task bar of the Azure Management Portal, click NEW > NETWORK SERVICES > VIRTUAL NETWORK > CUSTOM CREATE.
  2. On the Virtual Network Details page, type BtLabNet or the name of your choice in NAME.
  3. In LOCATION, select the appropriate region.
  4. Click the Next arrow.
  5. On the DNS Servers and VPN Connectivity page, in DNS servers, type BTLABDC1 in Select or enter name, type 10.0.0.4 in IP address, and then click the Next arrow.
  6. On the Virtual Network Address Spaces page, in Subnets, click Subnet-1 and replace the name with BtLabSubnet.
  7. In the STARTING IP column for the BtLabSubnet subnet, select 10.0.0.4.
  8. In the CIDR (ADDRESS COUNT) column for the BtLabSubnet subnet, click /24 (256).
  9. On the Virtual Network Address Spaces page, click Add Subnet.
  10. Specify BtLabSubnet2 as subnet name
  11. In the STARTING IP column for the BtLabSubnet2 subnet, select 10.0.1.0.
  12. In the CIDR (ADDRESS COUNT) column for the BtLabSubnet2 subnet, click /24 (256).
  13. Click the Complete icon. Wait until the virtual network is created before continuing.

 

 

The second subnet is not essential, but it can be used to separate the network traffic related to disk replication from the internal communications of the cluster service.

 

Create a new cloud service for the BtLabNet virtual network

 

  1. In the task bar of the Azure Management Portal, click NEW > COMPUTE > CLOUD SERVICE > QUICK CREATE.
  2. In URL, type a unique name. For example, you could name it BtLabNet-UniqueSequence, in which UniqueSequence is an abbreviation of your organization.
  3. In REGION OR AFFINITY GROUP, select the same region as your BtLabNet virtual network.

 

 

  4. Click the CREATE CLOUD SERVICE complete icon. Wait until the cloud service is created before continuing.

 

Create a new storage account

  1. In the task bar of the Azure Management Portal, click NEW > DATA SERVICES > STORAGE > QUICK CREATE.
  2. In URL, type a unique name.
  3. In REGION OR AFFINITY GROUP, select the same region as your BtLabNet virtual network.

 

 

  4. Click the CREATE STORAGE ACCOUNT complete icon. Wait until the storage account is created before continuing.

 

Step 2: Configure Domain and Domain Controller

 

The following is the procedure to create the BTLABDC1 server.

BTLABDC1 provides the following services:

  • A domain controller for the btlab.pellitterisbiztalkblog.com Active Directory Domain Services (AD DS) domain.
  • A DNS server for the virtual machines of the BtLabNet virtual network.

 

BTLABDC1 configuration consists of the following:

  • Create an Azure Virtual Machine for BTLABDC1.
  • Add a data disk.
  • Configure BTLABDC1 as a domain controller and DNS server.
  • Create a user account in Active Directory.

 

Create an Azure Virtual Machine

 

  1. In the task bar of the Azure Management Portal, click NEW > COMPUTE > VIRTUAL MACHINE > FROM GALLERY.
  2. On the Choose an Image page, click Windows Server 2012 R2 Datacenter, and then click the Next arrow.
  3. On the Virtual machine configuration page:
  • In VIRTUAL MACHINE NAME, type BTLABDC1.
  • Select a size.
  • In NEW USER NAME, type the name of a local administrator account. You will use this account when initially connecting and as an alternate set of credentials when BTLABDC1 becomes a domain controller. Choose a name that is not easy to guess.
  • In NEW PASSWORD and CONFIRM, type a strong password for the local administrator account.
  • Record the local administrator account name and password in a secured location.
  4. Click the Next arrow.
  5. On the second Virtual machine configuration page:
  • In CLOUD SERVICE, select the cloud service that you created in “Step 1: Create the Azure Virtual Network.”
  • In REGION/AFFINITY GROUP/VIRTUAL NETWORK, select BtLabNet.
  • In STORAGE ACCOUNT, select the storage account that you created in “Step 1: Create the Azure Virtual Network.”
  • In AVAILABILITY SET specify Create an availability set and specify its name.
  • In ENDPOINTS, in the PowerShell row, delete the default port number in the PUBLIC PORT column.

 

 

  6. Click the Next arrow.
  7. On the third Virtual machine configuration page, click the Complete icon. Wait until Azure creates the virtual machine.
  8. On the virtual machines page of the Azure Management Portal, click Running in the STATUS column for the BTLABDC1 virtual machine.
  9. On the command bar, click Attach, and then select Attach Empty Disk. The Attach Empty Disk dialog box appears. The Storage Location and File Name boxes contain automatically generated names that do not need to be altered.
  10. In the Size box, type 20.
  11. Leave the Host Cache Preference set to the default value of NONE.
  12. Click the Complete icon to attach the empty data disk. Wait until the status is Running before continuing (a few minutes).
  13. In the task bar, click CONNECT.
  14. When prompted to open BTLABDC1.rdp, click Open.
  15. When prompted with a Remote Desktop Connection message box, click Connect.
  16. When prompted for credentials, use the following:
  • Name: BTLABDC1\[Local administrator account name] (from step 3)
  • Password: [Local administrator account password] (from step 3)
  17. When prompted with a Remote Desktop Connection message box referring to certificates, click Yes.

 

Add a data disk

Add an extra data disk as a new volume with the drive letter F:.

  1. In the left pane of Server Manager, click File and Storage Services, and then click Disks.
  2. In the contents pane, in the DISKS group, click disk 2 (with the Partition set to Unknown).
  3. Click Tasks, and then click New Volume.
  4. On the Before you begin page of the New Volume Wizard, click Next.
  5. On the Select the server and disk, click Disk 2, and then click Next. When prompted, click OK.
  6. On the Specify the size of the volume page, click Next.
  7. On the Assign to a drive letter or folder page, click Next.
  8. On the Select file system settings page, click Next.
  9. On the Confirm selections page, click Create.
  10. When complete, click Close.

 

Configure BTLABDC1 as a domain controller and DNS server

To configure BTLABDC1 as a domain controller and DNS server for the btlab.pellitterisbiztalkblog.com domain, run the following commands at a Windows PowerShell command prompt:

 

Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

Install-ADDSForest -DomainName btlab.pellitterisbiztalkblog.com -DatabasePath “F:\NTDS” -SysvolPath “F:\SYSVOL” -LogPath “F:\Logs”

 

Note that you will be prompted to supply a Directory Services Restore Mode (DSRM) password and to restart BTLABDC1.

 

High Availability Considerations

This document shows an installation of a test environment. In a production environment, to ensure high availability of services you need to add a second domain controller.

 

Prepare User and Groups

 

Sample Script

 

New-ADOrganizationalUnit -Name BizTalk -Path "dc=btlab,dc=pellitterisbiztalkblog,dc=com"

New-ADUser -Name DataKeeperService -SamAccountName "DataKeeperService" -UserPrincipalName "DataKeeperService@btlab.pellitterisbiztalkblog.com" -AccountPassword (Read-Host -AsSecureString "AccountPassword") -CannotChangePassword $true -Enabled $true -PasswordNeverExpires $true -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com"

New-ADUser -Name BizTalkService -SamAccountName "BizTalkService" -UserPrincipalName "BizTalkService@btlab.pellitterisbiztalkblog.com" -AccountPassword (Read-Host -AsSecureString "AccountPassword") -CannotChangePassword $true -Enabled $true -PasswordNeverExpires $true -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com"

New-ADUser -Name SqlService -SamAccountName "SqlService" -UserPrincipalName "SqlService@btlab.pellitterisbiztalkblog.com" -AccountPassword (Read-Host -AsSecureString "AccountPassword") -CannotChangePassword $true -Enabled $true -PasswordNeverExpires $true -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com"

New-ADGroup -Name "SSO Administrators" -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com" -GroupScope Global
New-ADGroup -Name "SSO Affiliate Administrators" -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com" -GroupScope Global
New-ADGroup -Name "BizTalk Administrators" -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com" -GroupScope Global
New-ADGroup -Name "BizTalk Operators" -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com" -GroupScope Global
New-ADGroup -Name "BizTalk Application Users" -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com" -GroupScope Global
New-ADGroup -Name "BizTalk Isolated Application Users" -Path "ou=BizTalk,dc=btlab,dc=pellitterisbiztalkblog,dc=com" -GroupScope Global

Add-ADGroupMember -Identity "SSO Administrators" -Members "BizTalkService"
Add-ADGroupMember -Identity "SSO Administrators" -Members "pellitteris"

Add-ADGroupMember -Identity "SSO Affiliate Administrators" -Members "BizTalkService"
Add-ADGroupMember -Identity "SSO Affiliate Administrators" -Members "pellitteris"

Add-ADGroupMember -Identity "BizTalk Administrators" -Members "BizTalkService"
Add-ADGroupMember -Identity "BizTalk Administrators" -Members "pellitteris"

Add-ADGroupMember -Identity "BizTalk Operators" -Members "BizTalkService"
Add-ADGroupMember -Identity "BizTalk Operators" -Members "pellitteris"

Add-ADGroupMember -Identity "BizTalk Application Users" -Members "BizTalkService"
Add-ADGroupMember -Identity "BizTalk Application Users" -Members "pellitteris"

Add-ADGroupMember -Identity "BizTalk Isolated Application Users" -Members "BizTalkService"
Add-ADGroupMember -Identity "BizTalk Isolated Application Users" -Members "pellitteris"

 

Step 3: Configure two SQL Server nodes

 

To create SQL Server nodes with two NICs, you have to use the PowerShell interface. To do that, download the latest version of the Azure SDK, available at http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/.

 

Run the following commands to create the first SQL Server node:

 

Add-AzureAccount

Set-AzureSubscription -SubscriptionName "Visual Studio Ultimate with MSDN" -CurrentStorageAccount "btlabstorage"

$image = Get-AzureVMImage -ImageName "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201412.01-en.us-127GB.vhd"

$vm = New-AzureVMConfig -Name "BTLABSQL1" -InstanceSize "Large" -Image $image.ImageName -AvailabilitySetName "btlabavail"

Add-AzureProvisioningConfig -VM $vm -Windows -AdminUserName "[USERNAME]" -Password "[PASSWORD]"

Set-AzureSubnet -SubnetNames "BtLabSubnet" -VM $vm

Set-AzureStaticVNetIP -IPAddress "10.0.0.5" -VM $vm

Add-AzureNetworkInterfaceConfig -Name "NIC2" -SubnetName "BtLabSubnet2" -StaticVNetIPAddress "10.0.1.5" -VM $vm

New-AzureVM -ServiceName "BtLabNet-pellitteris" -VNetName "BtLabNet" -VM $vm

 

Then follow the steps below:

 

  1. On the virtual machines page of the Azure Management Portal, click Running in the STATUS column for the BTLABSQL1 virtual machine.
  2. On the command bar, click Attach, and then select Attach Empty Disk. The Attach Empty Disk dialog box appears. The Storage Location and File Name boxes contain automatically generated names that do not need to be altered.
  3. In the Size box, type 10.
  4. Leave the Host Cache Preference set to the default value of NONE.
  5. Click the Complete icon to attach the empty data disk. Wait until the status is Running before continuing (a few minutes).
  6. In the task bar, click CONNECT.
  7. When prompted to open BTLABSQL1.rdp, click Open.
  8. When prompted with a Remote Desktop Connection message box, click Connect.
  9. When prompted for credentials, use the following:
  • Name: BTLABSQL1\[Local administrator account name]
  • Password: [Local administrator account password]
  10. When prompted with a Remote Desktop Connection message box referring to certificates, click Yes.

 

Add a data disk

 

Add an extra data disk as a new volume with the drive letter F:.

  1. In the left pane of Server Manager, click File and Storage Services, and then click Disks.
  2. In the contents pane, in the DISKS group, click disk 2 (with the Partition set to Unknown).
  3. Click Tasks, and then click New Volume.
  4. On the Before you begin page of the New Volume Wizard, click Next.
  5. On the Select the server and disk page, click Disk 2, and then click Next. When prompted, click OK.
  6. On the Specify the size of the volume page, click Next.
  7. On the Assign to a drive letter or folder page, click Next.
  8. On the Select file system settings page, click Next.
  9. On the Confirm selections page, click Create.
  10. When complete, click Close.

 

Configuring Network

Assign static IP addresses to the network interfaces, and either disable the firewall or configure it to open the SQL Server TCP port, the SSO TCP port, and the RPC ports (for both the SSO and MSDTC services).
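
If you prefer to script the firewall configuration, the following sketch opens the ports used in this lab (it assumes SQL Server on 1433, the RPC endpoint mapper on 135, and the 5000-5065 range that is reserved for dynamic RPC later in this article):

# Open the ports needed by SQL Server, Enterprise SSO and MSDTC (adjust to your environment)
New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName "RPC Endpoint Mapper" -Direction Inbound -Protocol TCP -LocalPort 135 -Action Allow
New-NetFirewallRule -DisplayName "Dynamic RPC (SSO/MSDTC)" -Direction Inbound -Protocol TCP -LocalPort "5000-5065" -Action Allow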

 

Join SQL Server node to Active Directory Domain

 

  1. In the left pane of Server Manager, click Local Server.
  2. In the contents pane, in the PROPERTIES group, click WORKGROUP and then click Change.
  3. Select Domain and specify btlab.pellitterisbiztalkblog.com.
  4. Specify the administrator credentials and confirm.
  5. Click OK on the Welcome to the domain dialog box and then restart the machine.
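
As a sketch, the same join can also be performed from an elevated PowerShell prompt; it prompts for domain credentials and restarts the machine:

# Join the machine to the domain and restart
Add-Computer -DomainName "btlab.pellitterisbiztalkblog.com" -Credential (Get-Credential) -Restart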

 

Configuring second node

 

Repeat the preceding steps for the second SQL Server node.

 

Step 4: Create Cluster

 

To install the Failover Clustering feature

 

  1. Start Server Manager.
  2. On the Manage menu, click Add Roles and Features.
  3. On the Before you begin page, click Next.
  4. On the Select installation type page, click Role-based or feature-based installation, and then click Next.
  5. On the Select destination server page, click the server where you want to install the feature, and then click Next.
  6. On the Select server roles page, click Next.
  7. On the Select features page, select the Failover Clustering check box.
  8. To install the failover cluster management tools, click Add Features, and then click Next.
  9. On the Confirm installation selections page, click Install.
  10. When the installation is completed, click Close.
  11. Repeat this procedure on every server that you want to add as a failover cluster node.
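
Equivalently, the feature can be installed with a single PowerShell command on each node:

# Install Failover Clustering together with the management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools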

 

To run cluster validation tests

 

  1. On a computer that has the Failover Cluster Management Tools installed from the Remote Server Administration Tools, or on a server where you installed the Failover Clustering feature, start Failover Cluster Manager. To do this on a server, start Server Manager, and then on the Tools menu, click Failover Cluster Manager.
  2. In the Failover Cluster Manager pane, under Management, click Validate Configuration.
  3. On the Before You Begin page, click Next.
  4. On the Select Servers or a Cluster page, in the Enter name box, enter the NetBIOS name or the fully qualified domain name of a server that you plan to add as a failover cluster node, and then click Add. Repeat this step for each server that you want to add. To add multiple servers at the same time, separate the names by a comma or by a semicolon. For example, enter the names in the format server1.contoso.com, server2.contoso.com. When you are finished, click Next.
  5. On the Testing Options page, click Run all tests (recommended), and then click Next.
  6. On the Confirmation page, click Next.

 

The Validating page displays the status of the running tests.

 

  7. The test should succeed with some warnings. Deselect Create the cluster now using the validated nodes, and then click Finish.
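
The same validation can be run from PowerShell (assuming the node names used in this lab):

# Run all cluster validation tests against both SQL Server nodes
Test-Cluster -Node BTLABSQL1, BTLABSQL2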

 

To create the failover cluster

 

  1. Start Server Manager.
  2. On the Tools menu, click Failover Cluster Manager.
  3. In the Failover Cluster Manager pane, under Management, click Create Cluster.

 

The Create Cluster Wizard opens.

 

  4. On the Before You Begin page, click Next.
  5. When the Select Servers page appears, in the Enter name box, enter the NetBIOS name or the fully qualified domain name of the first of the SQL Server nodes, and then click Add. When you are finished, click Next.

 

Note: We will start by creating a single-node cluster; this allows you to make the necessary adjustments to the cluster name resource before we add the second node to the cluster.

 

  6. Select No to skip the validation test, and then click Next.
  7. On the Access Point for Administering the Cluster page, specify the cluster NetBIOS name and then click Next.

 

 

 

  8. On the Confirmation page, review the settings and click Next to create the failover cluster.
  9. On the Summary page, confirm that the failover cluster was successfully created and click Finish.
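
For reference, the initial single-node cluster can also be created from PowerShell (a sketch; BTLABCLUSTER is the cluster NetBIOS name assumed throughout this lab):

# Create a single-node cluster without storage; the second node is added later
New-Cluster -Name BTLABCLUSTER -Node BTLABSQL1 -NoStorage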

 

Once the cluster creation wizard completes, you may notice that the cluster name resource fails to come online; this is normal.

If that happens, follow steps 10, 11 and 12 below; otherwise skip them.

 

 

  10. Right-click the IP Address resource and select Properties.
  11. On the IP Address properties page, select Static IP Address and specify an unused address at the higher end of the subnet range.

 

 

  12. Confirm the change and try to bring the resource online.

 

 

To add the second SQL Server node

 

  1. On a computer that has the Failover Cluster Management Tools installed from the Remote Server Administration Tools, or on a server where you installed the Failover Clustering feature, start Failover Cluster Manager. To do this on a server, start Server Manager, and then on the Tools menu, click Failover Cluster Manager.
  2. In the Failover Cluster Manager pane, click Add Node.

 

 

  3. On the Before You Begin page, click Next.
  4. When the Select Servers page appears, in the Enter name box, enter the NetBIOS name or the fully qualified domain name of the second SQL Server node, BTLABSQL2, and then click Add. When you are finished, click Next.
  5. On the Confirmation page, review the settings and click Next.
  6. On the Summary page, click Finish.
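
Or, as a one-liner from PowerShell on the existing cluster node:

# Add the second SQL Server node to the cluster without touching storage
Add-ClusterNode -Name BTLABSQL2 -NoStorage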

 

Create a network file share for witness configuration

 

Connect to the BTLABDC1 virtual machine.

  1. On the virtual machines page of the Azure Management Portal, click Running in the STATUS column for the BTLABDC1 virtual machine.
  2. In the task bar, click CONNECT.
  3. When prompted to open BTLABDC1.rdp, click Open.
  4. When prompted with a Remote Desktop Connection message box, click Connect.
  5. When prompted for credentials, use the following:
  • Name: BTLAB\[Local administrator account name] (from step 3 of the “Create an Azure Virtual Machine for BTLABDC1” procedure)
  • Password: [Local administrator account password] (from step 3 of the “Create an Azure Virtual Machine for BTLABDC1” procedure)
  6. When prompted by a Remote Desktop Connection message box referring to certificates, click Yes.
  7. In the left pane of Server Manager, click File and Storage Services, and then click Shares.
  8. In the contents pane, in the SHARES group, click Tasks, and then click New Share.
  9. In the New Share Wizard, leave the proposed profile and click Next.
  10. Select Type a custom path and browse to create a folder to share, then click Next.

 

 

  11. Specify a share name and click Next.
  12. Deselect all settings and click Next.
  13. On the permissions page, select Customize permissions.
  14. Select the principal BTLAB\BTLABCLUSTER$ and grant it Modify permission.

 

 

Confirm the permissions and complete the wizard.
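
A PowerShell sketch of the same share creation, assuming the folder C:\SqlWitness on BTLABDC1 and the share name SqlWitness used in the quorum configuration below:

# Create the witness folder and share it
New-Item -Path 'C:\SqlWitness' -ItemType Directory -Force | Out-Null
New-SmbShare -Name 'SqlWitness' -Path 'C:\SqlWitness' -ChangeAccess 'BTLAB\BTLABCLUSTER$'
# Grant the cluster computer account Modify on the NTFS folder as well
icacls 'C:\SqlWitness' /grant 'BTLAB\BTLABCLUSTER$:(OI)(CI)M'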

 

Configure cluster quorum

 

On a computer that has the Failover Cluster Management Tools installed from the Remote Server Administration Tools, or on a server where you installed the Failover Clustering feature, start Failover Cluster Manager. To do this on a server, start Server Manager, and then on the Tools menu, click Failover Cluster Manager.

 

  1. In Failover Cluster Manager, right-click the cluster node, select More Actions, and then Configure Cluster Quorum Settings.

 

 

  2. On the Before You Begin page, click Next.
  3. Select "Select the quorum witness" and click Next.
  4. Select Configure a file share witness and click Next.
  5. Specify the share path "\\BTLABDC1\SqlWitness" and click Next.
  6. On the Confirmation page, review all settings and click Next.
  7. On the Summary page, click Finish.
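
From PowerShell, the same quorum configuration is a one-liner (on Windows Server 2012 R2 the file share witness is configured with the node-and-file-share-majority option):

# Configure a file share witness for the cluster quorum
Set-ClusterQuorum -NodeAndFileShareMajority "\\BTLABDC1\SqlWitness"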

 

Step 5: Create Replicated Volume Cluster Resource with DataKeeper Cluster Edition

 

Before starting the SIOS setup, you must install the .NET Framework 3.5.

 

To install the .NET Framework 3.5 feature

 

  1. Start Server Manager.
  2. On the Manage menu, click Add Roles and Features.
  3. On the Before you begin page, click Next.
  4. On the Select installation type page, click Role-based or feature-based installation, and then click Next.
  5. On the Select destination server page, click the server where you want to install the feature, and then click Next.
  6. On the Select server roles page, click Next.
  7. On the Select features page, select the .NET Framework 3.5 Feature check box and then click Next.
  8. On the Confirm installation selections page, click Install.
  9. When the installation is completed, click Close.
  10. Repeat this procedure on every server that you want to add as a failover cluster node.
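
Or from PowerShell (note that the .NET Framework 3.5 payload may not be staged locally; if the installation fails, point -Source at the sources\sxs folder of the installation media):

# Install .NET Framework 3.5, required by the SIOS DataKeeper setup
Install-WindowsFeature -Name NET-Framework-Core
# If the payload is missing, use the installation media instead:
# Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs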

 

Other Prerequisites

 

  1. Create a domain account to use as the DataKeeper service account (DataKeeperService was created in the sample script above).
  2. Make this account a member of the local Administrators group on both SQL Server nodes, as shown below.
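
Since Add-LocalGroupMember is not available on Windows Server 2012 R2, the classic net command can be used on each SQL Server node:

# Add the DataKeeper service account to the local Administrators group
net localgroup Administrators 'BTLAB\DataKeeperService' /add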

 

To install DataKeeper cluster edition

In this lab I used a 14-day trial license, which is generally available for testing upon request.

 

 

On the welcome page, click Next.

 

 

On the License Agreement page, click Yes.

 

 

Leave the default selection and click Next.

 

 

Choose the destination and click Next

 

 

Confirm the changes

 

 

Select Domain or Server account and click Next

 

 

Specify the domain service user and click Next

 

 

 

Click on Install License File to choose the license provided with the demo setup, and then click Exit.

 

 

Click Finish to confirm the restart.

Repeat the procedure on the second node.

 

After the reboot, connect to the first SQL Server node, run the DataKeeper UI and complete the steps below.

 

 

Click on Connect to Server and specify the first node

 

 

Do the same for the second node.

 

 

Then create a job, specifying a name and a description.

 

 

If you have prepared the servers with two network interfaces, you can specify the second subnet to carry the disk replication traffic.

 

 

Choose the network interface for the second node

 

 

Then specify Synchronous and click Done

 

 

Confirm the creation of the disk cluster resource.

 

 

Then exit. From the Failover Cluster Manager you can see the disk resource.

 

 

Step 6: Install SQL Server 2014

 

The following screenshots show the settings I used to install the SQL Server 2014 cluster nodes.

 

 

 

 

 

 

 

 

 

 

 

 

Repeat the previous steps on the second node. Then follow the steps below.

 

 

 

 

 

 

 

 

 

 

 

 

 

To cluster Distributed Transaction Coordinator (MSDTC)

 

From Failover Cluster Manager, select the SQL Server role and click Add Resource in the right panel.

 

 

 

 

You can now bring the DTC cluster resource online.

From Administrative Tools, open Component Services to configure the MSDTC cluster resource security settings.

 

 

Step 7: Configure client access

 

Create an Internal Load Balancer

Once the cluster is configured, you will need to create the internal load balancer (ILB) which will be used for all client access. Clients that connect to SQL Server, Enterprise SSO and MSDTC will need to connect to the ILB instead of connecting directly to the cluster IP address.

The following commands can take a long time to complete, even a few hours.

 

$ip = "10.0.0.250" # IP address you want your Internal Load Balancer to use; this should be the same address as your SQL Server cluster IP address

$svc = "BtLabNet-pellitteris" # The name of your cloud service

$vmname1 = "BTLABSQL1" # The name of the VM that is your first cluster node

$vmname2 = "BTLABSQL2" # The name of the VM that is your second cluster node

$prot = "tcp"

$probeport = 59999

$ilbname = "BTLABSQLCLU" # This is the name your clients connect to; it should coincide with your SQL cluster name resource

$subnetname = "BtLabSubnet" # The name of the Azure subnet where you want the internal load balancer to live

 

Add-AzureAccount

 

Set-AzureSubscription -SubscriptionName "Visual Studio Ultimate with MSDN" -CurrentStorageAccount "btlabstorage"

 

# Add Internal Load Balancer to the service

Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ilbname -SubnetName $subnetname -ServiceName $svc -StaticVNetIPAddress $ip

 

# Add load balanced endpoint

# SQL Server Port

Get-AzureVM -ServiceName $svc -Name $vmname1 | Add-AzureEndpoint -Name "BTLABSQL1-SQL1433" -LBSetName "BtLabSql1433Ilb" -Protocol $prot -LocalPort 1433 -PublicPort 1433 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

Get-AzureVM -ServiceName $svc -Name $vmname2 | Add-AzureEndpoint -Name "BTLABSQL2-SQL1433" -LBSetName "BtLabSql1433Ilb" -Protocol $prot -LocalPort 1433 -PublicPort 1433 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

 

# RPC

 

Get-AzureVM -ServiceName $svc -Name $vmname1 | Add-AzureEndpoint -Name "BTLABSQL1-RPC135" -LBSetName "BtLabRpc135Ilb" -Protocol $prot -LocalPort 135 -PublicPort 135 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

Get-AzureVM -ServiceName $svc -Name $vmname2 | Add-AzureEndpoint -Name "BTLABSQL2-RPC135" -LBSetName "BtLabRpc135Ilb" -Protocol $prot -LocalPort 135 -PublicPort 135 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

 

# RPC Dynamic Port

Get-AzureVM -ServiceName $svc -Name $vmname1 | Add-AzureEndpoint -Name "BTLABSQL1-RPC5000" -LBSetName "BtLabRpc5000Ilb" -Protocol $prot -LocalPort 5000 -PublicPort 5000 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

Get-AzureVM -ServiceName $svc -Name $vmname2 | Add-AzureEndpoint -Name "BTLABSQL2-RPC5000" -LBSetName "BtLabRpc5000Ilb" -Protocol $prot -LocalPort 5000 -PublicPort 5000 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

Get-AzureVM -ServiceName $svc -Name $vmname1 | Add-AzureEndpoint -Name "BTLABSQL1-RPC5001" -LBSetName "BtLabRpc5001Ilb" -Protocol $prot -LocalPort 5001 -PublicPort 5001 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

Get-AzureVM -ServiceName $svc -Name $vmname2 | Add-AzureEndpoint -Name "BTLABSQL2-RPC5001" -LBSetName "BtLabRpc5001Ilb" -Protocol $prot -LocalPort 5001 -PublicPort 5001 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

 

[DO THE SAME UP TO TCP PORT 5065]
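
Rather than repeating the pair of commands by hand for every port, a loop like the following sketch can generate the intermediate endpoints (it assumes the same naming pattern; the boundary ports are shown explicitly in this article):

# Generate the load balanced endpoints for the remaining dynamic RPC ports
foreach ($port in 5002..5064) {
    foreach ($vmname in $vmname1, $vmname2) {
        Get-AzureVM -ServiceName $svc -Name $vmname |
            Add-AzureEndpoint -Name "$vmname-RPC$port" -LBSetName "BtLabRpc$($port)Ilb" -Protocol $prot -LocalPort $port -PublicPort $port -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname |
            Update-AzureVM
    }
}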

 

Get-AzureVM -ServiceName $svc -Name $vmname1 | Add-AzureEndpoint -Name "BTLABSQL1-RPC5065" -LBSetName "BtLabRpc5065Ilb" -Protocol $prot -LocalPort 5065 -PublicPort 5065 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

Get-AzureVM -ServiceName $svc -Name $vmname2 | Add-AzureEndpoint -Name "BTLABSQL2-RPC5065" -LBSetName "BtLabRpc5065Ilb" -Protocol $prot -LocalPort 5065 -PublicPort 5065 -ProbePort $probeport -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true -InternalLoadBalancerName $ilbname | Update-AzureVM

 

Update the Client Listener

 

Once the internal load balancer is created, you will need to run a PowerShell script on the first SQL node to update the SQL Server cluster IP address. The script references the cluster network name and the IP resource name. The pictures below show where to find both of these names in Failover Cluster Manager.

 


 


 

The script below should be run on one of the cluster nodes. Make sure to launch PowerShell ISE using Run as Administrator.

 


 

# This script should be run on the primary cluster node after the internal load balancer is created
# Define variables

$ClusterNetworkName = "Cluster Network 1" # the cluster network name

$IPResourceName = "SQL IP Address 1 (BTLABSQLCLU)" # the IP Address resource name

$CloudServiceIP = "10.0.0.250" # IP address of your Internal Load Balancer

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$CloudServiceIP";"ProbePort"="59999";"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

 

You will need to take the cluster IP resource offline once and then bring it back online for the new parameters to take effect.
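
For example, from PowerShell on the active node (a sketch; the SQL Server resource name shown is the default for a default instance and may differ in your cluster):

# Cycle the IP address resource so the new parameters take effect
Stop-ClusterResource $IPResourceName
Start-ClusterResource $IPResourceName
# Bring the dependent SQL Server resource back online (name assumed for a default instance)
Start-ClusterResource "SQL Server (MSSQLSERVER)"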

 

Configure MSDTC to Use a Specific Port

 

  1. Start the Component Services MMC, right-click My Computer and select Properties.
  2. Select the Default Protocols tab.
  3. Click the Properties button.
  4. Click Add.
  5. Type in the port range to which dynamic RPC allocation should be restricted. In this case, I will use ports 5000-5065.

 

 

  6. Click OK to return to the My Computer properties window, and then click OK again.

 

Note: The number of ports reserved for dynamic RPC is not arbitrary; it is due to the limit on the maximum number of configurable endpoints per ILB, which at the moment is 150. So you can have 65 dynamic RPC endpoints for the first node, 65 endpoints for the second node, plus the additional TCP port endpoints for SQL Server (1433) and RPC (135) on both servers.

 

  1. Start Regedt32.exe.
  2. Locate HKEY_LOCAL_MACHINE\Cluster\Resources\{Unique_DTC_ResourceID_GUID}\MSDTCPRIVATE\MSDTC.
  3. Right-click the MSDTC key, select New and then DWORD (32-bit) Value.
  4. Type ServerTcpPort for the key name.
  5. Right-click the ServerTcpPort key and select Modify.
  6. Select the Decimal radio button, type 5021 in the Value data box, and click OK.
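
The same registry change as a PowerShell sketch; substitute your actual DTC resource GUID in the path:

# Set the fixed TCP port for the clustered MSDTC instance
$dtcKey = 'HKLM:\Cluster\Resources\{Unique_DTC_ResourceID_GUID}\MSDTCPRIVATE\MSDTC'
New-ItemProperty -Path $dtcKey -Name 'ServerTcpPort' -PropertyType DWord -Value 5021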

 

Note: Leave TCP ports 5000 through 5020 free for SSO (for additional information see the article https://msdn.microsoft.com/en-us/library/aa559472.aspx).

 

Restart MSDTC by taking the MSDTC resource offline and then online in Failover Cluster Manager.

 

To confirm that MSDTC is using the correct port:

  1. Open an administrative command prompt and run netstat -ano to get the ports and the Process Identifiers (PIDs).
  2. Start Task Manager and select the Details tab.
  3. Find msdtc.exe and note its PID.
  4. Check that the PID listening on port 5021 in the netstat output belongs to MSDTC.
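
In practice, the check boils down to two commands in the administrative prompt:

netstat -ano | findstr :5021
tasklist /fi "imagename eq msdtc.exe"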

 

Apply the same configuration to the second node.

 

Step 8: Cluster the Master Secret Server

 

From this point on, the installation is fairly standard, so I will not go into detail; it is already well documented.

To cluster Enterprise SSO, follow the article at this link: https://msdn.microsoft.com/en-us/library/aa561823.aspx.

After the installation, you need to enable the "Use Network Name for computer name" setting in the cluster resource properties and make the resource dependent on the SQL Server cluster resource.

The following screenshots show some relevant steps of my configuration.

 

 

 

 

 

I completed the configuration of the second node after a failover, so that I was always working on the active node.

 

 

 

 

Step 9: Install BizTalk nodes

 

  1. In the task bar of the Azure Management Portal, click NEW > COMPUTE > VIRTUAL MACHINE > FROM GALLERY.
  2. On the Choose an Image page, click BizTalk Server 2013 R2 Enterprise, and then click the Next arrow.
  3. On the Virtual machine configuration page:
  • In VIRTUAL MACHINE NAME, type BTLABBT1.
  • Select a size.
  • In NEW USER NAME, type the name of a local administrator account.
  • In NEW PASSWORD and CONFIRM, type a strong password for the local administrator account.
  • Record the local administrator account name and password in a secured location.
  4. Click the Next arrow.
  5. On the second Virtual machine configuration page:
  • In CLOUD SERVICE, select the cloud service that you created in “Step 1: Create the Azure Virtual Network.”
  • In REGION/AFFINITY GROUP/VIRTUAL NETWORK, select BtLabNet.
  • In STORAGE ACCOUNT, select the storage account that you created in “Step 1: Create the Azure Virtual Network.”
  • In AVAILABILITY SET specify Create an availability set and specify its name.
  • In ENDPOINTS, in the PowerShell row, delete the default port number in the PUBLIC PORT column.

 

 

 

Join BizTalk Server to Active Directory Domain

 

  1. In the left pane of Server Manager, click Local Server.
  2. In the contents pane, in the PROPERTIES group, click WORKGROUP and then click Change.
  3. Select Domain and specify btlab.pellitterisbiztalkblog.com.
  4. Specify the administrator credentials and confirm.
  5. Click OK on the Welcome to the domain dialog box and then restart the machine.

 

Installing and configuring BizTalk Server

 

The following pictures show the relevant parts of my BizTalk configuration.

 

 

 

 

 

 

 

Configuring second node

 

Repeat this step for the second BizTalk server.

The following screenshots show some relevant steps of the configuration I applied.

 

 

 

 

 

 

That’s all folks.

 

 

Considerations

 

This experience opens up some discussion points. I ran a number of tests trying to stress the communication with SQL Server and MSDTC as much as possible. During my tests I did not encounter any particular problems; however, the limited number of available dynamic RPC ports leads me to think that this infrastructure does have a limit.

Scenario 1, presented at the beginning of this article, is definitely simpler and does not have this kind of problem; however, it limits performance because it concentrates all services on a single host with no ability to scale.