Quick Reference – Ionic CLI Commands

In this post we will see some common Ionic CLI commands for Ionic app development.

Install Ionic CLI globally

Install Ionic CLI globally with npm.

npm install -g @ionic/cli

Ionic help

Get Ionic help.

ionic --help
ionic <command> --help
ionic <command> <subcommand> --help

Create a new Ionic Project

ionic start <name> <template> [options]

This command will create a new Ionic project.

Examples

ionic start
ionic start myapp
ionic start myapp tabs --type=angular

Build Ionic Project

ionic build [options]

ionic build will generate web assets for the Ionic project.

Examples

ionic build
ionic build --prod
ionic build --watch

Add Capacitor platform(s) to Ionic project

ionic capacitor add <platform>

ionic capacitor add will add Android or iOS Capacitor platform to the Ionic project.

Examples

Add Android

ionic capacitor add android

Add iOS

ionic capacitor add ios

Ionic Build for Android or iOS

ionic capacitor build <platform> [options]

ionic capacitor build will do the following:

  • Perform ionic build
  • Copy web assets into the specified native platform.
  • Open the IDE for the native project (Xcode for iOS, Android Studio for Android).

The following command will prompt you to select a platform, Android or iOS.

ionic capacitor build

Build for Android

ionic capacitor build android

Build for iOS

ionic capacitor build ios

Open project in Android Studio

ionic capacitor open android

Copy web assets to native platforms

ionic capacitor copy [<platform>] [options]

ionic capacitor copy will do the following:

  • Perform ionic build, which compiles the web assets.
  • Copy web assets to Capacitor native platform(s).
  • This command will not open the IDE for the respective platform.

Open IDE for a given native platform

ionic capacitor open <platform> [options]

This command will open the IDE for the native project (Xcode for iOS, Android Studio for Android).

Run Ionic project for platform

ionic capacitor run <platform> [options]

ionic capacitor run will do the following:

  • Build the Ionic project.
  • Run the project on a device or emulator (tested here for Android).

Build and Copy project and then update native platform

ionic capacitor sync [<platform>] [options]

ionic capacitor sync will do the following:

  • Perform ionic build, which will generate web assets.
  • Copy web assets to Capacitor native platform(s).
  • Update Capacitor native platform(s) and dependencies.
  • Install any discovered Capacitor or Cordova plugins.

Examples:

For Android

ionic capacitor sync android

For iOS

ionic capacitor sync ios

Update Capacitor native platforms, install Capacitor/Cordova plugins

ionic capacitor update [<platform>] [options]

ionic capacitor update will do the following:

  • Update Capacitor native platform(s) and dependencies.
  • Install any discovered Capacitor or Cordova plugins.

Generate Icons for Capacitor project

cordova-res <platform> --skip-config --copy

Reference npm package

https://www.npmjs.com/package/cordova-res

Expected project structure

resources/
├── icon.png
└── splash.png
config.xml
  • resources/icon.(png|jpg) must be at least 1024×1024px
  • resources/splash.(png|jpg) must be at least 2732×2732px
  • config.xml is optional. If present, the generated images are registered accordingly

Conclusion

In this tutorial we covered the common Ionic CLI commands for Ionic project development.

Debug NodeJS TypeScript Using Visual Studio Code

Let’s see how we can configure a Node.js project with TypeScript in Visual Studio Code for debugging.

Following are the steps:

Create project folder

Create project folder from the command prompt.

mkdir counter

Switch to project.

cd counter

Initialize project with npm package.json

npm init -y

Install TypeScript

npm i typescript --save-dev

Create tsconfig.json file

npx tsc --init --sourceMap --rootDir src --outDir lib

Here our source folder is src and output folder is lib.
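With those flags, the generated tsconfig.json will contain (among many commented-out defaults) active options roughly like these; exact defaults vary with the TypeScript version:

```json
{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs",
    "rootDir": "src",
    "outDir": "lib",
    "sourceMap": true,
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true
  }
}
```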

Open the project in VS Code.

code .

You will see the project structure like this in VS Code.

and tsconfig.json file with specified options.

Add TypeScript source file

Create the source folder src; we specified src as the root folder in the configuration.

Then add a file index.ts with some code.
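Any small program will do. For example, here is a hypothetical counter we can later step through in the debugger:

```typescript
// src/index.ts - a small, hypothetical example to step through in the debugger
function countTo(limit: number): number[] {
  const values: number[] = [];
  for (let i = 1; i <= limit; i++) {
    values.push(i); // a good line for a breakpoint later
  }
  return values;
}

console.log(countTo(5));
```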

Create tasks.json file

Press Ctrl+Shift+P

Select Tasks: Configure Default Build Task

Then select tsc:watch – tsconfig.json

This will create a tasks.json file in the .vscode folder.
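The generated tasks.json should look roughly like this (exact contents depend on your VS Code version):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "typescript",
      "tsconfig": "tsconfig.json",
      "option": "watch",
      "problemMatcher": ["$tsc-watch"],
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "label": "tsc: watch - tsconfig.json"
    }
  ]
}
```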

Now build the project: press Ctrl+Shift+B

This will generate the lib folder, which is our output directory.

Because of the watch configuration, the build will keep running in the background and watch for file changes.

Create launch.json file

Click on debug > create a launch.json file.

Choose node.js from options.

This will create a launch.json file in the .vscode folder.
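For this project, a minimal Node.js launch configuration could look like the following; the program and outFiles paths assume the src/lib layout we set in tsconfig.json:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "skipFiles": ["<node_internals>/**"],
      "program": "${workspaceFolder}/lib/index.js",
      "outFiles": ["${workspaceFolder}/lib/**/*.js"]
    }
  ]
}
```

Because sourceMap is enabled, breakpoints set in the .ts sources map back to the compiled .js files in lib.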

Finally Debug

Now go to the source file you want to debug, and press F9 on the line where you want to add a breakpoint.

Hit F5, and execution will pause at the breakpoint.

Congrats! You can now debug your TypeScript in VS Code.

Shortcut keys for debugging in VS Code.

  • Continue (F5)
  • Step Over (F10)
  • Step Into (F11)
  • Step Out (Shift+F11)
  • Restart (Ctrl+Shift+F5)
  • Stop (Shift + F5)

Conclusion

In this tutorial we learned how to configure a Node.js TypeScript project in VS Code for easy debugging.

We also saw some of the shortcut keys for quick debugging.

Please share your feedback or any query you have.

Firebase | Testing Firebase Cloud Function Locally with Cloud Function Emulator

Firebase CLI provides a cloud function emulator which can be used to run the Firebase Cloud Function locally before deploying to production. Following are the types of functions which can be emulated.

  • HTTP functions
  • Callable functions
  • Background functions triggered from Authentication, Realtime Database, Cloud Firestore, and Pub/Sub.

We will test the simple HTTP function we created in the last post:

Creating a Cloud Function Using Firebase CLI and TypeScript.

Following are the steps we will follow:

Install or Update the Firebase CLI

The Firebase emulator is included in the Firebase CLI, so we need to install the CLI or update it to the latest version.

npm install -g firebase-tools

Setting Up Admin Credentials for Emulated Functions

If your cloud function interacts with Google APIs or Firebase APIs via the Firebase Admin SDK, then you may need to set up admin credentials.

Go to the Service Accounts in Google Cloud Platform, under your project.

Select the service account row with the name “App Engine default service account”, and click the Actions button at the right end.

Select Manage Keys.

Click on Add Key drop down button.

Select Create new key.

This will open a modal window.

Choose JSON from the available key types. This will generate the key and download it as a JSON file with the key details.

The JSON key file will look like the following (values omitted here).

{
  "type": "",
  "project_id": "",
  "private_key_id": "",
  "private_key": "",
  "client_email": "",
  "client_id": "",
  "auth_uri": "",
  "token_uri": "",
  "auth_provider_x509_cert_url": "",
  "client_x509_cert_url": ""
}

Set Google Application Credentials to the JSON file path.

This is required to authenticate to Google API or Firebase API from Firebase Admin SDK.

Execute the following from the project’s root directory.

set GOOGLE_APPLICATION_CREDENTIALS=path\to\key.json
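The set syntax above is for the Windows command prompt. On macOS or Linux the equivalent is export (the path below is a placeholder):

```shell
# Placeholder path - point this at the key file you downloaded
export GOOGLE_APPLICATION_CREDENTIALS=path/to/key.json
```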

Run the Firebase Emulator

Now we are ready to start the emulator.

Start the emulator with the following command.

firebase emulators:start

The emulator will start and print the function URL.

Call Function URL

Click on the provided function URL, and you will receive the response from your function.

Great! We are able to run the function locally.

Firebase Emulator UI

You will also be presented with Emulator UI URL.

Open the emulator UI URL in the browser.

In the Emulator screen you will be able to see all the different types of emulators.

You can navigate to Logs to see any logging from the function.

Modify and Test Local Cloud Function

Now let’s update the function.

Change the function response message.

Build the function again by executing the following from the functions folder.

npm run build

Reload the function URL provided by the running emulator and you will see the updated response from the local function.

When you are happy, you can deploy the function to Firebase as explained in this post.

Firebase | Creating a Cloud Function Using Firebase CLI and TypeScript

Conclusion

The Firebase emulator, which is included in the Firebase CLI, lets us test the function locally before deploying to the cloud.

In this post we started with installing the Firebase CLI and then set up the Google application credentials, which require a JSON file with a private key generated in the Google Cloud Platform. Then we started the Firebase emulator, browsed the local function URL, and received a response from the local function.

Hope you liked this, please share your feedback or any query.

Firebase | Creating a Cloud Function Using Firebase CLI and TypeScript

Overview

Firebase Cloud Functions let you run a piece of code in the cloud without managing servers. This is quite helpful when you just want to manage your code and not worry about the servers executing it. This pattern is also known as serverless architecture.

These cloud functions can be triggered by multiple events, such as an HTTP request, a scheduled time, or changes in the Realtime Database, and then perform the intended job.

In this post we will see how we can create a simple Firebase cloud function using Firebase CLI and then deploy it to Firebase. We will use TypeScript to write the function.

Steps for creating the Cloud Function

Let’s create and deploy a Firebase cloud function.

Installing Firebase CLI

Firebase cloud functions are created and deployed using the Firebase CLI, so let’s install the Firebase CLI globally. Type the following command at the command prompt.

npm install -g firebase-tools

Login to Firebase CLI

We need to authenticate to Firebase to create and deploy cloud functions. Authenticate to Firebase using the following command.

firebase login

This will open a browser window to authenticate.

If you are logged in with another account, you can log out first using the following.

firebase logout

Choose the account to login.

Click Allow to grant permissions to the Firebase CLI.

On allowing, a success screen will be presented.

And a message like the following will be logged in the command prompt.

Creating a Firebase Cloud Function Project

Create a directory for the project.

mkdir firebase-function-demo

Change to project directory.

cd firebase-function-demo

Open the directory with Visual Studio code or any other editor.

code .

Initialize Firebase functions project

firebase init functions

Accept the confirmation.

Choose the appropriate option for you. In my case I chose “Use an existing project”, because I have already created the Firebase project.

Next I chose the project from the presented list.

For this example we are going to use TypeScript, so choose TypeScript.

Choose Y if you want to use ESLint.

Select Y to install the dependencies.

Your project structure should appear like this so far.

Creating a Cloud Function

We will use the sample helloworld cloud function created by the Firebase CLI for this example.

Open index.ts.

index.ts contains a commented-out sample function.

Uncomment the code, and save the file.

The file contains one sample HTTP request based cloud function, which will log the following.

"Hello logs!", {structuredData: true}

and return following response.

"Hello from Firebase!"

More functions can be added in the same file; for example, we can add a scheduled function like the following.

export const scheduledJob = functions.pubsub.schedule("every 5 minutes")
    .onRun(() => {
      console.log("This will be run every 5 minutes.");
      return null;
    });

This scheduled function will run every five minutes.

A cron expression can also be used, like the following, to define the trigger time for a scheduled function.

export const scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')
    .timeZone('America/New_York') // users can choose a timezone - default is America/Los_Angeles
    .onRun((context) => {
      console.log('This will be run every day at 11:05 AM Eastern!');
      return null;
    });

Deploying cloud function

Let’s deploy the sample code in index.ts to the cloud.

Execute the following command on command prompt.

firebase deploy

On successful deployment, the function URL will be provided.

Paste the URL in the browser.

and you will get a response like the one below from the cloud function.

Great! Our cloud function is successfully deployed and responding to HTTP requests.

Navigate to the Firebase project and select Functions, and you should be able to see your cloud function.

You can switch to Logs tab to see the logs of the cloud function.

Congratulations! We have just deployed our first simple cloud function.

Conclusion

In this post we learned how to create a Firebase Cloud Function and deploy it to Firebase. We started with how to install the Firebase CLI globally and how to authenticate to it. Then we created a template project for the cloud function using the Firebase CLI, and deployed and tested the cloud function.

Hope you like this!

Please leave your feedback or any query you have.

Authenticating to ASP.NET Core Web Application Using Azure AD B2C

Azure Active Directory B2C enables users to authenticate themselves to applications using local accounts or social accounts such as Google, Facebook or LinkedIn.

Applications leveraging Azure AD B2C don’t have to maintain the authentication mechanism; all of that complexity is handled by Azure AD B2C.

In this post we will see how to configure Azure AD B2C from scratch for ASP.NET Core applications.

We will go through the following topics:

  • Creating a new Azure AD B2C Tenant
  • App registration in Azure AD B2C Tenant
  • Configuring User Flows
  • Creating ASP.NET Core Web application Using Visual Studio 2019 and Configuring Azure AD B2C
  • Creating ASP.NET Core Web application Using .NET CLI and Configuring Azure AD B2C

To configure the Azure AD B2C for the applications first we need to create the Azure AD B2C tenant. Let’s create a new Azure AD B2C tenant.

Creating a New Azure AD B2C Tenant

To create a new Azure AD B2C tenant, login to your Azure portal and click on Create a resource.

In Create a resource page search for Azure Active Directory B2C in the search box, and select the first option available as shown below.

You will be navigated to Azure Active Directory B2C resource page.

Click on Create to create the Azure AD B2C tenant. This will create a separate tenant from your Azure AD tenant.

In the next screen you will be presented with following two options:

  • Create a new Azure AD B2C Tenant.
  • Link an existing Azure AD B2C Tenant to my Azure subscription.

Let’s choose Create a new Azure AD B2C Tenant to create a new tenant.

Next enter the following details for the new tenant.

  • Organization name: provide an organization name.
  • Initial domain name: provide a unique initial subdomain name.
    • Note down this initial domain name, we will need this to update in ASP.NET appsettings.json.
  • Country/Region: Select the country/region closest to your customers.
  • Subscription: Select subscription for your Azure AD B2C tenant.
    • Resource group: Choose resource group to contain Azure AD B2C tenant.

Then click on Review + Create

Validation screen will be presented.

Click on Create

You may encounter the following error.

The subscription is not registered to use namespace ‘Microsoft.AzureActiveDirectory’. See https://aka.ms/rps-not-found for how to register subscriptions. 

If you get this error, check the following post on how to fix it, and then continue from here.

The subscription is not registered to use namespace ‘Microsoft.AzureActiveDirectory’

After fixing the error, try creating the Azure AD B2C tenant again; it should be successful this time. You should be able to see the status under notifications.

Go to your newly created Azure AD B2C Demo tenant.

App Registration

Any application which needs to authenticate using Azure AD B2C requires an app registration in Azure.

Let’s register an app for our ASP.NET core application.

Click on New registration button inside the App registrations under our newly created Azure AD B2C tenant.

Fill in the details for the app registration

  • Display name: Enter any display name to easily identify the app.
  • Supported account types: Choose “Accounts in any identity provider or organizational directory (for authenticating users with user flows)”.
  • Redirect URI: Select Web in the dropdown and enter the following URL in the text box:
https://localhost:5001/signin-oidc

Note that we have provided port 5001; you may need to change this according to your application’s port. We will see later in this post where the port can be configured in the ASP.NET Core application.

Register the application.

In a few seconds the app will be registered and you will be redirected to your app.

Note down the Application (client) ID, we will need this to update in ASP.NET core web application’s appsettings.json file.

Authentication options

Next we need to configure the authentication options.

Select the Authentication blade from left menu in registered app and check the two authentication options as shown below and click on Save.

We have successfully configured the app now.

Next we need to configure the User Flows for Sign up/ Sign in forms.

Configuring User Flows

User flows define the sign-up/sign-in process for the applications, but are managed in Azure AD B2C. We can configure what data will be collected from the user and passed to the application.

To configure the User Flows, go back to the Azure AD B2C tenant and click on User Flows in left navigation.

Then click on New user flow

Selecting User Flow Type

In the new User Flow we get multiple options to configure, such as Sign up, Sign in, Profile editing.

Select Sign up and sign in for this tutorial; this will enable the sign-up and sign-in experience for our web application.

On selecting Sign up and sign in, another two options will be enabled. Keep the defaults and click on Create.

In the next screen, provide a name for the User Flow and select Email signup.

We can configure multifactor authentication as well, but leave the default for now.

Scroll down a bit and select Show more…

On clicking Show more, multiple fields will be presented to choose from, in two columns: Collect attribute and Return claim.

Collect attribute: what data will be collected from the user.

Return claim: what claims will be passed back to the application.

Select following options for our application and then click OK.

Note that we have selected Display Name as a collect attribute; during sign-up the user will be asked for a display name.

Click on Create.

User Flow will be created and will be available under User flows.

Creating ASP.NET Core Web Application Using Visual Studio 2019 and Configuring Azure AD B2C

Let’s create an ASP.NET core application, and then we will configure it for Azure AD B2C authentication.

Open Visual Studio and select Create a new project.

Select ASP.NET Core Web App (Model-View-Controller) for this example and click on Next.

Provide the project name and location, and select Next.

Select Microsoft Identity Platform as the Authentication type and click on Create.

You may be presented with the following screen to install required components (the dotnet msidentity tool); select Finish to install.

ASP.NET application will be created.

Updating Azure AD B2C configurations

Open appsettings.json and you will see the following configuration.

{
/*
The following identity settings need to be configured
before the project can be successfully executed.
For more info see https://aka.ms/dotnet-template-ms-identity-platform 
*/
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "qualified.domain.name",
    "TenantId": "22222222-2222-2222-2222-222222222222",
    "ClientId": "11111111-1111-1111-11111111111111111",
    "CallbackPath": "/signin-oidc"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}

Replace it with the following, substituting {Initial Domain Name} and {Application (Client) Id} with the values we noted earlier.

{
/*
The following identity settings need to be configured
before the project can be successfully executed.
For more info see https://aka.ms/dotnet-template-ms-identity-platform 
*/
  "AzureAd": {
    "Instance": "https://{Initial Domain Name}.b2clogin.com/",
    "ClientId": "{Application (Client) Id}",
    "Domain": "{Initial Domain name}.onmicrosoft.com",
    "SignedOutCallbackPath": "/signout/B2C_1_susi",
    "SignUpSignInPolicyId": "b2c_1_susi",
    "ResetPasswordPolicyId": "b2c_1_reset",
    "EditProfilePolicyId": "b2c_1_edit_profile",
    "CallbackPath": "/signin-oidc"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}

Updating application Port no.

Remember that earlier we specified port 5001 in Azure for the callback URL?

Let’s update this in our ASP.NET project.

Open Properties > launchSettings.json

Update the sslPort to 5001.

 "sslPort": 5001

If you want to keep a different port, make sure you update it in the Azure AD B2C app registration too.

Testing Azure AD B2C

Press F5 and you will be redirected to {orgname}.b2clogin.com for authentication. If not, click the Sign in link in the web portal.

Click on Sign up, and you will be asked for sign-up details along with the display name we configured earlier in the User Flow.

Creating ASP.NET Core Web application Using .NET CLI and Configuring Azure AD B2C

Open command prompt.

Create a directory for the web application.

mkdir AzureADB2CCodeDemo

Switch to the directory just created.

cd AzureADB2CCodeDemo

Create a new .NET MVC application project with the Azure AD B2C option, i.e. IndividualB2C.

dotnet new mvc --auth IndividualB2C

Open the project with Visual Studio Code.

code .

Open appsettings.json; you will see options similar to those we saw previously with Visual Studio 2019.

{
  "AzureAdB2C": {
    "Instance": "https://login.microsoftonline.com/tfp/",
    "ClientId": "11111111-1111-1111-11111111111111111",
    "Domain": "qualified.domain.name",
    "SignedOutCallbackPath": "/signout/B2C_1_susi",
    "SignUpSignInPolicyId": "b2c_1_susi",
    "ResetPasswordPolicyId": "b2c_1_reset",
    "EditProfilePolicyId": "b2c_1_edit_profile",
    "CallbackPath": "/signin-oidc"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}

Replace it with the following, substituting {Initial Domain Name} and {Application (Client) Id} with the values we noted earlier.

{
/*
The following identity settings need to be configured
before the project can be successfully executed.
For more info see https://aka.ms/dotnet-template-ms-identity-platform 
*/
  "AzureAdB2C": {
    "Instance": "https://{Initial Domain Name}.b2clogin.com/",
    "ClientId": "{Application (Client) Id}",
    "Domain": "{Initial Domain name}.onmicrosoft.com",
    "SignedOutCallbackPath": "/signout/B2C_1_susi",
    "SignUpSignInPolicyId": "b2c_1_susi",
    "ResetPasswordPolicyId": "b2c_1_reset",
    "EditProfilePolicyId": "b2c_1_edit_profile",
    "CallbackPath": "/signin-oidc"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}

Updating application Port no.

Let’s update the port for project generated from .NET CLI.

Open Properties > launchSettings.json

Make sure port 5001 is used. For dotnet run, the project profile’s applicationUrl should use https://localhost:5001, as shown below; if you run under IIS Express, update the sslPort instead.

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:51430",
      "sslPort": 44315
    }
  },
  "profiles": {
    "AzureADB2CCodeDemo": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Run project

dotnet run

And you should see the following from b2clogin.com.

Conclusion

In this post we learned how to configure Azure AD B2C for ASP.NET Core applications to allow users to authenticate themselves leveraging Azure AD B2C capabilities. Azure AD B2C lets users use a local account or social accounts such as Google, Facebook, or LinkedIn for authentication.

We started by setting up the new Azure AD B2C tenant and registering an app for the ASP.NET Core application. Then we configured a User Flow for sign-up and sign-in. Next we configured the Azure AD B2C settings in ASP.NET Core applications created using both the Visual Studio 2019 UI and the .NET CLI.

Please leave your feedback or any queries you have.

Thanks!

The subscription is not registered to use namespace ‘Microsoft.AzureActiveDirectory’

Did you encounter the following error while creating a new tenant for Azure AD B2C?

The subscription is not registered to use namespace ‘Microsoft.AzureActiveDirectory’. See https://aka.ms/rps-not-found for how to register subscriptions. 

The error root cause

The reason for the error is that the Microsoft.AzureActiveDirectory namespace provider is not registered by default for the subscription.

Similarly, there are many namespace providers which we don’t need in most cases, so they are not registered by default.

Let’s fix the error

We will fix the error using the following two approaches:

Enabling the Microsoft.AzureActiveDirectory namespace provider from Azure Portal UI

In the Azure Portal, navigate to your subscription and select the Resource Providers as shown below.

In the Resource Providers list you will see a lot of namespace providers, such as the following, and many of them will be unregistered.

In the search box search for Microsoft.AzureActiveDirectory.

Then select Microsoft.AzureActiveDirectory from the filtered option and click on Register.

It will start registering the namespace provider; registration takes some time.

Try refreshing to see the Registered status.

Try creating the Azure AD B2C tenant again; the error should be fixed now.

Enabling the Microsoft.AzureActiveDirectory namespace provider using Azure CLI

In this section we will see how we can enable the required namespace provider using Azure CLI.

If you have not already installed, then install Azure CLI from here.

Once the Azure CLI is installed, you need to log in to Azure.

az login

This will open a new browser window to authenticate the user.

We can list all subscriptions using the following; we will need the right subscription in the next step.

az account list

Now set the subscription you are interested in, “Pay-As-You-Go” in this case.

az account set --subscription "Pay-As-You-Go"

Then finally register the required namespace provider.

az provider register --namespace Microsoft.AzureActiveDirectory

Once this is complete, the error will be fixed and you should be able to create Azure AD B2C tenant.

Summary

In this post we learned how to fix the following error.

The subscription is not registered to use namespace ‘Microsoft.AzureActiveDirectory’

We took two approaches to fix the error, from Azure portal UI and Azure CLI.

Ionic 5 | Performing Firebase CRUD Operations Using Angular 12

In this blog post we will learn how to create a simple Notes application using Ionic 5 and Angular 12.

The Notes application will be able to perform create, read, update, and delete operations, and the Firestore database in Firebase will be used as the data store.

The Note Application


Setting up the Firebase Project to save notes

For our Notes app we are going to save notes in Firebase, and for that we need to create a new Firebase project. If you have an existing Firebase project, you can use that instead.

Creating a Firebase project

To create a Firebase project head over to Firebase console and click on Add Project.

Firebase Console

Give your project a name, it’s Notes in my case.

You can change your project’s unique name here, although that doesn’t matter.

Project name

Optionally, enable Google Analytics for your project.

Enable Google Analytics

If you have chosen to enable Google Analytics then select or create the account for the Google Analytics.

Account for Google Analytics

In the following I have selected a new Google Analytics account.

New Google Analytics account

Then finally click on Create project.

This will take a few minutes to set up the new Firebase project.

Setting up Firebase project.
Firebase project setup done.

Click on continue.

and now we are inside our brand new Firebase project.

Firebase project overview

Getting Firestore Database settings.

To store the Notes inside Firebase, we need some configuration settings from the Firebase project.

To get the settings, go to the Project settings.

Project settings option

Select the web app option from the available platform options.

Web app option

Click on Register app.

App name

Copy the firebaseConfig from the next screen. We are going to need this in our Ionic app.

Then click on Continue to console at the bottom of the screen.

In my case, one setting is missing from the firebaseConfig: databaseURL.

The databaseURL specifies which database the data will be saved to.

If it is missing in your case too, add it like the following.

  var firebaseConfig = {
    apiKey: "AIzaSyDv6oqHgHBOV60BpXa0W_JSxKDOrYuT7_M",
    authDomain: "notes-3b077.firebaseapp.com",
    projectId: "notes-3b077",
    databaseURL: "https://notes-3b077.firebaseio.com/",
    storageBucket: "notes-3b077.appspot.com",
    messagingSenderId: "435123651850",
    appId: "1:435123651850:web:6f2e0f5f60decf031166f3",
    measurementId: "G-ZBDCPDDNFX"
  };

The databaseURL format is as follows.

https://{your-app-unique-name}.firebaseio.com/

We have now created the Firebase project, and we have the required configurations.

Let’s now create our Ionic application for saving the notes.

Creating a new Ionic Application of Angular Type

At the time of writing, the latest version of Ionic is Ionic 5, and the latest version of Angular is 12.

To create an Ionic application we need the Ionic CLI.

Install the Ionic CLI globally, or update it to the latest version if you have already installed it.

npm install -g @ionic/cli

Create a new Ionic application of Angular type with blank template.

ionic start ionic-firebase-crud blank --type=angular

Move into the Ionic app you just created.

cd ionic-firebase-crud

Configuring the Ionic application to support Firebase

Before we can start saving notes to Firebase from the Notes app, we need to add some configuration.

Adding Firebase Configurations in the Application

Add the firebaseConfig that we copied earlier in the environment.ts file.

Firebase config in environment.ts
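Assuming placeholder values, src/environments/environment.ts would end up looking like this; replace each placeholder with the corresponding value from your own firebaseConfig:

```typescript
// src/environments/environment.ts - every firebaseConfig value below is a
// placeholder; paste in the values copied from your Firebase project
export const environment = {
  production: false,
  firebaseConfig: {
    apiKey: '<api-key>',
    authDomain: '<project-id>.firebaseapp.com',
    projectId: '<project-id>',
    databaseURL: 'https://<project-id>.firebaseio.com/',
    storageBucket: '<project-id>.appspot.com',
    messagingSenderId: '<sender-id>',
    appId: '<app-id>'
  }
};
```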

Installing Required Firebase packages in the Ionic Application

To work with Firebase we need to add a few npm packages.

  • firebase
  • @angular/fire

Install the firebase and @angular/fire packages using npm and save them to the package.json file.

npm install firebase @angular/fire --save

This will add the following dependencies in package.json.

  "dependencies": {
    "@angular/common": "~12.0.1",
    "@angular/core": "~12.0.1",
    "@angular/fire": "^6.1.5",
    "@angular/forms": "~12.0.1",
    "@angular/platform-browser": "~12.0.1",
    "@angular/platform-browser-dynamic": "~12.0.1",
    "@angular/router": "~12.0.1",
    "@ionic/angular": "^5.5.2",
    "firebase": "^8.6.8",
    "rxjs": "~6.6.0",
    "tslib": "^2.0.0",
    "zone.js": "~0.11.4"
  },

Adding Firebase modules in the application

Import AngularFireModule and AngularFirestoreModule modules in the app.module.ts.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouteReuseStrategy } from '@angular/router';
import { IonicModule, IonicRouteStrategy } from '@ionic/angular';
import { AppComponent } from './app.component';
import { AppRoutingModule } from './app-routing.module';
// Firebase
import { AngularFireModule } from '@angular/fire';
import { AngularFirestoreModule} from '@angular/fire/firestore';
//Environment
import { environment } from 'src/environments/environment';

@NgModule({
  declarations: [AppComponent],
  entryComponents: [],
  imports: [
    BrowserModule, 
    IonicModule.forRoot(), 
    AppRoutingModule,
    AngularFireModule.initializeApp(environment.firebaseConfig),
    AngularFirestoreModule
  ],
  providers: [{ provide: RouteReuseStrategy, useClass: IonicRouteStrategy }],
  bootstrap: [AppComponent],
})
export class AppModule {}

The Firebase configuration is now complete.

Adding Note Model class

Note will be our domain model for the application.

Add a model class for Notes.

export class Note{
    id: string;
    title: string;
    content: string;
}

Let’s now create a service to handle the Firebase CRUD operations.

Creating Firebase Service to perform CRUD operations with Firestore

Create a new service class FirebaseService for handling Firebase CRUD operations.

ionic generate service services/firebase

or

ionic g s services/firebase

Create and initialize a collectionName field to store the notes collection name.

  private collectionName: string = "notes";

Inject AngularFirestore in the service class constructor.

  constructor(private firestore: AngularFirestore) { }

Add addNote method for creating a note.

 addNote(note: Note) {
    return this.firestore.collection(this.collectionName).add({...note});
  }

Add getNote method to get a single note.

  getNote(id: string): Observable<Note>
  {
    return this.firestore.collection(this.collectionName).doc<Note>(id).snapshotChanges()
    .pipe(
      map(a => {
        const id = a.payload.id;
        const data = a.payload.data();
        return { id, ...data };
      })
    );
  }

Add getNotes method to get all the notes.

  getNotes(): Observable<Note[]> {
    return this.firestore.collection<Note>(this.collectionName).snapshotChanges().pipe(
      map(actions => {
        return actions.map(a => {
          const id = a.payload.doc.id;
          const data = a.payload.doc.data();
          return { id, ...data };
        });
      })
    );
  }

Add updateNote method to update the note.

  updateNote(id: string, note: Note): Promise<void> {
   return this.firestore.collection(this.collectionName).doc<Note>(id).update(note);
  }

Add deleteNote method to delete a note.

  deleteNote(id: string): Promise<void> {
    return this.firestore.collection(this.collectionName).doc(id).delete();
  }

The completed FirebaseService class should look as follows.

import { Injectable } from '@angular/core';
import { AngularFirestore } from '@angular/fire/firestore';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';
import { Note } from '../models/note';
@Injectable({
  providedIn: 'root'
})
export class FirebaseService {
  private collectionName: string = "notes";
  constructor(private firestore: AngularFirestore) { }
  addNote(note: Note) {
    return this.firestore.collection(this.collectionName).add({...note});
  }
  getNote(id: string): Observable<Note>
  {
    return this.firestore.collection(this.collectionName).doc<Note>(id).snapshotChanges()
    .pipe(
      map(a => {
        const id = a.payload.id;
        const data = a.payload.data();
        return { id, ...data };
      })
    );
  }
  getNotes(): Observable<Note[]> {
    return this.firestore.collection<Note>(this.collectionName).snapshotChanges().pipe(
      map(actions => {
        return actions.map(a => {
          const id = a.payload.doc.id;
          const data = a.payload.doc.data();
          return { id, ...data };
        });
      })
    );
  }
  updateNote(id: string, note: Note): Promise<void> {
   return this.firestore.collection(this.collectionName).doc<Note>(id).update(note);
  }
  deleteNote(id: string): Promise<void> {
    return this.firestore.collection(this.collectionName).doc(id).delete();
  }
}

We are now ready to use this Firebase service to perform CRUD operations from different Ionic pages.

Creating a List Page to list all the Notes

Let’s create a new Ionic Page to list all the notes from Firebase.

ionic g page list-note

Add a notes field to hold the notes collection.

  notes: Observable<Note[]>;

Inject FirebaseService into the List Note page component class.

  constructor(private noteService: FirebaseService) {
  }

Initialize notes field.

  ngOnInit() {
    this.notes = this.noteService.getNotes();
  }

The following is the complete List Note page component class.

import { Component, OnInit } from '@angular/core';
import { FirebaseService } from '../services/firebase.service';
import { Note } from '../models/note';
import { Observable } from 'rxjs';
@Component({
  selector: 'app-list-note',
  templateUrl: './list-note.page.html',
  styleUrls: ['./list-note.page.scss'],
})
export class ListNotePage implements OnInit {
  notes: Observable<Note[]>;
  constructor(private noteService: FirebaseService) {
  }
  ngOnInit() {
    this.notes = this.noteService.getNotes();
  }
}

Update the List Note template with the following.

<ion-list>
  <ion-item [routerLink]="['/display-note', item.id]" *ngFor="let item of (notes | async)">
    <ion-label>
      <h2>{{item.title}}</h2>
      <p>{{item.content}}</p>
    </ion-label>
  </ion-item>
</ion-list>

Add the List Note component to the home page template, i.e. home.page.html.

<ion-header [translucent]="true">
  <ion-toolbar color="primary">
    <ion-title>
      Notes
    </ion-title>
  </ion-toolbar>
</ion-header>
<ion-content [fullscreen]="true">
  <app-list-note></app-list-note>
</ion-content>
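Note that for the app-list-note selector to work inside the home page, the ListNotePage component must be exported from its own module, and that module imported into HomePageModule. The following is a sketch; the module and file names assume the default layout generated by the Ionic CLI:

```typescript
// list-note.module.ts: add an exports entry for ListNotePage
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { IonicModule } from '@ionic/angular';
import { ListNotePageRoutingModule } from './list-note-routing.module';
import { ListNotePage } from './list-note.page';

@NgModule({
  imports: [CommonModule, IonicModule, ListNotePageRoutingModule],
  declarations: [ListNotePage],
  // Exporting the component makes <app-list-note> available
  // to any module that imports ListNotePageModule.
  exports: [ListNotePage]
})
export class ListNotePageModule {}
```

Then add ListNotePageModule to the imports array of HomePageModule in home.module.ts.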

Similarly, we will create the other pages for the Get, Create, Update and Delete operations.

Creating Display Page to show single record from Firebase

Create a Display Page component.

ionic g page display-note

Display Note component class

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, Router } from '@angular/router';
import { FirebaseService } from '../services/firebase.service';
import { Note } from '../models/note';
@Component({
  selector: 'app-display-note',
  templateUrl: './display-note.page.html',
  styleUrls: ['./display-note.page.scss'],
})
export class DisplayNotePage implements OnInit {
  note: Note;
  constructor(private router: Router, private activatedRoute: ActivatedRoute, private noteService: FirebaseService) {
    this.note = new Note();
  }
  ngOnInit() {
    this.noteService.getNote(this.activatedRoute.snapshot.params.id)
      .subscribe(data => {
        this.note = data;
      });
  }
}

Display note template

<ion-header>
  <ion-toolbar>
    <ion-buttons slot="start">
      <ion-back-button></ion-back-button>
    </ion-buttons>
    <ion-title>{{note.title}}</ion-title>
    <ion-buttons slot="end">
      <ion-button color="primary" [routerLink]="['/edit-note', note.id]">
        <ion-icon name="create-outline"></ion-icon>
      </ion-button>
    </ion-buttons>
  </ion-toolbar>
</ion-header>
<ion-content class="ion-padding">
  <p>{{note.content}}</p>
</ion-content>

Creating Create Page to add a new record in Firebase

Create a Create Note Component.

ionic g page create-note

Update Create Note Component class with the following.

import { Component, OnInit } from '@angular/core';
import { FormBuilder, FormControl, FormGroup, Validators } from '@angular/forms';
import { Router } from '@angular/router';
import { FirebaseService } from '../services/firebase.service';
import { Note } from '../models/note';
@Component({
  selector: 'app-create-note',
  templateUrl: './create-note.page.html',
  styleUrls: ['./create-note.page.scss'],
})
export class CreateNotePage implements OnInit {
  noteForm : FormGroup;
  constructor(private formBuilder: FormBuilder, private router: Router, private noteService: FirebaseService) {
    this.noteForm = this.formBuilder.group({
      title: new FormControl('', Validators.required),
      content: new FormControl('', Validators.required)
    });
  }
  ngOnInit() {
  }
  onSubmit() {
    const note: Note = Object.assign({}, this.noteForm.value);
    this.noteService.addNote(note)
      .then(_ => {
        this.router.navigate(['/home']);
      });
  }
  onReset(){
    this.noteForm.reset();
  }
}

Create Note Template

<ion-header>
  <ion-toolbar color="primary">
    <ion-buttons slot="start">
      <ion-back-button></ion-back-button>
    </ion-buttons>
    <ion-title>Add Note</ion-title>
    <ion-buttons slot="end" tooltip="Reset form" (click)="onReset()">
      <ion-icon name="refresh-outline"></ion-icon>
    </ion-buttons>
  </ion-toolbar>
</ion-header>
<ion-content class="ion-padding">
  <form [formGroup]="noteForm" (ngSubmit)="onSubmit()" novalidate>
    <ion-item>
      <ion-input placeholder="Enter notes title" autofocus=true formControlName="title"></ion-input>
    </ion-item>
  
    <ion-item>
      <ion-textarea placeholder="Enter your notes here..." rows="10" formControlName="content"></ion-textarea>
    </ion-item>
  
    <ion-button color="primary" class="ion-float-left" type="submit" [disabled]="!noteForm.valid">Save</ion-button>
  </form>
</ion-content>

Update home.page.html to add a Fab Add button.

<ion-header [translucent]="true">
  <ion-toolbar color="primary">
    <ion-title>
      Notes
    </ion-title>
  </ion-toolbar>
</ion-header>
<ion-content [fullscreen]="true">
  <app-list-note></app-list-note>
  <!-- fab placed to the top end -->
  <ion-fab vertical="bottom" horizontal="end" slot="fixed">
    <ion-fab-button [routerLink]="['/create-note']">
      <ion-icon name="add"></ion-icon>
    </ion-fab-button>
  </ion-fab>
</ion-content>

Update create-note.module.ts to add ReactiveFormsModule.

We are using Reactive Forms for forms in Angular.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { IonicModule } from '@ionic/angular';
import { CreateNotePageRoutingModule } from './create-note-routing.module';
import { CreateNotePage } from './create-note.page';
@NgModule({
  imports: [
    CommonModule,
    FormsModule,
    ReactiveFormsModule,
    IonicModule,
    CreateNotePageRoutingModule
  ],
  declarations: [CreateNotePage]
})
export class CreateNotePageModule {}

Creating Edit Page to update a record in Firebase

Create the Edit Note page component.

ionic g page edit-note

Edit Note component class

import { Component, OnInit } from '@angular/core';
import { FormBuilder, FormControl, FormGroup, Validators } from '@angular/forms';
import { ActivatedRoute, Router } from '@angular/router';
import { FirebaseService } from '../services/firebase.service';
import { Note } from '../models/note';
@Component({
  selector: 'app-edit-note',
  templateUrl: './edit-note.page.html',
  styleUrls: ['./edit-note.page.scss'],
})
export class EditNotePage implements OnInit {
  id: string='';
  noteForm : FormGroup;
  constructor(private formBuilder: FormBuilder, private router: Router, private activatedRoute: ActivatedRoute, private noteService: FirebaseService) {
    this.noteForm = this.formBuilder.group({
      title: new FormControl('', Validators.required),
      content: new FormControl('', Validators.required)
    });
  }
  ngOnInit() {
    this.id= this.activatedRoute.snapshot.paramMap.get("id");
    this.noteService.getNote(this.activatedRoute.snapshot.paramMap.get("id"))
      .subscribe(data => {
        this.noteForm = this.formBuilder.group({
          title: new FormControl(data.title, Validators.required),
          content: new FormControl(data.content, Validators.required)
        });
      });
  }
  onSubmit() {
    const note: Note = Object.assign({}, this.noteForm.value);
    this.noteService.updateNote(this.id, note)
    .then(()=>{
      this.router.navigate(['/home']);
    });
  }
}

Update Note Template

<ion-header>
  <ion-toolbar>
    <ion-buttons slot="start">
      <ion-back-button></ion-back-button>
    </ion-buttons>
    <ion-title>Edit Note</ion-title>
  </ion-toolbar>
</ion-header>
<ion-content class="ion-padding">
  <form [formGroup]="noteForm" (ngSubmit)="onSubmit()" novalidate>
    <ion-item>
      <ion-input placeholder="Enter notes title" autofocus=true formControlName="title"></ion-input>
    </ion-item>
  
    <ion-item>
      <ion-textarea placeholder="Enter your notes here..." rows="10" formControlName="content"></ion-textarea>
    </ion-item>
  
    <ion-button color="primary" class="ion-float-left" type="submit" [disabled]="!noteForm.valid">Save</ion-button>
  </form>
</ion-content>

Update edit-note.module.ts to add ReactiveFormsModule.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { IonicModule } from '@ionic/angular';
import { EditNotePageRoutingModule } from './edit-note-routing.module';
import { EditNotePage } from './edit-note.page';
@NgModule({
  imports: [
    CommonModule,
    FormsModule,
    ReactiveFormsModule,
    IonicModule,
    EditNotePageRoutingModule
  ],
  declarations: [EditNotePage]
})
export class EditNotePageModule {}

Deleting a Record Using Delete Operation

To delete a note, we will add a Delete button on the Display Note page.

<ion-header>
  <ion-toolbar>
    <ion-buttons slot="start">
      <ion-back-button></ion-back-button>
    </ion-buttons>
    <ion-title>{{note.title}}</ion-title>
    <ion-buttons slot="end">
      <ion-button color="primary" [routerLink]="['/edit-note', note.id]">
        <ion-icon name="create-outline"></ion-icon>
      </ion-button>
      <ion-button color="danger" (click)="onDelete()">
        <ion-icon name="trash-outline"></ion-icon>
      </ion-button>
    </ion-buttons>
  </ion-toolbar>
</ion-header>
<ion-content class="ion-padding">
<p>{{note.content}}</p>
</ion-content>

Update the display-note.page.ts component class to add the delete method.

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, Router } from '@angular/router';
import { FirebaseService } from '../services/firebase.service';
import { Note } from '../models/note';
@Component({
  selector: 'app-display-note',
  templateUrl: './display-note.page.html',
  styleUrls: ['./display-note.page.scss'],
})
export class DisplayNotePage implements OnInit {
  note: Note;
  constructor(private router: Router, private activatedRoute: ActivatedRoute, private noteService: FirebaseService) {
    this.note = new Note();
  }
  ngOnInit() {
    this.noteService.getNote(this.activatedRoute.snapshot.params.id)
      .subscribe(data => {
        this.note = data;
      });
  }
  onDelete(){
    this.noteService.deleteNote(this.note.id)
    .then(()=>{
      this.router.navigate(['/home']);
    });
  }
}

Conclusion

In this post we learned how to perform CRUD operations on the Firestore database in Firebase from an Ionic 5 application created with Angular 12.

We started by setting up the Firebase project to get the Firestore database settings. We then created the Ionic 5 application using the Ionic CLI and added support for all of the Read, Create, Update and Delete operations.

Please leave your feedback or a comment if you are facing any issue.

Record Merging and Merge Tracking in Depth in Microsoft Dynamics 365

In Dynamics 365 we capture data, and records can easily become duplicated for multiple reasons, such as bulk record imports.

Microsoft Dynamics 365 provides out-of-the-box (OOB) merge functionality, which is quite helpful for deduplicating records and cleaning up data.

Merging in Dynamics 365

In Dynamics 365 we can merge Accounts, Contacts, Leads or Incidents.

The merging process in Dynamics 365 takes two records of the same entity, the Master record and the Subordinate record; you specify which of the two is the master.

On merging, the subordinate record is deactivated and linked to the master record, and any related records of the subordinate record move to the master record.

Let’s explore the merging process in more detail and extend merge tracking in Dynamics 365.

In this blog we will cover:

  • Merge process in Dynamics 365 for accounts, contacts and leads.
  • Merge process in Dynamics 365 for incidents.
  • Merge triggers in Dynamics 365.
    • Manual selection of records in grid
    • Duplicate detection rule configuration
    • Calling MergeRequest SDK Message
    • Calling Web API Merge Action
  • The new enhanced merging experience and options in Dynamics 365.
    • New options
      • Merge records by choosing fields with data
      • View fields with conflicting data
      • Enable Parent Check
      • Select all fields in this section
  • Limitations of merging in Dynamics 365.
  • Hidden merge tracking fields in Dynamics 365.
  • Extending merge tracking functionality in Dynamics 365.
    • Create a system view to show Master and Subordinate record in a view.
    • Create visualization for merged records.
    • Record merged fields and subordinate record reference on master record for tracking.
  • Security considerations for Merging in Dynamics 365.

Merge process in Dynamics 365 for accounts, contacts and leads

When you choose to merge Accounts, Contacts or Leads, you are presented with the following screen.

On the merge screen you specify which record is the Master (primary); the other one becomes the Subordinate record. Optionally, you can select fields from the subordinate record to override the master record’s fields; by default, all of the master record’s fields are selected.

On the merge screen you are also presented with the following options, which we will discuss later in this blog:

  • Merge records by choosing fields with data
  • View fields with conflicting data
  • Enable Parent Check
  • Select all fields in this section

When you have chosen the master record and fields, click OK to merge.

Merge process in Dynamics 365 for incidents.

The merging process for Incidents is a bit different from Accounts, Contacts and Leads.

For merging incidents you will notice a few differences:

  • You have to select the master record from the grid, into which the other record will be merged.
  • You don’t get the option to choose fields from the subordinate record to move to the master record.
  • The new options available for account, contact and lead merging are not present.

Different merge triggers in Dynamics 365

There are multiple triggers to initiate the merging process:

  • Manual selection of records in grid
  • Duplicate detection rule configuration
  • Calling MergeRequest SDK Message
  • Calling Web API Merge Action

Manual selection of records in grid

In manual merging you select two records in the grid and then click the Merge button to launch the merge screen.

Duplicate detection rule configuration

We can set up a duplicate detection rule, which will launch the merge screen when duplicate records are detected per the rule.

Calling MergeRequest SDK Message

You can merge records programmatically by calling the MergeRequest message available in the SDK.

The following example in C# shows how we can merge two records using the MergeRequest message.

// Create the target for the request.
var target = new EntityReference();
// Id is the GUID of the account that is being merged into.
// LogicalName is the type of the entity being merged to, as a string.
target.Id = _account1Id;
target.LogicalName = Account.EntityLogicalName;

// Create the request.
var merge = new MergeRequest();
// SubordinateId is the GUID of the account being merged.
merge.SubordinateId = _account2Id;
merge.Target = target;
merge.PerformParentingChecks = false;

Console.WriteLine("\nMerging account2 into account1 and adding \"test\" as Address 1 Line 1");

// Create another account to hold new data to merge into the entity.
// If you use the subordinate account object, its data will be merged.
var updateContent = new Account();
updateContent.Address1_Line1 = "test";

// Set the content you want updated on the merged account.
merge.UpdateContent = updateContent;

// Execute the request.
var merged = (MergeResponse)svc.Execute(merge);

Note: The UpdateContent property is not applicable to incidents and will be ignored.

Calling Merge Action using Web API

The following example in TypeScript shows how we can call the Merge Action available in the Web API to merge records.

    export async function ContactMerge() {
        debugger;
        const targetId: string= "71a17064-1ae7-e611-80f4-e0071b661f01"; // replace with target Id
        const subordinateId: string= "73a17064-1ae7-e611-80f4-e0071b661f01"; // replace with subordinate Id
        const contactMergeRequest: any = {};
        contactMergeRequest.Target = {
            entityType: "contact",
            id: targetId
        }
        contactMergeRequest.Subordinate = {
            entityType: "contact",
            id: subordinateId
        }
        contactMergeRequest.UpdateContent = {
            jobtitle: "{Updated Job title}",
             "@odata.type": "Microsoft.Dynamics.CRM.contact"
        }
        contactMergeRequest.PerformParentingChecks = false;
        contactMergeRequest.getMetadata = function () {
            return {
                boundParameter: null,
                parameterTypes: {
                    "Target": {
                        "typeName": "mscrm.contact",
                        "structuralProperty": 5
                    },
                    "Subordinate": {
                        "typeName": "mscrm.contact",
                        "structuralProperty": 5
                    },
                    "PerformParentingChecks":{
                        "typeName": "Edm.Boolean",
                        "structuralProperty": 1
                    },
                    "UpdateContent":{
                        "typeName": "mscrm.contact",
                        "structuralProperty": 5
                    }
                },
                operationType: 0,
                operationName: "Merge"
            }
        }
        const response = await Xrm.WebApi.online.execute(contactMergeRequest)
        console.log(response);
    }

The new enhanced merging experience and options in Dynamics 365.

When you select two records (Contacts in this case) in the grid and click the Merge button, the following screen appears with a few new options.

New Merge screen

The merge screen provides a few new options and lets you choose which fields from the master or subordinate record will finally be saved on the master record.

When you click OK, merging begins.

On completion, a success message appears as follows.

The selected records are merged and the subordinate record is deactivated.

As you can see, the new enhanced merging experience provides four new options:

  • Merge records by choosing fields with data
  • View fields with conflicting data 
  • Select all fields in this section
  • Enable Parent Check

Let’s discuss each of them.

Merge records by choosing fields with data

If you select this option, whichever record (master or subordinate) has data in a field is selected for that field; if both records have data in the same field, the master record’s field is selected.

View fields with conflicting data 

If you select this option, only fields with different data will be shown; fields with the same data will be hidden.

Select all fields in this section

If you select this, all the fields under the section will be selected. This is helpful when a section has many fields.

Enable Parent Check

This is interesting.

If this option is checked and the records have different parent Accounts, then the following error will be thrown on merging.

Error

Unable to merge because sub-entity will be parented differently. You can disable the parent check prior to execution as part of Merge dialog.

Limitations of merge functionality in Dynamics 365.

The merging feature in Microsoft Dynamics 365 is great but has some limitations too.

Let’s check some of the important merging limitations you should be aware of.

  • Merging is available only for the Account, Contact, Lead and Case system entities.
  • Merging is not available for custom entities yet, though there are suggestions to Microsoft to support merging on custom entities.
  • Once records are merged, you don’t know which data was merged from the subordinate to the master record, unless you have enabled auditing on the master, subordinate and related entities.

Hidden merge tracking fields in Dynamics 365.

After merging, the subordinate record shows a notification similar to the following.

The record was merged with {record name}, and then deactivated.

The notification on the subordinate record provides a link to the master record.

But on the master record there is no such link to the subordinate record.

So there is no quick way to navigate to the subordinate record from the master record.

How is the merged notification displayed on the subordinate record?

These entities contain two hidden fields for merge tracking, as shown in the following screenshot:

  • Merged: Boolean; shows whether the record has been merged with a master record.
  • Master ID: Lookup; unique identifier of the master record.

For internal merge tracking, Dynamics 365 updates the values of these two fields on the subordinate record; they are then used to display the notification on the subordinate record.

Please note that these fields are not updated on the master record, and because merge is not available for other entities, you will not find these fields on entities other than Account, Contact, Lead and Case.

When a merge happens, the following fields are updated on the subordinate record:

Merged: Will be set to true, to indicate this record has been merged with another record.

Master ID: Will be assigned a reference to the master record.

Querying merged records

Let’s now try to find the merged records through Advanced Find.

As you can see, these fields are not available in Advanced Find, so we cannot filter for merged records there.

These fields are not available even through Add Columns in a view, so we cannot see them in the grid.

We know that if fields are not available in Advanced Find, they may have the Searchable option set to false in the field editor. Let’s check.

Enable Searchable option on fields

The Searchable option controls whether a field appears in Advanced Find; if Searchable is false, the field will not appear there.

For the Merged field, Searchable is disabled in the classic field editor.

For the Master ID field, Searchable is disabled in the classic field editor.

Let’s check through the Power Apps portal, as some features are available there.

Merged field

The Merged field’s Searchable option is editable, so I checked it and saved.

Master ID field

The Master ID field’s Searchable option is also editable, so I checked it and saved.

In the Power Apps portal both fields were editable, so now we should be able to query them in Advanced Find. Let’s check again.

Hmmmm….

The Merged and Master ID fields are still not available in Advanced Find. Why?

Because Dataverse didn’t save those changes. Let’s verify.

So we cannot find merged records using views with a filter on the Merged field.

Visualization options

Since we don’t have views with a Merged field filter, we cannot create visualizations for merged records.

Let’s try to add the fields on a form.

In the classic editor

We cannot find the Merged and Master ID fields in the classic form editor.

Let’s check the modern editor.

The fields are not available to add even in the modern form editor.

Querying Dataverse for Merged and MasterId

Let’s try now to query Dataverse.

Query by FetchXML

Let’s see if we can query Dataverse using the following FetchXML, which filters records on the merged field.

<fetch top="50" >
  <entity name="contact" >
    <attribute name="fullname" />
    <attribute name="masterid" />
    <attribute name="merged" />
    <filter>
      <condition attribute="merged" operator="eq" value="1" />
    </filter>
  </entity>
</fetch>

Result

Yes, we are able to query through FetchXML, and we can see:

Merged is true for the subordinate record.

MasterId is set to the Id of the master record.

Note that the result above shows only the subordinate record, not the master record; the merged field is true only on subordinate records.

If you need to find the master record, you can join on the masterid field.
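As a sketch, a FetchXML query joining each subordinate contact to its master record through masterid might look like the following (the link-entity alias and selected attributes are illustrative):

```xml
<fetch top="50" >
  <entity name="contact" >
    <attribute name="fullname" />
    <attribute name="merged" />
    <filter>
      <condition attribute="merged" operator="eq" value="1" />
    </filter>
    <!-- join the master contact referenced by the subordinate's masterid -->
    <link-entity name="contact" from="contactid" to="masterid" alias="master" >
      <attribute name="fullname" />
    </link-entity>
  </entity>
</fetch>
```

Each row returned would then carry both the subordinate’s fullname and the master’s fullname (prefixed with the alias).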

Query by Web API

Let’s try the Web API now.

Xrm.WebApi.online.retrieveMultipleRecords("contact", "?$select=fullname,_masterid_value,merged&$filter=merged eq true").then(
    function success(results) {
        for (var i = 0; i < results.entities.length; i++) {
            var fullname = results.entities[i]["fullname"];
            var _masterid_value = results.entities[i]["_masterid_value"];
            var _masterid_value_formatted = results.entities[i]["_masterid_value@OData.Community.Display.V1.FormattedValue"];
            var _masterid_value_lookuplogicalname = results.entities[i]["_masterid_value@Microsoft.Dynamics.CRM.lookuplogicalname"];
            var merged = results.entities[i]["merged"];
            var merged_formatted = results.entities[i]["merged@OData.Community.Display.V1.FormattedValue"];
        }
    },
    function(error) {
        Xrm.Utility.alertDialog(error.message);
    }
);

Web API Query Result

So we are able to get the results with Web API as well.

Extending merge tracking functionality in Dynamics 365

So far we have learned the OOB merge behaviour and some of the limitations of merging in Dynamics 365. Let’s now try to extend the merge functionality with some customizations.

We will try the following to extend the merging usability:

  • Create view for Merged Contacts by updating view FetchXML and Layout XML.
  • Create visualizations for merged contacts.
  • Adding fields on form by updating FormXML.
  • Subordinate Lookup and Change content tracking on master Contact record.
    • Add custom fields on Contact record.
    • Write plugin on merge to populate Subordinate lookup field on master record.

Create view for Merged Contacts by updating view FetchXML and Layout XML

As there is no OOB way to create a view for merged records, let’s try to create one by editing the view FetchXML.

  • Create a view, for example Merged Contacts, in a solution and export it as an unmanaged solution.
  • Extract the files and update the FetchXML filter in the customizations.xml file as follows.
              <fetch version="1.0" output-format="xml-platform" mapping="logical">
                <entity name="contact">
                  <attribute name="fullname" />
                  <attribute name="masterid" />
                  <attribute name="contactid" />
                  <filter>
                    <condition attribute="merged" operator="eq" value="1" />
                  </filter>
                </entity>
              </fetch>
  • Update the layout XML too, to add the masterid cell, as follows.
            <layoutxml>
              <grid name="resultset" jump="fullname" select="1" icon="1" preview="1">
                <row name="result" id="contactid">
                  <cell name="fullname" width="200" />
                  <cell name="masterid" width="200" />
                </row>
              </grid>
            </layoutxml>
  • Zip the files again.
  • Import the solution back into the environment and publish.

And you will see the view with the merged records.

From the view we can observe the following:

  • We have a view now showing merged records, thanks to the filter applied in the view FetchXML.
  • Both the Subordinate and Master lookups are available in the same view, which lets us see which records were merged together.

Activating inactive subordinate records

Let’s try to activate an inactive subordinate record from the above view and see what happens.

The subordinate record is active again; let’s go back to the Merged Contacts view.

And the activated record has gone from the view.

The record is removed from the Merged Contacts view because it is no longer a subordinate. Activation has the following effects on the Merged and MasterId fields:

  • Merged field: reset back to false.
  • MasterId field: set to null.

Visualizations for merged contacts

Because we have the Merged Contacts view available now, we can create nice visualizations using it.

Adding fields on form by updating FormXML

I was not able to successfully display Merged and MasterId on forms; if you are able to, please share how.

Subordinate Lookup and Update Content tracking on master Contact record

As we have already seen, the subordinate record contains a reference to the master record in the MasterId field, but there is no reference on the master record to the subordinate record.

Let’s do the custom implementation for this.

Add Custom fields on Contact

I have added the following two fields on Contact; adjust them as per your needs:

  • Subordinate: Contact lookup for the subordinate reference.
  • Merge Update Content: String field to store the content of the fields selected from the subordinate record, to track which values were migrated from the subordinate record to the master record.

Write a plugin on merge to populate the Subordinate field on the master record

Write and register the following plugin on the post-operation stage of the Merge message.

using Microsoft.Xrm.Sdk;
using System;
using System.Collections.Generic;
namespace SureshMaurya.Merging.Plugins
{
    public class PostMergeOperation : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var organizationServiceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            var organizationService = organizationServiceFactory.CreateOrganizationService(context.UserId);
            var primaryEntityReference = ((EntityReference)context.InputParameters["Target"]);
            var subordinateId = (Guid)context.InputParameters["SubordinateId"];
            var updateContentEntity = (Entity)context.InputParameters["UpdateContent"];
            tracingService.Trace($"Primary Record Id {primaryEntityReference.Id}");
            tracingService.Trace($"Subordinate Record Id {subordinateId}");
            UpdateContentOnPrimaryRecord(organizationService, primaryEntityReference, subordinateId, updateContentEntity);
        }
        private void UpdateContentOnPrimaryRecord(IOrganizationService organizationService, EntityReference primaryEntityReference, Guid subordinateId, Entity updateContentEntity)
        {
            //Prepare Update Content String
            List<string> updateContentCollection = new List<string>();
            foreach (var attribute in updateContentEntity.Attributes)
            {
                updateContentCollection.Add($"{attribute.Key}: {GetAttributeValue(updateContentEntity, attribute.Key)}");
            }
            var updateContentString = string.Join("\n", updateContentCollection);
            //Update Primary Record with Update Content
            Entity entity = new Entity(primaryEntityReference.LogicalName, primaryEntityReference.Id);
            entity["msdc_subordinate"] = new EntityReference() { Id = subordinateId, LogicalName = primaryEntityReference.LogicalName};
            entity["msdc_mergeupdatecontent"] = updateContentString;
            organizationService.Update(entity);
        }
        private string GetAttributeValue(Entity entity, string key)
        {
            if (entity.Attributes.ContainsKey(key))
            {
                if (entity.Attributes[key] is EntityReference)
                {
                    return $"{{Id: {((EntityReference)entity[key]).Id}, Name: {((EntityReference)entity[key]).Name}}}";
                }
                if (entity.Attributes[key] is OptionSetValue)
                {
                    return ((OptionSetValue)entity[key]).Value.ToString();
                }
                //Fallback: ToString() covers the remaining field data types.
                return entity[key]?.ToString() ?? string.Empty;
            }
            return string.Empty;
        }
    }
}

Once the plugin is registered successfully, try to merge two contact records and select a few fields from the subordinate record.

After merging, open the master contact record and observe the Subordinate and Merge Update Content fields. They should look similar to the following.

Subordinate Lookup contains reference to the Subordinate record.

The Merge Update Content field shows which field data was moved to the master from the subordinate record.

Security considerations for Merging in Dynamics 365

Merging in Dynamics 365 is very powerful, but it can have unintended consequences. There are a few security considerations you should keep in mind while merging, as explained in the Microsoft docs.

Summary

In this blog we learned a lot about merging in Microsoft Dynamics 365. We started with what merging is in Dynamics 365 and what the different triggers for merging are. Then we explored the new enhanced experience for merging in Dynamics 365.

Merging is tracked by two hidden fields in Dynamics 365; we explored what those two fields are and how we can surface them in a view so that they are easily accessible and can serve as the basis for visualisations.

We then went ahead and wrote a plugin to track the merged record and its migrated fields on the master record.

I hope you liked this blog post about merging and how we can leverage the hidden features of merging, please share your thoughts or anything to improve.

Simplified Show/Hide Ribbon Button in Dynamics 365 based on Asynchronous Operation Result using TypeScript ES6

Recently I had a requirement to show/hide a ribbon button based on the result of an asynchronous operation. In your case the async operation could be anything, such as getting data from an external API, or querying Dynamics 365 using Xrm.WebApi.retrieveRecord, which returns a promise instead of the actual record value.

Toggling a ribbon button’s visibility based on an asynchronous operation is tricky: the function call exits before the asynchronous operation completes and its result is available, so the true result is never returned and the button show/hide doesn’t work as expected.

With a synchronous call it works perfectly fine because the function doesn’t exit until the required value is available.

In search of a solution I found a nice article by Andrew Butenko, which actually solved the problem and works fine, but the implementation is tricky and not straightforward.

The solution in the linked article has the following high-level steps:

  • Maintain two flags, isAsyncOperationCompleted and isButtonEnabled, both initialised to false and declared in the outer scope of the function.
  • On page load, the function defined on the enable rule is called.
  • Inside the function, if isAsyncOperationCompleted is true, return isButtonEnabled; this is never the case on the first call.
  • Execution continues and triggers the async operation, but the function exits before the result is available.
  • When the async operation result is available, it sets isAsyncOperationCompleted to true, updates the isButtonEnabled flag based on the result, and if isButtonEnabled is true calls formContext.ui.refreshRibbon().
  • The refresh calls the enable rule function again, and this time, because isAsyncOperationCompleted is true in the outer scope, the correct value of isButtonEnabled (set in the previous call) is returned.
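The flag-based flow above can be sketched in plain JavaScript as follows. Note that someAsyncOperation and the stub formContext are illustrative stand-ins for the real async check and the ribbon framework’s form context, not actual platform code:

```javascript
// Outer-scope flags, as described in the steps above.
let isAsyncOperationCompleted = false;
let isButtonEnabled = false;
let refreshed = false;

// Stand-in for the real async check (e.g. a Web API call).
function someAsyncOperation() {
    return Promise.resolve(true);
}

// Stand-in for the form context supplied by the ribbon framework.
const formContext = {
    ui: { refreshRibbon: function () { refreshed = true; } }
};

function enableRule(formContext) {
    // Second call: the flags were set when the async operation completed.
    if (isAsyncOperationCompleted) {
        return isButtonEnabled;
    }
    // First call: start the async work and refresh the ribbon on completion.
    someAsyncOperation().then(function (result) {
        isAsyncOperationCompleted = true;
        isButtonEnabled = result;
        formContext.ui.refreshRibbon();
    });
    // The function exits before the result is available, so it returns false.
    return false;
}
```

The first evaluation always returns false; the ribbon refresh triggered on completion causes a second evaluation, which returns the cached flag.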

Clearly it does the trick but it’s tricky, and there should be a better way of doing this.

Another potential issue: on subsequent evaluations it will keep returning the same cached value even if the value changes in the backend.

So, I gave it a try and did something similar but in a slightly different way, improving on the following areas:

  • No outer scope level flags.
  • No extra call to formContext.ui.refreshRibbon(), resulting in better performance.
  • Every button click will return the latest value.
  • Easy to follow code logic.
  • Cleaner code.

In the following example I am using TypeScript for better type checking and IntelliSense support, along with the @types/xrm npm package for Xrm type definitions.

I have used async/await, which makes it much easier and cleaner to implement async calls.

namespace Contact {
    export async function ShowCreditButton(formContext: Xrm.FormContext) {
        const accountLookup = formContext.getAttribute<Xrm.Attributes.LookupAttribute>("parentcustomerid").getValue();
        if (accountLookup) {
            const accountId = accountLookup[0].id;
            const account = await Xrm.WebApi.retrieveRecord("account", accountId, "?$select=creditonhold");
            return account.creditonhold !== true;
        }
        return false;
    }
}

Clearly the above example is a much more concise and cleaner way to implement an asynchronous operation in the enable rule function of a ribbon button.

If you know a better way of solving this problem, please do share. It’s always good to learn better ways of solving the problems.

Update

Microsoft documentation exists for handling asynchronous calls in enable rules using promises. Thanks to Andrew Butenko for sharing the link in the comments.

If you want to understand this in plain JavaScript, refer to the example below: you just have to return a promise and it will be handled by the platform.

// Old synchronous style
/*
function EnableRule() {
   const request = new XMLHttpRequest();
   request.open('GET', '/bar/foo', false);
   request.send(null);
   return request.status === 200 && request.responseText === "true";
}
*/

// New asynchronous style
function EnableRule() {
   const request = new XMLHttpRequest();
   request.open('GET', '/bar/foo');

   return new Promise(function(resolve, reject) {
       request.onload = function (e) {
           if (request.readyState === 4) {
               if (request.status === 200) {
                   resolve(request.responseText === "true");
               } else {
                   reject(request.statusText);
               }
           }
       };
       request.onerror = function (e) {
           reject(request.statusText);
       };

       request.send(null);
   });
}

A few points to note about asynchronous calls in enable rules:

  • Async calls in enable rules are supported in the Unified Interface only, not in the classic web client.
  • There is a time limit of 10 seconds; if the promise is not resolved within 10 seconds, false is returned.
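Given that 10-second budget, one option is to race the async check against a shorter timeout of your own, so the rule degrades to false predictably instead of relying on the platform cutoff. A minimal sketch, where withTimeout and slowCheck are illustrative names rather than platform APIs:

```javascript
// Resolve with the check's result if it finishes in time, otherwise false.
function withTimeout(promise, ms) {
    const fallback = new Promise(function (resolve) {
        setTimeout(function () { resolve(false); }, ms);
    });
    return Promise.race([promise, fallback]);
}

// Stand-in for any asynchronous check (e.g. a Web API call).
function slowCheck() {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(true); }, 100);
    });
}

// An enable rule can then return the raced promise, staying inside the budget.
function enableRule() {
    return withTimeout(slowCheck(), 9000);
}
```

If the check resolves first, its result wins the race; otherwise the button is simply disabled.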

Summary

In this blog post we learned how to toggle a ribbon button’s visibility based on the result of an asynchronous operation defined in the button’s enable rule.

We also learned how to use async/await in TypeScript/JavaScript to write cleaner and more concise code.

Quick Reference – Common Microsoft Power Platform CLI Commands for the Development of PCF Components

To create a PCF component we need to work with multiple Microsoft Power Platform CLI (formerly Power Apps CLI) commands.

This blog lists the common Power Platform CLI commands for quick reference which you may find helpful while working with Power Apps Component Framework.

Following are the high-level steps in the development life cycle of a PCF control.

  • Installing Power Platform CLI
  • Create a new PCF component project
  • Update node packages
  • Build PCF Project
  • Testing / Debugging PCF component
  • Package PCF code components
    • Create solution project
    • Add PCF component reference in the solution
    • Build the solution zip file
  • Manage authentication profiles
    • Create an authentication profile
    • Listing all authentication profiles
    • Switch between authentication profiles
    • Get information about selected environment
    • Delete an authentication profile from the system
    • Delete all authentication profiles from the system
  • Publishing PCF solution file to Dataverse

Installing Power Platform CLI

Before you can execute any Power Platform CLI command you need to install the Power Platform CLI tooling, which you can get from here.

Reference: Get Tooling for Power Apps component framework.

If you have already installed the Power Platform CLI, you can update it to the latest version using the following command.

pac install latest

Once PCF tooling is installed you are ready to use Power Apps CLI commands for creating PCF components.

Create a new PCF component project of field or dataset type

For any new PCF control you need to first create a project for it. Use the following commands to create a new PCF component project.

Create a PCF component project template for field component.

pac pcf init --namespace SampleNamespace --name SampleComponent --template field

Create a PCF component project template for dataset component.

pac pcf init --namespace SampleNamespace --name SampleComponent --template dataset

Update node packages

The newly created PCF component project lists its node package references in the package.json file, but the packages are not installed yet.

Execute the following command in the project root folder to install the node packages. Note this is an npm command, not a Power Apps CLI command.

npm install

Build PCF project

While working with a PCF component you will need to build the project for multiple reasons, such as regenerating ManifestTypes.d.ts for strongly typed references to new properties, or testing your component.

Type the following to build your project

npm run pcf-scripts build

Or simply

npm run build

Following are all of the pcf-scripts commands listed in the package.json file of a newly generated project:

"build": "pcf-scripts build"
"clean": "pcf-scripts clean"
"rebuild": "pcf-scripts rebuild"
"start": "pcf-scripts start"

Testing / Debugging PCF Component

When you are ready to test your PCF component, you can launch the local test harness using the following command. The test harness is handy when you want to test or troubleshoot your code component locally.

npm start

Package PCF code components

When the PCF component is ready to be deployed, it needs to be packaged into a solution zip file, which can then be imported into Dataverse.

Following are the steps to create a solution zip file from the code component:

  • Create a solution project
  • Add reference to the PCF component in the solution
  • Build solution zip file

Create solution project

pac solution init --publisher-name developer --publisher-prefix dev

Add PCF component reference in the solution

pac solution add-reference --path c:\users\SampleComponent

Build the solution zip file

Restore packages

msbuild /t:restore

Build unmanaged solution

msbuild

Build managed release solution

msbuild /p:configuration=Release

Manage authentication profiles

Before you can import a solution file using Power Apps CLI, you need an authentication profile on your system that points to the target Dataverse environment.

All of the authentication profiles get saved in authprofiles.json at the following location.

C:\Users\{username}\AppData\Local\Microsoft\PowerAppsCLI\authprofiles.json

This file gets created when the first authentication profile is created and is deleted along with the last one.

Authentication profiles let you authenticate and import solutions through the command line. You can use the following commands to manage the authentication profiles on your system.

Create an authentication profile

Use the following command to create a new authentication profile on your system.

pac auth create --url https://{org}.crm.dynamics.com

Listing all authentication profiles

If you need to list all of the authentication profiles available on your system, you can use the following.

pac auth list

Switch between authentication profiles

If you have multiple authentication profiles on your system you may need to switch between profiles. You can switch using the following command.

pac auth select --index <index of the active profile>

Get information about selected environment

If you want to get information about the currently selected authentication profile, use the following.

pac org who

Delete an authentication profile from the system

If you want to delete an authentication profile from your system, you can delete it using the following command. If it is the last authentication profile on your system, this will also delete authprofiles.json.

pac auth delete --index <index of the profile>

Delete all authentication profiles from the system

If you want to delete all of the authentication profiles from your system, use the following.

pac auth clear

Publishing PCF solution file to Dataverse

Finally we are ready to publish the PCF component.

Use the following command to push the code component to the environment selected in your active authentication profile.

pac pcf push --publisher-prefix dev
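Putting the commands in this post together, a typical end-to-end session from a new component to publishing might look like the following. The component name, publisher prefix, paths and org URL are illustrative placeholders:

```shell
# Scaffold and build a field component (names are illustrative)
pac pcf init --namespace SampleNamespace --name SampleComponent --template field
npm install
npm run build
npm start                # launch the local test harness

# Package the component into a solution
pac solution init --publisher-name developer --publisher-prefix dev
pac solution add-reference --path c:\users\SampleComponent
msbuild /t:restore
msbuild /p:configuration=Release

# Authenticate against the target environment and push the component
# (pac pcf push deploys the component directly; the msbuild zip can also be imported manually)
pac auth create --url https://yourorg.crm.dynamics.com
pac pcf push --publisher-prefix dev
```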

Summary

In this blog we went through the commonly used Microsoft Power Platform CLI commands for developing, testing, and publishing PCF components.