Controlling Session Timeouts in the Web Client

Since moving to the Web Client, have you seen this at all?

[Screenshot: session timeout message]

You probably have. Did it pop up while you were running a long processing routine? Yup, me too.

What’s Happening?

When you kick off those long-running routines, your Web Client basically sits idle while the Service Tier does the processing. Once that idle time reaches a set limit, the Service Tier shuts down the connection.

Changing the Timeout

When using the Windows Client, it's easy to adjust a few settings (see here) in the Service Tier configuration to control the session timeout for a user. Moving forward it remains just as easy, but if you've tried using the same settings as in the past, you've certainly noticed that they don't work for the Web Client.

The session timeout for the Web Client is now controlled by settings in the Service Tier and in the Web Client configuration. I wish there was just a single setting, but maybe once the Windows Client is gone we’ll see some of the old settings go away as well.

In order to change the timeout for the Web Client, you need to change two settings. Thankfully though, we can easily change these settings using a couple of PowerShell commands.

ClientServicesIdleClientTimeout

This setting is found in the Microsoft Business Central Server configuration.

Set-NAVServerConfiguration -ServerInstance DynamicsNAV130 -KeyName ClientServicesIdleClientTimeout -KeyValue "00:20:00"

The timeout uses this format: [dd.]hh:mm:ss[.ff]

dd – number of days
hh – number of hours
mm – number of minutes
ss – number of seconds
ff – hundredths of a second

Note

  • You can also set this setting to “MaxValue” to indicate no timeout. This is the default value for a new installation.
  • You must restart the service tier for the new setting to take effect.

SessionTimeout

This setting is found in the navsettings.json file on the Microsoft Business Central Web Server.

Set-NAVWebServerInstanceConfiguration -WebServerInstance DynamicsNAV -KeyName SessionTimeout -KeyValue "00:20:00"

The timeout uses this format: [dd.]hh:mm:ss[.ff]

dd – number of days
hh – number of hours
mm – number of minutes
ss – number of seconds
ff – hundredths of a second

Note

  • The default value is “00:20:00” (20 minutes) for a new installation.

How the Settings Work Together

The above settings are used in conjunction with each other. A session will be closed based on the setting that has the shortest time period.

By default, the ClientServicesIdleClientTimeout setting is set to “MaxValue”, which means that in a default installation, the SessionTimeout setting is the one that is used.

This is also why configuring a higher timeout on the Microsoft Business Central Server alone does not work: the setting with the shortest time period always wins.
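To make the interplay concrete, here's a small Python sketch (illustration only — the real services parse these values as .NET TimeSpans) that parses the [dd.]hh:mm:ss[.ff] format and picks the effective timeout:

```python
from datetime import timedelta

def parse_timespan(value):
    """Parse a .NET-style [dd.]hh:mm:ss[.ff] timeout string into a timedelta.
    "MaxValue" means no timeout."""
    if value == "MaxValue":
        return timedelta.max
    days = 0
    if "." in value.split(":")[0]:
        # A leading "dd." day component is present.
        day_part, value = value.split(".", 1)
        days = int(day_part)
    hh, mm, ss = value.split(":")
    return timedelta(days=days, hours=int(hh), minutes=int(mm), seconds=float(ss))

def effective_timeout(idle_client_timeout, session_timeout):
    """The session closes after the shorter of the two configured periods."""
    return min(parse_timespan(idle_client_timeout), parse_timespan(session_timeout))
```

With the defaults, effective_timeout("MaxValue", "00:20:00") comes out to 20 minutes, which matches the behavior described above.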

Summary

The ClientServicesIdleClientTimeout and SessionTimeout settings work together to determine the timeout of the Web Client. The setting that has the shortest time period is the one that is used to determine when a session is closed.

This means that in a default installation you can leave the ClientServicesIdleClientTimeout at its default value, and just change the SessionTimeout setting on the Microsoft Business Central Web Server to your desired timeout.

You can read more on this and other configurations you can set here.

Happy coding!

 

AL Extensions: Translate Your Solution With the Multilingual App Toolkit Editor

In this post, I showed you how you can use Microsoft Dynamics Lifecycle Services to translate your AL project into a variety of languages.

As with most things, there's more than one way to go about it. This post will look at the Microsoft Multilingual App Toolkit. The toolkit is integrated into Visual Studio, but there is also a standalone version, called the Multilingual App Toolkit Editor.

With this tool you can manually do your translation and/or you can connect it to the Microsoft Translator service via Azure, which is what I will describe in this post.

Here’s how…

Download and install the Multilingual App Toolkit Editor.
https://developer.microsoft.com/en-us/windows/develop/multilingual-app-toolkit

If all you want to do is work offline and manually do your translations, you can stop here. Continue on if you want to connect to the translation service in Azure, but note that you do need an active Azure subscription for this.

Enable the Translator Text API in Azure.
Using the Azure portal, do the following to add the Translator Text API to your Azure subscription:

  1. Choose “Create a Resource“.
  2. Search for “Translator Text API” and select it.
  3. On the Translator Text API blade, press the Create button to begin configuring the subscription.
  4. Fill in the fields accordingly for the API by giving it a name, pricing tier, etc. Note that there is a free tier option that lets you translate up to 2 million characters per month. Additional pricing info can be found here.
  5. Press the Create button to deploy the API to your Azure subscription. This might take a few minutes.

Get your Translator Text API authentication key.
Once the API has been deployed, you need to get your subscription key so that the Multilingual Tool can authenticate and connect to it.

  1. In the Azure portal, select “All Resources” and select the Azure subscription that you deployed the API to.
  2. In the list of resources, click on the Translator Text API service that you deployed.
  3. In the Translator Text API blade, select Keys.
  4. Copy one of the 2 keys that are associated with the service. You will need this key value in the next step.

Add Multilingual App Toolkit Editor credentials.
Now that we have the Translator Text API up and running, and the Multilingual App Toolkit Editor installed, we need to configure the authentication. We do that using the Windows Credential Manager.

  1. On your Windows machine, launch Credential Manager.
  2. Select Windows Credentials.
  3. Select Add a generic credential.
  4. Enter the following values:
    • Internet or network address: Multilingual/MicrosoftTranslator
    • User name: Multilingual App Toolkit
    • Password: <the Translator Text API key that you retrieved earlier>
  5. Click OK.
  6. Close Credential Manager.
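If you prefer to script this configuration step, the same generic credential can be added from a command prompt with the built-in Windows cmdkey tool (the target and user name must match the values above exactly; substitute your own API key):

```
cmdkey /generic:Multilingual/MicrosoftTranslator /user:"Multilingual App Toolkit" /pass:<your Translator Text API key>
```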

Ok, now that we have everything installed, deployed, and configured, we can open up the Multilingual App Toolkit Editor (search Multilingual Editor in your Start menu) and translate the XLF file from our AL project. You can learn about generating this file here.

  1. Copy the auto-generated ‘.g.xlf‘ file to create a new file in the same folder. Rename the file based on the AL standards here.
  2. Edit the new file and update the ‘target-language‘ property to be the language that you are translating the file to (e.g. fr-CA).
  3. Close and save the file.
  4. Using the Multilingual App Toolkit Editor, select Open and select your new file.
  5. From the ribbon, select Translate > Translate All. The toolkit will now use the Translator Text API in Azure to translate the file based on the source and target languages. This might take a few minutes depending on the number of items that need to be translated in your solution.
  6. Once the translation is done you can manually review and edit any of the translations if you wish.
  7. Close and save the file.

Now you have your new translation file. Simply repeat the steps to generate each translation that your AL solution requires!

Submitting to AppSource
If you are submitting your solution to AppSource, even if you do not need multi-language support in your solution, you still must provide (at a minimum) a translation file for the base language (e.g. en-US) in which your solution is published.

Note that the auto-generated ‘.g.xlf’ file is NOT a properly formatted translation file and your solution will not pass validation if you do not create at least the base language file.

In the pic below you have the raw ‘.g.xlf’ file as it gets created during the AL project build process. As you can see, there is only a ‘source‘ property for the message control even though the ‘target-language‘ of the file is set to ‘en-US’:

[Screenshot: auto-generated translation file format]

In a properly formatted translation file, you will have both the ‘source‘ and the ‘target‘ properties:

[Screenshot: properly formatted translation file]
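In XLIFF terms, the difference is the presence of a ‘target‘ element alongside each ‘source‘. A hypothetical trans-unit (the id and text here are made up; real AL ids are longer) in each format:

```xml
<!-- raw .g.xlf: only a source element, no target -->
<trans-unit id="Sample - Property 123" translate="yes">
  <source>Posting completed successfully.</source>
</trans-unit>

<!-- properly formatted translation file: source and target -->
<trans-unit id="Sample - Property 123" translate="yes">
  <source>Posting completed successfully.</source>
  <target>Posting completed successfully.</target>
</trans-unit>
```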

In addition to the formatting, you can’t rely on editing the ‘.g.xlf’ file because it gets overwritten each time you build your AL project.

In short, use the ‘.g.xlf’ file ONLY as the source for generating other translation files.

EDIT: I was just informed that fellow MVP Tobias Fenster‘s ALRunner VS Code extension can (among many things) convert the ‘.g.xlf’ translation file into a properly formatted file. Quick and easy! Check it out here!

Happy coding!

AL Extensions: Translate your solution using Microsoft Dynamics Lifecycle Services

Hi everyone!

Today’s post is about language translations. I’ve not really done much with them in the past, but translation always seemed like a very tedious thing to have to do.

This post though is going to show you how to do it quite quickly, and if all goes well you can translate your AL extension in a matter of minutes.

Pre-Requisite

Note that in order to perform the tasks in this post, you will need access to Microsoft Dynamics Lifecycle Services (LCS). If you do not have access, contact your system administrator or the person who manages your CustomerSource/PartnerSource account, as LCS access is tied to those accounts.

Let’s Get Started

Assuming you have already developed your AL extension, you need to enable the new translation feature. You can do that by adding the following line to your app.json:

"features": ["TranslationFile"]

This feature will cause a new translation file to be generated when the extension is built. The translation files are created using the standard XLIFF format that is used to pass data between systems for the purposes of language translation. For more info on XLIFF, check here. You will find the translation file at the root of your AL project in a Translations folder.
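For reference, the generated file is a standard XLIFF 1.2 document. A trimmed sketch of the outer structure, shown here after the target language has been set (the attribute values are examples only):

```xml
<?xml version="1.0" encoding="utf-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file datatype="xml" source-language="en-US" target-language="fr-CA" original="My Extension">
    <body>
      <group id="body">
        <!-- one trans-unit per caption, label, etc. -->
      </group>
    </body>
  </file>
</xliff>
```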

You’re also going to need to change all of your ML properties as they are being deprecated (as of this post though they are still available). You can read more about that here.

Ok, now that you have enabled the new translation file feature and updated all of your ML properties, build your extension. Once that completes, look for the newly created XLF file in the Translations folder. This is the file that we need to submit to the translation service.

The Translation Service

Before we get into submitting the file, note that in my solution my base language is English (US). I’ll be translating that to Canadian French. Substitute the appropriate languages for your solution.

Here are the steps to perform the translation:

[Screenshot: Translation Service tile]

  • Log in to LCS and open the Translation Service tile.
  • Select the ‘+’ button at the top left to make a new request, and fill in the request based on your desired translation. Make sure to select Microsoft Dynamics NAV as the product and the appropriate product version.

[Screenshot: translation request form]

  • Once you have filled in your request, press Create to submit the request. At this point you just wait. You should receive an email from LCS stating that “A new request has been created.”
  • After a short period of time (for me this was only about 2 minutes!!) you will receive another email from LCS stating “The request has been completed.”
  • Follow the link in the email and you will be brought to a page that shows you the details of your translation request.
  • Click Request output at the left side of the screen.
  • Click the DownloadOutputLink link to download your translated XLF file, and extract it to the Translations folder in your AL project. It will have a new file name, so it should not overwrite your existing file. Do not remove the original XLF file, as it still represents the base language of your extension!

That’s it!

Now all you have left to do is to rebuild your extension with the new translation file and voila…..you now have a multi-language extension! Test it out by changing to the appropriate language in the web client.

Ongoing Maintenance

Once you have non-base translation files in your AL project, they do not get updated when the extension is built. For example, if you add a new field with a caption and build your extension, the base language XLF will get updated to include the new field. Your non-base XLF files will not, so you will need to revisit LCS and submit a new base language file to get those updated.

Hopefully this service works as well for you as it seems to for me. I was actually quite shocked at how fast I got the translated file back.

That’s all for now, happy coding!

AL Extensions: Accessing the Device Camera

If you’ve been doing any V2 extension development, you are likely aware that we cannot use any .Net interop in our code.

While on-premises V2 development will eventually gain access to .Net variable types, if you’re coding your extension to run in AppSource, you will remain locked away from using .Net interop in your code, because the risk of these external components is too large for shared cloud servers.

Unfortunately this means that we lost the ability to interact with the device camera, as it was accessed using .Net.

In C/AL, the code to take a picture with the device camera looked like this:

TakeNewPicture()
   IF NOT CameraAvailable THEN
      EXIT;
   CameraOptions := CameraOptions.CameraOptions;
   CameraOptions.Quality := 50;
   CameraProvider.RequestPictureAsync(CameraOptions);

It’s simple enough code, but the problem in the above example is that both CameraProvider and CameraOptions are .Net variables, and therefore cannot be used in V2 extension development.

I’m happy to say though that this problem has been resolved. Yes, the underlying architecture still uses .Net controls, but Microsoft has introduced a new Camera Interaction page which acts basically like an API layer. Through this API layer you can interact with the .Net camera components just as you did in C/AL.

Not a huge change to wrap your head around at all. In your extension you will code against the Camera Interaction page instead of against the .Net controls directly. Inside the page are all the original camera functions that you were used to using before.

This greatly simplifies our extension code and allows us now to use the camera from our extensions.

The code to take a picture would now look like this:

local procedure TakePicture();
    var
        CameraInteraction: Page "Camera Interaction";
        PictureStream: InStream;
    begin
        CameraInteraction.AllowEdit(true);
        CameraInteraction.Quality(100);
        CameraInteraction.EncodingType('PNG');
        CameraInteraction.RunModal;
        if(CameraInteraction.GetPicture(PictureStream)) then begin
            Picture.ImportStream(PictureStream, CameraInteraction.GetPictureName());
        end;
    end;

That’s it! Now you can use the device camera from your V2 extensions.

Happy coding!

Enable Personalization in the Dynamics NAV 2018 Web Client

The recent release of Microsoft Dynamics NAV 2018 has brought a lot of improvements to the Web Client, one of those being the ability for users to (finally!) do personalization directly in the client. No longer do they need to jump over to the Windows Client for that!

If you have installed NAV 2018 though, you might be wondering how you do the personalization. Well….it’s not enabled by default.

To enable it, you need to modify the Web Client configuration, which is done within the new navsettings.json file. Yes, out with the old web.config and in with the new json-based file! Read more about this here.

You have 2 options for changing the configuration:

Edit Configuration File Directly

To edit the Web Client configuration directly, open the navsettings.json file and add the following line:

"PersonalizationEnabled": "True"

The default location for the json file is here:
%systemroot%\inetpub\wwwroot\[WebServerInstanceName]
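For context, the setting sits alongside the other Web Client keys in the file. A trimmed sketch of what that looks like (your file will contain many more keys, and the exact container name may vary between builds):

```json
{
  "NAVWebSettings": {
    "ServerInstance": "DynamicsNAV110",
    "PersonalizationEnabled": "True"
  }
}
```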

After you have changed the file, save it, and then restart the Web Client website via IIS, or by executing iisreset at the command prompt.

Using PowerShell

As I’m a huge fan of PowerShell, this is my preferred method of doing pretty much anything. Using the Dynamics NAV 2018 Development Shell (in admin mode of course), you can use the Set-NAVWebServerInstanceConfiguration commandlet to update Web Client configuration.

To enable personalization, you would run the commandlet like this:

Set-NAVWebServerInstanceConfiguration `
     -Server [MyComputer] `
     -ServerInstance [NAVServerInstanceName] `
     -WebServerInstance [MyNavWebServerInstance] `
     -KeyName PersonalizationEnabled `
     -KeyValue True

For more details on the full commandlet syntax, look here.

Performing Personalization

Once you do one of the above steps, you’ll be able to log into the Web Client and select the Personalize action, which is found at the top of the Web Client under the settings cog:

[Screenshot: the Personalize action under the settings cog]

That’s all there is to it.

Happy coding!

AL Extensions: Importing and Exporting Media Sets

One of the things that has changed when you are building V2 extensions for a cloud environment is that you cannot access most functions that work with physical files.

This presents a bit of a challenge when it comes to working with the media and media set field types, as the typical approach is to have an import and export function so that a user can get pictures in and out of the field.

An example of this is the Customer Picture fact box that’s on the Customer Card:

[Screenshot: Customer Picture fact box with import and export actions]

As you can see, the import and export functions in C/Side leverage the FileManagement codeunit in order to transfer the picture image to and from a physical picture file. These functions are now blocked.

So…..we have got to take another approach. Enter streams.

Using the in and out stream types we can recreate the import and export functions without using any of the file based functions.

An import function would look like the following. In this example, the Picture field is defined as a Media Set field.

local procedure ImportPicture();
var
   PicInStream: InStream;
   FromFileName: Text;
   OverrideImageQst: Label 'The existing picture will be replaced. Do you want to continue?', Locked = false, MaxLength = 250;
begin
   if Picture.Count > 0 then
     if not Confirm(OverrideImageQst) then
       exit;

  if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FromFileName, PicInStream) then begin
    Clear(Picture);
    Picture.ImportStream(PicInStream, FromFileName);
    Modify(true);
  end;
end;

The UploadIntoStream function prompts the user to choose a local picture file, and from there we upload it into an InStream. At no point do we ever put a physical file on the server. Also note that the above example will always override any existing picture. You do not have to do this, as media sets allow for multiple pictures; I’m just recreating the original example taken from the Customer Picture page.

For the export we have to write a bit more code. When using a Media Set field, we do not have access to any system function that allows us to export to a stream. To deal with this, all we need to do is loop through the media set and get each of the corresponding media records. Once we have those, we can export each of them to a stream.

That would look like this:

local procedure ExportPicture();
var
   PicInStream: InStream;
   Index: Integer;
   TenantMedia: Record "Tenant Media";
   FileName: Text;
begin
   if Picture.Count = 0 then
      exit;

   for Index := 1 to Picture.Count do begin
      if TenantMedia.Get(Picture.Item(Index)) then begin
         TenantMedia.CalcFields(Content);
         if TenantMedia.Content.HasValue then begin
            FileName := TableCaption + '_Image' + Format(Index) + GetTenantMediaFileExtension(TenantMedia);
            TenantMedia.Content.CreateInStream(PicInStream);
            DownloadFromStream(PicInStream, '', '', '', FileName);
         end;
      end;
   end;
end;

We use the DownloadFromStream function to prompt the user to save each of the pictures in the media set. As in our first example, there are no physical files ever created on the server, so we’re cloud friendly!

You may notice that I use the function GetTenantMediaFileExtension in the export example to populate the extension of the picture file. Since the user can upload a variety of picture file types, we need to make sure we create the file using the correct format.

The function to do this is quite simple, however there is no current function in the product to handle it, so you’ll have to build this yourself for now. Hopefully in the near future this function will be added by Microsoft.

local procedure GetTenantMediaFileExtension(var TenantMedia: Record "Tenant Media"): Text;
begin
   case TenantMedia."Mime Type" of
      'image/jpeg' : exit('.jpg');
      'image/png' : exit('.png');
      'image/bmp' : exit('.bmp');
      'image/gif' : exit('.gif');
      'image/tiff' : exit('.tiff');
      'image/wmf' : exit('.wmf');
   end;
end;

Until next time, happy coding!

 

Dynamics NAV 2018 Available Now

Microsoft Dynamics NAV 2018 is now available for download!! You can download it here, or visit the Get Ready For Dynamics NAV site for more information.

Other resources of interest:

Happy coding!

 

Dynamics NAV 2018 on the way!

After delivering a message at Directions NA 2017 that the typical October release of Dynamics NAV would be delayed until spring of 2018, an article posted by Alysa Taylor (GM of Global Marketing) now confirms that there will be a release of Dynamics NAV 2018 by the end of the current calendar year.

Check out the full article here.

This is a great response by Microsoft to address what was largely a negative reaction to the Directions announcement, and confirms that they are listening to our feedback!

Find Out if an Extension is Installed, part 2

“Hi, this is Library, is Extension 1 home? What about Extension 2, are they home too?”

In my previous post on this topic, I explained how you can use the NAV App Installed App system table to see if a specific extension is installed. I later found out though that the ability to access that table may not always be available in the Dynamics 365 for Financials platform, so back to square one. I want a stable long-lasting solution.

Alas…events!

First…some background on why I need this functionality. I’m developing 2 extensions that share a common library, but I want to develop these extensions so that they are independent, meaning that a customer is able to install only one of the extensions if they choose to, and not be forced to install both. I also need certain features in each extension to act differently depending on if the other extension is installed or not. I know….never simple. 🙂

For those that are not aware, when you submit an extension to AppSource, you are able to submit along with it a library extension. Multiple extensions can be dependent on the same library, which makes it easy to deliver foundation type functions that are shared amongst your extensions. The library extension will not be seen in AppSource, but it will be automatically installed when any of the dependent extensions are installed.

What this also means is that when I am developing in the context of one of my “functional” extensions, I am able to directly reference objects that are contained in my library because the library is guaranteed to exist when the functional extension is installed. What I cannot do though is the opposite, because although the library extension knows that at least one functional extension was installed, it does not know directly which one it was. Make sense!? Clear as mud I know. 🙂

In the example below, let’s assume that I have a “Library” extension, and two functional extensions, brilliantly named “Extension 1” and “Extension 2”.

First, I need to create a codeunit in my library extension that will act as the “extension handler” so to speak. In other words it will house the functions required to determine what extensions are installed. In my library codeunit, I’ll add the following functions:

CheckIsExtensionInstalled(ExtensionAppID : GUID) : Boolean
IsInstalled := FALSE;
IsExtensionInstalled(ExtensionAppID,IsInstalled);
EXIT(IsInstalled);

GetExtension1AppID() : GUID
EXIT('743ba26c-1f75-4a2b-9973-a0b77d2c77d3');

GetExtension2AppID() : GUID
EXIT('a618dfa7-3cec-463c-83f7-7d8f6f6d699b');

LOCAL [IntegrationEvent] IsExtensionInstalled(ExtensionAppID : GUID;VAR IsInstalled : Boolean)

The above functions allow me to call into the codeunit to determine if either of my functional extensions is installed. The GUIDs used above are samples; you should use the same GUID that is used in the corresponding extension manifest file.

The piece that makes this all possible is the published event IsExtensionInstalled. From within each functional extension, I can subscribe to that event so that each extension can “answer” the library when it asks if it’s installed.

To do that, I create 2 more codeunits, one in each functional extension. These codeunits will contain a subscriber to the event that we published in the library codeunit. This way, if the extension is installed, its subscriber will respond to the event and let the library know that it is installed. If the extension is not installed then there won’t be anyone home to answer that call.

Extension 1

LOCAL [EventSubscriber] OnCheckIsExtensionInstalled(ExtensionAppID : GUID;VAR IsInstalled : Boolean)
IF ExtensionAppID = ExtensionHandler.GetExtension1AppID THEN
  IsInstalled := TRUE;

Extension 2

LOCAL [EventSubscriber] OnCheckIsExtensionInstalled(ExtensionAppID : GUID;VAR IsInstalled : Boolean)
IF ExtensionAppID = ExtensionHandler.GetExtension2AppID THEN
  IsInstalled := TRUE;

So how do we use all of this? Easy, of course. The example below shows code that you could use from either of the functional extensions, so that your extensions can act differently depending on what other extensions are installed.

IF LibraryCodeunit.CheckIsExtensionInstalled(LibraryCodeunit.GetExtension1AppID) THEN
  MESSAGE('Extension 1 is installed.');

IF LibraryCodeunit.CheckIsExtensionInstalled(LibraryCodeunit.GetExtension2AppID) THEN
  MESSAGE('Extension 2 is installed.');

There, easy right? If you want to try it out yourself, you can grab the above example on GitHub here.

Now, while you can do this if you own all of the source code for each extension, you cannot use this solution to determine if any random extension is installed, since you need to add the subscriber function to the functional extension’s code. But if you are developing multiple extensions and need to know which of them are installed, this solution works wonderfully!
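The trick isn’t AL-specific. Here is the same idea as a minimal Python sketch (class and function names are my own, and the GUIDs are the sample values from above): the library raises an event, and only installed extensions are around to answer it.

```python
EXTENSION_1_APP_ID = "743ba26c-1f75-4a2b-9973-a0b77d2c77d3"
EXTENSION_2_APP_ID = "a618dfa7-3cec-463c-83f7-7d8f6f6d699b"

class ExtensionHandler:
    """Plays the role of the library codeunit: publishes the
    'is this extension installed?' event to whoever subscribed."""
    def __init__(self):
        self.subscribers = []  # populated only by extensions that are installed

    def check_is_extension_installed(self, app_id):
        # Raise the event; an installed extension whose ID matches answers True.
        return any(subscriber(app_id) for subscriber in self.subscribers)

# Extension 1's subscriber codeunit: it exists only when Extension 1 is installed.
def extension1_subscriber(app_id):
    return app_id == EXTENSION_1_APP_ID

handler = ExtensionHandler()
handler.subscribers.append(extension1_subscriber)  # "install" Extension 1 only
```

Asking about Extension 1 now returns True, while Extension 2 has no subscriber registered, so nobody is home to answer and the check returns False.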

Happy coding!

Find Out if an Extension is Installed

EDIT: I’ve just been informed that users will not have access to the “NAV App Installed App” system table in the near future, so while the following will work for a typical NAV 2017 installation, it will not work for a Dynamics 365 extension.

EDIT2: I’ve posted a much better solution here!!

Having already created and published one extension, I am now in the process of creating a second extension, and in some cases, I want my extensions to act differently depending on if the other extension is installed or not. I could easily make each extension dependent on each other so that I know they’re always both installed, but where’s the fun in that!!??

You can do this by using the NAV App Installed App system table (2000000153). This table lists all of the extensions that are installed within the tenant.

Now, before I go further, I will mention that there is a function named IsInstalled in Codeunit 2500 (NavExtensionInstallationMgmt) that you could call, but this function accepts a Package ID, which is different than the Application ID of the extension. I prefer to use the Application ID, as I am in control of what that value is through the extension packaging process, and I like that control. So……because of this, I am not going to use the built-in function.

Here’s what you need to do:

First, create a local function that returns true/false depending on if a given Application ID is installed or not.

LOCAL CheckIfExtensionInstalled(AppID : GUID) : Boolean
EXIT(NAVAppInstalledApp.GET(AppID));

Next, we need to write local functions to return our extension Application IDs. You can get the GUID from the extension manifest file when you build the NAVX file. Repeat this process and create a function for each extension that you want to see if it’s installed.

LOCAL GetExtension1AppID() : GUID
//-- AppID: GUID
EVALUATE(AppID,'e7deb9a9-6727-4157-838e-bcf4a0853942');
EXIT(AppID);

Finally…….create a global function that will check for each extension. We will call this function from throughout our application wherever we need our application to act accordingly.

Extension1IsInstalled() : Boolean
EXIT(CheckIfExtensionInstalled(GetExtension1AppID));

Now, all we need to do is use the functionality in our code, such as the following example:

IF Extension1IsInstalled THEN BEGIN
  //-- do something
END ELSE BEGIN
  //-- do something else
END;

One thing to note: you will probably also need to give your codeunit (or whatever object you added the above functions to) permission to read the NAV App Installed App table, as the average user does not typically have this permission.