
Replacing Base Application Reports

Maybe I didn’t look hard enough, but I went looking through the Microsoft Docs online help for Business Central for a project I’m working on, and I could not find any information on this. I did find it listed in the AL Language Extension changelog, so I figured I’d throw up a quick post to get this out there in case you don’t read the changelog. 🙂

With the Business Central Fall 2018 release, it’s now possible to override base reports with your AL extension. Not just printed reports, but processing reports too!

You can do that by creating a subscriber to the OnAfterSubstituteReport event in the ReportManagement codeunit (I thought we wanted to get rid of management codeunits 😉!?). It’s a very straightforward bit of code, and it looks like this:

codeunit 50100 "Replace Reports"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::ReportManagement, 'OnAfterSubstituteReport', '', false, false)]
    local procedure OnAfterSubstituteReport(ReportId: Integer; var NewReportId: Integer)
    begin
        if ReportId = Report::"Customer Listing" then
            NewReportId := Report::"My New Customer Listing";
    end;
}

As you can see, it’s very simple logic that just replaces the report being executed by the system.

The above logic can be enhanced to check whether the report has already been substituted by another extension. You can do that by comparing the ReportId and NewReportId parameters before making your change. If those parameters do not match, and NewReportId is not -1, then the report has already been replaced by another extension, and you’ll have to decide how to handle that conflict.
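A minimal sketch of an enhanced subscriber (same event as above, with that check added) might look like this:

[EventSubscriber(ObjectType::Codeunit, Codeunit::ReportManagement, 'OnAfterSubstituteReport', '', false, false)]
local procedure OnAfterSubstituteReportWithCheck(ReportId: Integer; var NewReportId: Integer)
begin
    // If NewReportId already differs from ReportId (and is not -1), another
    // extension has substituted this report; in this sketch we simply back off.
    if (NewReportId <> ReportId) and (NewReportId <> -1) then
        exit;
    if ReportId = Report::"Customer Listing" then
        NewReportId := Report::"My New Customer Listing";
end;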

Remember: when you are replacing a base report that is called from code, make sure the replacement report is compatible with how it is called, or you’ll get runtime errors.

Oh, one more piece of good news: the event we subscribed to above is called every time a user clicks a report action on a page, and also when any of the following commands are executed in code for a report object:

  • Run
  • RunModal
  • SaveAsHtml
  • SaveAsXml
  • SaveAsPdf
  • SaveAsExcel
  • SaveAsWord
  • RunRequestPage
  • Execute
  • Print
  • SaveAs

That’s all for today. Happy coding!!

Controlling Session Timeouts in the Web Client

Since moving to the Web Client, have you seen this at all?
[Screenshot: session timeout message in the Web Client]
You probably have, and has it popped up while you were running a long processing routine? Yup, me too.

What’s Happening?

When you kick off those long running routines, your Web Client basically sits idle while the Service Tier is doing the processing. Once that idle time reaches a particular limit, the Service Tier shuts down that connection.

Changing the Timeout

When using the Windows Client it’s easy to set a few settings (see here) in the Service Tier configuration in order to adjust the session timeout for a user. Moving forward it remains just as easy, but if you’ve tried using the same settings as in the past, you’ve certainly noticed that they don’t work for the Web Client.
The session timeout for the Web Client is now controlled by settings in the Service Tier and in the Web Client configuration. I wish there was just a single setting, but maybe once the Windows Client is gone we’ll see some of the old settings go away as well.
In order to change the timeout for the Web Client, you need to change two settings. Thankfully though, we can easily change these settings using a couple of PowerShell commands.

ClientServicesIdleClientTimeout

This setting is found in the Microsoft Business Central Server configuration.

Set-NAVServerConfiguration DynamicsNAV130 -KeyName ClientServicesIdleClientTimeout -KeyValue "00:20:00"

The timeout uses this format: [dd.]hh:mm:ss[.ff]
dd – number of days
hh – number of hours
mm – number of minutes
ss – number of seconds
ff – hundredths of a second

Note

  • You can also set the setting to “MaxValue” in order to indicate no timeout. This is also the default value for a new installation.
  • You must restart the service tier for the new setting to take effect.

SessionTimeout

This setting is found in the navsettings.json file on the Microsoft Business Central Web Server.

Set-NAVWebServerInstanceConfiguration -WebServerInstance DynamicsNAV -KeyName SessionTimeout -KeyValue "00:20:00"

The timeout uses the same [dd.]hh:mm:ss[.ff] format described above.

Note

  • The default value is “00:20:00” (20 minutes) for a new installation.

How the Settings Work Together

The above settings are used in conjunction with each other: a session will be closed based on whichever setting has the shortest time period.
By default, the ClientServicesIdleClientTimeout setting is set to “MaxValue”, which means that in a default installation the SessionTimeout setting is what will be used.
This is also why configuring only the Microsoft Business Central Server to a higher timeout does not work: the setting with the shortest time period always wins.
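As a quick sketch (using the web server instance name from the example above), extending a default installation’s Web Client timeout to one hour is therefore a single command:

# ClientServicesIdleClientTimeout defaults to MaxValue, so SessionTimeout wins.
Set-NAVWebServerInstanceConfiguration -WebServerInstance DynamicsNAV -KeyName SessionTimeout -KeyValue "01:00:00"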

Summary

The ClientServicesIdleClientTimeout and SessionTimeout settings work together to determine the timeout of the Web Client. The setting that has the shortest time period is the one that is used to determine when a session is closed.
This means that in a default installation you can leave the ClientServicesIdleClientTimeout at its default value, and just change the SessionTimeout setting on the Microsoft Business Central Web Server to your desired timeout.
You can read more on this and other configurations you can set here.
Happy coding!

Control Addins in AL

Hey everyone,
Quick post today to highlight the updated documentation for building control addins in AL.
The docs have recently been updated to include much more information regarding what properties and methods are available.
Check out the documentation here.
You will also want to check out the style guide, so that you can make sure that your control addin fits the look and feel of the new Business Central theme.
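To give you a feel for the object syntax before you dive into the docs, here is a bare-bones (and entirely hypothetical) control add-in declaration; see the documentation for the full list of available properties and methods:

controladdin MyControlAddin
{
    // Script, stylesheet, and image paths are relative to the AL project root.
    Scripts = 'scripts/main.js';
    StartupScript = 'scripts/startup.js';
    StyleSheets = 'stylesheets/style.css';
    RequestedHeight = 300;
    RequestedWidth = 500;

    // Raised from the JavaScript side and handled by the hosting AL page.
    event ControlReady();

    // Callable from AL and implemented in the JavaScript files above.
    procedure SetData(Value: Text);
}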
That’s all for now, happy coding!

AL Extensions: Managing Version Numbering With VSTS Builds

I wanted to do this post to expand a bit on something Soren Klemmensen covered in Part 2 of the blog series on automated builds with VSTS. We wanted to keep those original posts as simple as possible, but there are a few more topics we would like to expand upon.
In that particular post, he covered the PowerShell code needed to generate the AL extension using the ALC compiler tool, and in there is some interesting code around setting the version number of the extension package that gets created. That’s what I want to talk a bit more about here.
As we know, the version number that is assigned to the AL extension needs to be stored in the app.json file that sits in the AL project, and that value needs to be there before the compile process can be executed, like this:
[Screenshot: the version property in app.json]
Typically, this means that the developer is updating that value as needed before they compile and package the extension.
Wouldn’t it be nice to not have to have the developer do that every build? They’ve already got so much to worry about! 🙂
Well…it just so happens that one of the many great features in VSTS build definitions is the ability to automatically generate build numbers. Using a combination of static values and tokens, you’re able to generate build numbers in pretty much any format you’d like.
You can read up on what tokens are available here, and how you can use them to build whatever format of build number you want.
In VSTS, these build numbers can be used to tag checkins so that you can track which checkin represents a build, but we’re going to leverage this feature to build ourselves a properly formatted version number that we will inject into the AL extension during the build process. This way, the developer can sit back, relax, and wait for the VSTS magic to happen!
Here’s how we’re going to do it…

1. Update build definition

We need to update our build definition to create our version number for us. We do that by populating the Build number format box on the Options tab of the build definition. Here’s where you can use the VSTS tokens and/or static text.
For this walkthrough, my version number needs to meet the following criteria:

  1. It will follow the standard #.#.#.# formatting that is required for an AL extension.
  2. It will be based on the date that the build is executed.
  3. I need to ensure I always get a unique version number even if I generate multiple builds in a day.
  4. This is the first version of my extension, so I want to make sure the version number begins with 1.0.

Given all of this criteria, I’ll populate the field like this:
[Screenshot: the Build number format field]
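Based on those criteria, the value would be something along these lines (my reconstruction; check the token documentation for the exact syntax):

1.0.$(Date:yyyyMMdd)$(Rev:.r)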
As you can see, I am just hard-coding the “1.0.” so that every version I get begins the same. I’m using the date token (note the format can be changed if you like) so that the version contains the date on which the build is executed. Last but not least, I’m using the revision token to ensure that every time the build is executed I will get a unique version number. If you’re implementing Continuous Integration (and you really should!) you will certainly at some point have multiple builds done within the same day.
NOTE: The revision token is 1-based, so in my example the first build on a given day will end with .1 and not .0 (not what I would have preferred but what can you do).

2. Update build script

Remember I mentioned Soren’s post? In that post the following code is added to your build script. We only really need to update one line in this script, but I do want to talk about a few of the other lines so you know what’s going on here. The specific code we’re interested in is the line that sets the Version property:

$ExtensionAppJsonFile = ".\app.json"
$ExtensionAppJsonObject = Get-Content -Raw -Path $ExtensionAppJsonFile | ConvertFrom-Json
$Publisher = $ExtensionAppJsonObject.Publisher
$Name = $ExtensionAppJsonObject.Name
$ExtensionAppJsonObject.Version = '1.0.' + $env:Build_BuildID + '.0'
$ExtensionName = $Publisher + '_' + $Name + '_' + $ExtensionAppJsonObject.Version + '.app'
$ExtensionAppJsonObject | ConvertTo-Json | Set-Content $ExtensionAppJsonFile
$ALProjectFolder = $env:System_DefaultWorkingDirectory
$ALPackageCachePath = 'C:\ALBuild\Symbols'
Write-Host "Using Symbols Folder: " $ALPackageCachePath
$ALCompilerPath = 'C:\ALBuild\bin'
Write-Host "Using Compiler: " $ALCompilerPath
$AlPackageOutPath = Join-Path -Path $env:Build_StagingDirectory -ChildPath $ExtensionName
Write-Host "Using Output Folder: " $AlPackageOutPath
Set-Location -Path $ALCompilerPath
.\alc.exe /project:$ALProjectFolder /packagecachepath:$ALPackageCachePath /out:$AlPackageOutPath

The first two lines read the app.json file from the AL project into a PowerShell object using ConvertFrom-Json. This allows us to read and write any of the properties within the json file:

$ExtensionAppJsonFile = ".\app.json"
$ExtensionAppJsonObject = Get-Content -Raw -Path $ExtensionAppJsonFile | ConvertFrom-Json

Now what we want to do is update the Version property with the build number that our VSTS build definition generated. We can get that value by using one of the environment variables that VSTS makes available while the build process is running. There are a bunch of environment variables available to us (see here for more info) but the one that we’re interested in is Build.BuildNumber. The value in this variable is the value that VSTS generated based on the Build number format that you populated in your build definition. If you leave that field blank, then this environment variable would be empty.
This is where we need to replace some of the code from Soren’s post. The original code (shown below) sets the version number to Build.BuildID, which is the identifier that VSTS assigns to each build:

$ExtensionAppJsonObject.Version = '1.0.' + $env:Build_BuildID + '.0'

Since we want to control the version number format using our build definition, this is where we need to make a small change. We’ll replace the above line of code with the following that will update the Version property to Build.BuildNumber:

$ExtensionAppJsonObject.Version = $env:Build_BuildNumber

Notes

  • To access VSTS environment variables in your PowerShell script, begin the variable name with '$env:' and replace '.' with '_'.
  • Because these environment variables are only generated during the build process, you will not be able to manually run your PowerShell script to test their values.
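If you do need to verify the value, one simple option is to echo it from within the build script itself and check the build log (plain PowerShell, nothing VSTS-specific):

Write-Host "Build number: " $env:Build_BuildNumber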

Finally, we need to write our changes to the app.json file, as so far we’ve only made the changes in the PowerShell object. We use ConvertTo-Json for that:

$ExtensionAppJsonObject | ConvertTo-Json | Set-Content $ExtensionAppJsonFile

That’s It!

After you’ve done the above updates to your build definition and script, when you generate your build either manually or via trigger, your packaged AL extension will have version numbers similar to the following:

1.0.20180407.1   (first build of the day)
1.0.20180407.2   (second build of the day)
1.0.20180407.3   (third build of the day)
1.0.20180407.n   (nth build of the day)

Source code management

One obvious question if you implement your build process like this: if the version number is only updated at the time the build is done, what happens with source code management? To be honest, I do not track the version number of my extension in source code management at all.
Personally, I don’t see the value: when my build runs, it tags the checkin that represents that build, so at any point in time I can go back, find the source code for any build I’ve done, and rebuild it if I need to.
So what do I keep in source code management? I keep all of my extension versions permanently set to '0.0.0.0'. This means that no developer ever needs to touch the version property during the development stage. It also helps identify whether a database has a released version of my extension installed versus a pre-release development version, which does happen from time to time.
Hopefully you’ve found this useful and you’ve now updated your build definition to provide you the exact version number format that you want.
For a listing of all the automated build articles, see my post here.
That’s it for now, happy coding!

AL Extensions: Automated Builds with VSTS

Hello there,
I recently spent a few hours with Soren Klemmensen, whom I have known for many years. He was actually the one who introduced me to source code management, and at the time I swore I’d never be happy using it because it was just “development overhead”. Story for another day, but rest assured, that’s not at all how I feel now. I cannot live without Visual Studio Team Services (VSTS)!
In any case, aside from catching up on things, we decided to put together a series of articles on automating builds using VSTS and AL extensions. Having been the person who has spent countless hours manually doing “the build”, I cannot express how happy I was to push all of that off to automation so that “it just happens”.
Below is where you can find the articles. They’ll walk you through setting up a simple build to start, and from there we’ll build on it with some more advanced topics, such as having VSTS sign your app during the build!
Part 1: How to setup a build agent in VSTS
Part 2: Prepping your build server for creating Apps from source
Part 3: Creating a build definition
Part 4: Preparing your build server for code signing
Part 5: Updating your build server for code signing
There you have it, the first articles covering automating builds for AL extensions using VSTS.
Also, don’t forget that you can sign up for a VSTS account for free, and with that account you can have up to 5 developers use it, so for some, you just might be able to make use of the free account! There’s really no reason to not use it!
That’s all for now, this is not the end of the articles on this topic, so keep an eye out here and on Soren’s blog for more information in the future.
Happy coding….

AL Extensions: Translate your solution using Microsoft Dynamics Lifecycle Services

Hi everyone!
Today’s post is about language translations. I’ve not really done much with language translations in the past, but it always seemed as though it was a very tedious thing to have to do.
This post though is going to show you how to do it quite quickly, and if all goes well you can translate your AL extension in a matter of minutes.

Pre-Requisite

Note that in order to perform the tasks in this post, you will need access to Microsoft Dynamics Lifecycle Services (LCS). If you do not have access contact your system administrator, or the person that manages your CustomerSource/PartnerSource account, as the systems are linked in some way.

Let’s Get Started

Assuming you have already developed your AL extension, you need to enable the new translation feature. You can do that by adding the following line to your app.json:

"features": ["TranslationFile"]

This feature causes a translation file to be generated when the extension is built. The translation files are created using the standard XLIFF format that is used to pass data between systems for the purposes of language translation. For more info on XLIFF, check here. You will find the translation file in a Translations folder at the root of your AL project.
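If you’re curious what that file contains, each translatable string is represented by an XLIFF trans-unit element, roughly like this (the id and text below are purely illustrative):

<trans-unit id="Table 123 - Field 456 - Property Caption" size-unit="char" translate="yes">
  <source>Customer Name</source>
  <target>Nom du client</target>
</trans-unit>

In the generated base-language file you’ll only have the source text; supplying the target is exactly what the translation service does for you.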
You’re also going to need to change all of your ML properties as they are being deprecated (as of this post though they are still available). You can read more about that here.
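As a quick (hypothetical) illustration of that change for a table field: the old ML property carried translations inline, while the new property leaves translation to the .xlf files:

// Old style, now being deprecated - translations embedded in the property:
field(1; "Customer Name"; Text[50])
{
    CaptionML = ENU = 'Customer Name', FRC = 'Nom du client';
}

// New style - a single Caption; translations live in the .xlf files:
field(1; "Customer Name"; Text[50])
{
    Caption = 'Customer Name';
}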
Ok, now that you have enabled the new translation file feature and updated all of your ML properties, you need to build your extension. Once that completes, look for the newly created XLF file in the Translations folder. This is the file we need to submit to the translation service.

The Translation Service

Before we get into submitting the file, note that in my solution my base language is English (US). I’ll be translating that to Canadian French. Substitute the appropriate languages for your solution.
Here are the steps to perform the translation:

  • In LCS, open the Translation Service tile. [Screenshot: Translation Service tile]

  • Select the ‘+’ button at the top left to make a new request and fill in the request based on your desired translation. Make sure to select Microsoft Dynamics NAV as the product and the appropriate product version.

[Screenshot: the translation request form]

  • Once you have filled in your request, press Create to submit the request. At this point you just wait. You should receive an email from LCS stating that “A new request has been created.”
  • After a short period of time (for me this was only about 2 minutes!!) you will receive another email from LCS stating “The request has been completed.”
  • Follow the link in the email and you will be brought to a page that shows you the details of your translation request.
  • Click Request output at the left side of the screen.
  • Click the DownloadOutputLink link to download your translated XLF file, and extract it to the Translations folder in your AL project. It will have a new file name, so it should not overwrite your existing file. Do not remove the original XLF file, as that still represents the base language of your extension!

That’s it!

Now all you have left to do is rebuild your extension with the new translation file and voila: you now have a multi-language extension! Test it out by switching to the appropriate language in the web client.

Ongoing Maintenance

Once you have non-base translation files in your AL project, they do not get updated when the extension is built. For example, if you add a new field with a caption and build your extension, the base language XLF will get updated to include the new field. Your non-base XLF files will not, so you will need to revisit LCS and submit a new base language file to get those updated.
Hopefully this service works as well for you as it seems to for me. I was actually quite shocked at how fast I got the translated file back.
That’s all for now, happy coding!

AL Extensions: Accessing the Device Camera

If you’ve been doing any V2 extension development, you are likely aware that we cannot use any .Net interop in our code.
While on-premises V2 development will eventually gain access to .Net variable types, if you’re coding your extension to run in AppSource, you will remain locked out of .Net interop because the risk of these external components is too large for shared cloud servers.
Unfortunately this means that we lost the ability to interact with the device camera, as it was accessed using .Net.
In C/AL, the code to take a picture with the device camera looked like this:

TakeNewPicture()
   IF NOT CameraAvailable THEN
      EXIT;
   CameraOptions := CameraOptions.CameraOptions;
   CameraOptions.Quality := 50;
   CameraProvider.RequestPictureAsync(CameraOptions);

It’s simple enough code, but the problem in the above example is that both CameraProvider and CameraOptions are .Net variables, and therefore cannot be used in V2 extension development.
I’m happy to say though that this problem has been resolved. Yes, the underlying architecture still uses .Net controls, but Microsoft has introduced a new Camera Interaction page which acts basically like an API layer. Through this API layer you can interact with the .Net camera components just as you did in C/AL.
Not a huge change to wrap your head around at all. In your extension you will code against the Camera Interaction page instead of against the .Net controls directly. Inside the page are all the original camera functions that you were used to using before.
This greatly simplifies our extension code and allows us now to use the camera from our extensions.
The code to take a picture would now look like this:

local procedure TakePicture();
var
    CameraInteraction: Page "Camera Interaction";
    PictureStream: InStream;
begin
    CameraInteraction.AllowEdit(true);
    CameraInteraction.Quality(100);
    CameraInteraction.EncodingType('PNG');
    CameraInteraction.RunModal();
    if CameraInteraction.GetPicture(PictureStream) then
        Picture.ImportStream(PictureStream, CameraInteraction.GetPictureName());
end;

That’s it! Now you can use the device camera from your V2 extensions.
Happy coding!

AL Extensions: Importing and Exporting Media Sets

One of the things that has changed when you are building V2 extensions for a cloud environment is that you cannot access most functions that work with physical files.
This presents a bit of a challenge when it comes to working with the media and media set field types, as the typical approach is to have an import and export function so that a user can get pictures in and out of the field.
An example of this is the Customer Picture fact box that’s on the Customer Card:
[Screenshot: the Customer Picture fact box with Import/Export actions in C/Side]
As you can see, the import and export functions in C/Side leverage the FileManagement codeunit in order to transfer the picture image to and from a physical picture file. These functions are now blocked.
So we have to take another approach: enter streams.
Using the in and out stream types we can recreate the import and export functions without using any of the file based functions.
An import function would look like the following. In this example, the Picture field is defined as a Media Set field.

local procedure ImportPicture();
var
    PicInStream: InStream;
    FromFileName: Text;
    OverrideImageQst: Label 'The existing picture will be replaced. Do you want to continue?', Locked = false, MaxLength = 250;
begin
    if Picture.Count > 0 then
        if not Confirm(OverrideImageQst) then
            exit;
    if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FromFileName, PicInStream) then begin
        Clear(Picture);
        Picture.ImportStream(PicInStream, FromFileName);
        Modify(true);
    end;
end;

The UploadIntoStream function prompts the user to choose a local picture file, and from there we upload it into an InStream. At no point do we ever put the physical file on the server. Also note that the above example will always override any existing picture. You do not have to do this, as media sets allow for multiple pictures; I’m just recreating the original example taken from the Customer Picture page.
For the export we have to write a bit more code. When using a Media Set field, we do not have access to any system function that allows us to export to a stream. To deal with this, all we need to do is loop through the media set and get each of the corresponding media records. Once we have those, we can export each of them to a stream.
That would look like this:

local procedure ExportPicture();
var
    PicInStream: InStream;
    Index: Integer;
    TenantMedia: Record "Tenant Media";
    FileName: Text;
begin
    if Picture.Count = 0 then
        exit;
    for Index := 1 to Picture.Count do begin
        if TenantMedia.Get(Picture.Item(Index)) then begin
            TenantMedia.CalcFields(Content);
            if TenantMedia.Content.HasValue then begin
                FileName := TableCaption + '_Image' + Format(Index) + GetTenantMediaFileExtension(TenantMedia);
                TenantMedia.Content.CreateInStream(PicInStream);
                DownloadFromStream(PicInStream, '', '', '', FileName);
            end;
        end;
    end;
end;

We use the DownloadFromStream function to prompt the user to save each of the pictures in the media set. As in our first example, there are no physical files ever created on the server, so we’re cloud friendly!
You may notice that I use the function GetTenantMediaFileExtension in the export example to populate the extension of the picture file. Since the user can upload a variety of picture file types, we need to make sure we create the file using the correct format.

The function to do this is quite simple, however there is no current function in the product to handle it, so you’ll have to build this yourself for now. Hopefully in the near future this function will be added by Microsoft.

local procedure GetTenantMediaFileExtension(var TenantMedia: Record "Tenant Media"): Text;
begin
   case TenantMedia."Mime Type" of
      'image/jpeg' : exit('.jpg');
      'image/png' : exit('.png');
      'image/bmp' : exit('.bmp');
      'image/gif' : exit('.gif');
      'image/tiff' : exit('.tiff');
      'image/wmf' : exit('.wmf');
   end;
end;

Until next time, happy coding!


Learn AL: New string features

The AL language is bringing some nice features to Dynamics NAV developers. I want to cover a couple of the cool new string features.

  1. Function chaining
  2. Replace() function

The following example is something I’ve used in the past numerous times. It finds and replaces a substring within a string. Created in Visual Studio Code, but using good ol’ C/AL syntax, the code would look something like this:

NewString := DELSTR(OriginalString,STRPOS(OriginalString,FindString))
                + ReplaceWithString + COPYSTR(OriginalString,STRPOS(OriginalString,FindString) + STRLEN(FindString));

The above function is easy enough to create, but it was always something that I thought should have been part of the system functions. In AL, we now have the Replace() function, which can be used like this:

NewString := OriginalString.Replace(FindString, ReplaceWithString);

Oh by the way… invoking string functions directly from the variable!? Yes!
Using the “<variable>.<function>” syntax, we can now chain multiple functions together like this:

NewString := OriginalString.Replace(FindString, ReplaceWithString).ToLower();

Whoah! Pretty cool right!?

If you’re not familiar with function chaining, I should point out that the functions in the chain are applied to the string from left to right. Depending on what you are doing to the string, the order in which you define the functions in the statement will matter.
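For example, consider this quick sketch (string values are just illustrative):

// ToLower() runs first here, so Replace() never finds the upper-case 'NAV'.
NewString := OriginalString.ToLower().Replace('NAV', 'BC');

// Reversed: the replacement happens first, then the whole result is lower-cased.
NewString := OriginalString.Replace('NAV', 'BC').ToLower();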
This is some fantastic stuff and really brings Dynamics NAV development in line with current development practices.
Until next time, happy coding!

NAV Supporter's Blog

Back in February, Microsoft resurrected one of their old NAV developer blogs. It’s been renamed to the NAV Supporter’s Blog and as Microsoft puts it:

“This will be by and for people who support the Dynamics NAV products in one way or the other.”

This blog is not necessarily about new front-end features in the product, but more about tips and tools that can be used to better support and diagnose the product.

Recent posts include:
  • Multi-part series on a new NAV diagnostics toolkit
  • NAV performance troubleshooting
  • Build overviews for NAV 2013 through NAV 2017

The blog can be found here. Check it out!

Happy coding!
