Replacing Base Application Reports

Maybe I didn’t look hard enough, but I went looking at the Microsoft Docs online help for Business Central for a project I’m working on, and I could not find any information on this. I did find it listed in the AL Language Extension changelog, so figured I’d throw up a quick post to get this out there in case you don’t read the changelog. 🙂

With the Business Central Fall 2018 release, it’s now possible to override base reports with your AL extension. Not just printed reports, but processing reports too!

You can do that by creating a subscriber to the OnAfterSubstituteReport event in the ReportManagement codeunit (I thought we wanted to get rid of management codeunits 😉 !?) It’s a very straightforward bit of code and it looks like this:

codeunit 50100 "Replace Reports"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::ReportManagement, 'OnAfterSubstituteReport', '', false, false)]
    local procedure OnAfterSubstituteReport(ReportId: Integer; var NewReportId: Integer)
    begin
        if ReportId = Report::"Customer Listing" then
            NewReportId := Report::"My New Customer Listing";
    end;
}

As you can see, very simple logic that just replaces the report that is being executed by the system.

The above logic can be enhanced to check whether the report has already been substituted by another extension. You can do that by comparing the ReportId and NewReportId parameters before making your change: if they do not match, and NewReportId is not -1, then the report has already been replaced by another extension, and you’ll need to decide how you want to handle that conflict.
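
For example, a defensive version of the subscriber might look something like this (just a sketch; the codeunit number and report names are made up, and how you resolve a conflict is entirely up to you):

codeunit 50101 "Replace Reports Defensively"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::ReportManagement, 'OnAfterSubstituteReport', '', false, false)]
    local procedure OnAfterSubstituteReport(ReportId: Integer; var NewReportId: Integer)
    begin
        // NewReportId = -1 (or equal to ReportId) means no other extension has
        // substituted this report yet, so it is safe for us to do so.
        if (ReportId = Report::"Customer Listing") and
           ((NewReportId = -1) or (NewReportId = ReportId))
        then
            NewReportId := Report::"My New Customer Listing";
    end;
}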

Remember when you are replacing base reports, if the report is called in code, make sure you use a compatible report, or you’ll get run time errors.

Oh, one more piece of good news here… the event above that we subscribed to is called every time a user clicks on a report action on a page, and also when any of the following commands are executed in code for a report object:

  • Run
  • RunModal
  • SaveAsHtml
  • SaveAsXml
  • SaveAsPdf
  • SaveAsExcel
  • SaveAsWord
  • RunRequestPage
  • Execute
  • Print
  • SaveAs

That’s all for today. Happy coding!!

Multi-level App Dependencies

So….I’m working through a new build process for our ISV solution. Why? Because we’re finally breaking it down into many extensions!! More on that another day…..

We’ve created a ‘library’ app which will house all of our common functionality. Each of our functional apps will have a dependency on the library app. Further to this, each of our functional apps will have its own test app. The test app of course has a dependency on the app that it is testing.

Like this (arrows point at the app being depended on):

Test App 1 --> App 1 --> Library

So……you create your test app, add a dependency to the app you are testing and compile…..NOPE…..failure. 😦

In the above example, the dependency on Library also needs to be added to Test App 1.

Like this:

Test App 1 --> App 1 --> Library
Test App 1 --> Library

Does this make sense? Maybe. It forces the assumption that because App 1 has direct access to the functions and entities within Library, Test App 1 also needs that access. In my example above, that direct access to Library was not something we actually needed.

From the symbol perspective it perhaps makes a bit more sense. Adding the dependency forces the system to download the symbols for the dependent apps. If you just add the dependency for App 1 above, those symbols could be considered “incomplete” without also having the symbols for its dependencies, in this case Library.

I really (!!!) wish that we didn’t have to specify the extra dependencies. It would be nice if the compiler was able to figure out all of the downstream dependencies. An indirect dependency so to speak!?

The above scenario is fairly simple, but imagine this one:

  • ISV ‘AA’ creates ‘app b’, which has a dependency on ‘app a’.
  • ISV ‘BB’ creates ‘app c’ that extends ‘app b’. Two dependencies are needed here (a, b).
  • ISV ‘BB’ sells ‘app c’ to a customer, who then extends it even more with ‘app d’. This new app requires three dependencies (a, b, c).

Now look at all the apps that have to be updated when ISV ‘AA’ releases a new version of ‘app b’. In an agile SaaS world, rapid small incremental releases are a reality. Is this also going to be magnified once Microsoft breaks down the main application into smaller (and perhaps dependent) apps?

Oh… in case you don’t know or don’t remember how to add dependencies to your app, it’s done in the app.json file of each app. Below is an example of what the app.json could look like for ‘Test App 1’ from the example above.
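
Something along these lines; it’s trimmed down to the relevant bits, and every ID, name, and version number here is just a placeholder:

{
  "id": "00000000-0000-0000-0000-000000000003",
  "name": "Test App 1",
  "publisher": "My Company",
  "version": "1.0.0.0",
  "dependencies": [
    {
      "appId": "00000000-0000-0000-0000-000000000002",
      "name": "App 1",
      "publisher": "My Company",
      "version": "1.0.0.0"
    },
    {
      "appId": "00000000-0000-0000-0000-000000000001",
      "name": "Library",
      "publisher": "My Company",
      "version": "1.0.0.0"
    }
  ]
}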

So how did I find this? As I mentioned, I’m working on a new build pipeline for our apps. My goal is for the pipeline to handle the dependencies dynamically, updating all of the versions in the app.json files without the developer having to worry about it. After all, the build numbers are generated by Azure DevOps, so the developer is not going to know what the build numbers of the dependent apps will be. This little dependency hiccup threw a bit of a wrench into my original plans.

More to come on the build pipeline later……..but spoiler alert…..it is working!!

Until next time…happy coding!

Dynamics 365 + Power Platform April 2019 Release Notes Now Available!

Microsoft announced today that the April 2019 release notes for Dynamics 365 and the Power Platform are now available for download. These notes cover all products within these platforms, but for readers of this blog, you’re likely most interested in what’s coming for Business Central.

You can get the release notes here.

A few highlights of new end-user functionality for Business Central are:

  • Base application as an app (!!)
    • Yes, one of the things I’m looking forward to most is that the base application is moving from C/AL objects to two AL extensions: system and application. Yes, the end of C/SIDE is coming.
  • Add multiple items to a sales or purchase order at once.
  • Name and description fields on master/document/journal records increased from 50 to 100 characters.
    • Watch out for this one in your ISV solutions as you may need to increase your field sizes to match!
  • New Physical Inventory Order and Physical Inventory Recording interfaces to enhance physical inventory functionality.
  • Set an expiration date for your sales quotes.
  • Merge duplicate customer and vendor records (!!).
  • Configurable reports for warehouse documents.
  • Save your filtered list views (!!).
  • Document focus mode – expand the item section on documents to see more data for faster entry.
  • More keyboard shortcuts – show/hide fact box, add item, previous/next navigation, etc.
  • Adjust field importance via personalization.
  • Page inspection – See all data elements of the current page record…….think the old ‘page zoom’ but even better!

The April 2019 release also includes improvements for developers working with AL extensions, such as:

  • Optimizing the experience of using VS Code with large projects.
  • New Outline view to show the symbol tree in the current editor.
  • The in-client designer no longer takes dependencies on all apps, only the ones that have actually been used in the designer.
  • Attach to an existing client session for debugging.
  • Code Actions – have VS Code suggest ways to improve your code.
  • More protection for ISVs over their IP.
  • Standard web API moving out of beta. Will support webhooks, OAS 3.0, OData v4, and versioning.

As noted in the release notes, the above features are things that are slated to be released anywhere between April 2019 and September 2019. Some of the features may be available for preview as early as February 2019.

The above is just a highlight of what’s coming down the road for Business Central. As you can see, we are in for quite a lot of new features, plus quite a few old features that are being resurrected in a new and better way.

Until next time, happy coding!

Controlling Session Timeouts in the Web Client

Since moving to the Web Client, have you seen this at all?

[Screenshot: the Web Client session expired message]

You probably have, and has it popped up while you were running a long processing routine? Yup, me too.

What’s Happening?

When you kick off those long running routines, your Web Client basically sits idle while the Service Tier is doing the processing. Once that idle time reaches a particular limit, the Service Tier shuts down that connection.

Changing the Timeout

When using the Windows Client it’s easy to set a few settings (see here) in the Service Tier configuration in order to adjust the session timeout for a user. Moving forward it remains just as easy, but if you’ve tried using the same settings as in the past, you’ve certainly noticed that they don’t work for the Web Client.

The session timeout for the Web Client is now controlled by settings in the Service Tier and in the Web Client configuration. I wish there was just a single setting, but maybe once the Windows Client is gone we’ll see some of the old settings go away as well.

In order to change the timeout for the Web Client, you need to change two settings. Thankfully though, we can easily change these settings using a couple of PowerShell commands.

ClientServicesIdleClientTimeout

This setting is found in the Microsoft Business Central Server configuration.

Set-NAVServerConfiguration DynamicsNAV130 -KeyName ClientServicesIdleClientTimeout -KeyValue "00:20:00"

The timeout uses this format: [dd.]hh:mm:ss[.ff]

dd – number of days
hh – number of hours
mm – number of minutes
ss – number of seconds
ff – hundredths of a second

Note

  • You can also set the setting to “MaxValue” in order to indicate no timeout. This is also the default value for a new installation.
  • You must restart the service tier for the new setting to take effect (see the sketch below).
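
One way to restart it is with the standard administration cmdlet (the instance name below is just an example):

Restart-NAVServerInstance -ServerInstance DynamicsNAV130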

SessionTimeout

This setting is found in the navsettings.json file on the Microsoft Business Central Web Server.

Set-NAVWebServerInstanceConfiguration -WebServerInstance DynamicsNAV -KeyName SessionTimeout -KeyValue "00:20:00"

The timeout uses this format: [dd.]hh:mm:ss[.ff]

dd – number of days
hh – number of hours
mm – number of minutes
ss – number of seconds
ff – hundredths of a second

Note

  • The default value is “00:20:00” (20 minutes) for a new installation.

How the Settings Work Together

The above settings are used in conjunction with each other: a session will be closed based on whichever setting has the shortest time period. For example, if ClientServicesIdleClientTimeout is one hour and SessionTimeout is 20 minutes, the session is closed after 20 minutes of inactivity.

By default, the ClientServicesIdleClientTimeout setting is set to “MaxValue”, which means that in a default installation, the SessionTimeout setting is what will be used.

This is also why configuring a higher timeout on the Microsoft Business Central Server alone does not work: the setting with the shortest time period is always the one that is used.

Summary

The ClientServicesIdleClientTimeout and SessionTimeout settings work together to determine the timeout of the Web Client. The setting that has the shortest time period is the one that is used to determine when a session is closed.

This means that in a default installation you can leave the ClientServicesIdleClientTimeout at its default value, and just change the SessionTimeout setting on the Microsoft Business Central Web Server to your desired timeout.

You can read more on this and other configurations you can set here.

Happy coding!

 

Control Addins in AL

Hey everyone,

Quick post today to highlight the updated documentation for building control addins in AL.

The docs have recently been updated to include much more information regarding what properties and methods are available.

Check out the documentation here.

You will also want to check out the style guide, so that you can make sure that your control addin fits the look and feel of the new Business Central theme.
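
If you haven’t built one yet, the basic shape of a control add-in object in AL looks something like this. It’s only a sketch: the name, sizing values, file paths, and members are all made up for illustration.

controladdin MySampleAddIn
{
    RequestedHeight = 300;
    RequestedWidth = 500;
    VerticalStretch = true;
    HorizontalStretch = true;

    // JavaScript and CSS files shipped inside the extension
    Scripts = 'scripts/main.js';
    StyleSheets = 'stylesheets/style.css';
    StartupScript = 'scripts/startup.js';

    // Raised from the JavaScript side back into AL
    event ControlReady();

    // Callable from AL, implemented on the JavaScript side
    procedure LoadData(Data: Text);
}

You would then place it on a page with a usercontrol control and handle its events there.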

That’s all for now, happy coding!

Dynamics 365 Business Central in an Azure Container Instance

I recently came across this article, which talks about using the Azure Cloud Shell to create an Azure Container Instance that runs Dynamics 365 Business Central. I was intrigued because Azure Container Instances has just recently been released to the public and I just gotta try the new stuff! Thanks Andrey for that article!

What is an Azure Container Instance, you might be asking? If you’ve been keeping up with Dynamics 365 Business Central development, you have been using containers to create your environments. This requires Docker to run on either a Windows 10 or Windows Server 2016 machine that’s either hosted or on-prem. Either way, you’re carrying the overhead of the machine, physical or virtual. With Azure Container Instances, you can create the containers directly in Azure without that machine overhead. This ‘should’ translate to some sort of cost savings, but as my container has only been up for about 2 hours as of the time of this article, I don’t yet know if, or how much, the savings will be.

In Andrey’s post, he walks you through using the Azure Portal and Azure Cloud Shell to create the container. Being the ‘lazy developer’ that I am though, I prefer to do as little manual work as possible so I thought I’d take a stab at building PowerShell script that I can run locally and potentially automate the entire process. Yup, even opening my browser to paste the code into Azure Cloud Shell is apparently too much work for me. 🙂

Turns out, this is pretty easy to do. Using the Azure Resource Manager PowerShell module, we can easily connect to our Azure account and create the necessary container pieces.

Here’s how…

Connect-AzureRmAccount
The first thing we need to do is connect to our subscription and tenant. The user will be prompted for credentials when this command is executed. If you don’t know what your subscription and tenant IDs are, you can find instructions here for the subscription ID, and here for the tenant ID.

New-AzureRmResourceGroup
Once we’re connected we need to create the Azure Resource Group that will be used for our container instance.

New-AzureRmContainerGroup
Once the resource group is created now we can create the container. This is where we get to set the parameters for the container. One change I made from Andrey’s initial post is that I assigned the container the DnsNameLabel, which will mean we can use the Fqdn to access the container instead of the IP address. If you’ve used FreddyK‘s NavContainerHelper module, you’ll also notice that the parameters here are similar to some of the ones used by the New-NavContainer commandlet. Hey maybe we can get some new additions to the module for this stuff!

Ok…..here’s the actual code. It’s pretty basic at this point in time. Just getting my feet wet to see how it goes.

Install-Module AzureRM

### SET VARIABLES
$azureSubID = ''
$azureTenantID = ''
$azureResourceGroupName = 'myResourceGroup'
$azureLocation = 'EastUS'
$containerEnvVariables = @{ACCEPT_EULA='Y';USESSL='N'}
$containerImage = 'microsoft/bcsandbox:us'
$containerName = 'myContainer'

### CONNECT TO AZURE ACCOUNT
Connect-AzureRmAccount -Environment AzureCloud -Force -Subscription $azureSubID -TenantId $azureTenantID

### CREATE RESOURCE GROUP
New-AzureRmResourceGroup -Name $azureResourceGroupName -Location $azureLocation

### CREATE CONTAINER
New-AzureRmContainerGroup -Image $containerImage `
 -Name $containerName `
 -ResourceGroupName $azureResourceGroupName `
 -Cpu 2 `
 -EnvironmentVariable $containerEnvVariables `
 -IpAddressType Public `
 -MemoryInGB 4 `
 -OsType Windows `
 -DnsNameLabel $containerName `
 -Port 80,443,7048,7049,8080 `
 -Location $azureLocation

Once you execute the above script, go grab a coffee. After about 15-20 minutes your container should be up and running. You can check on the state of your container using the following code:

Get-AzureRmContainerGroup -ResourceGroupName $azureResourceGroupName -Name $containerName

When you run the above code you’ll see various properties of the container. What you want to pay attention to are ProvisioningState and State, which will appear as ‘Creating‘ and ‘Pending‘ as shown below.

[Screenshot: Get-AzureRmContainerGroup output showing ProvisioningState = ‘Creating’ and State = ‘Pending’]

Once the container has been created, you should see the following statuses:

[Screenshot: Get-AzureRmContainerGroup output once the container is created, showing the updated ProvisioningState and State]

Take note of the Fqdn property and save the address value. This is the address that you will need to use to connect to your Business Central environment later on.
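
If you’d rather pull it out with PowerShell than read it off the screen, something like this should do the trick; it simply reads the Fqdn property from the object returned by Get-AzureRmContainerGroup:

(Get-AzureRmContainerGroup -ResourceGroupName $azureResourceGroupName -Name $containerName).Fqdn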

Once your container has a State of ‘Running‘, you can check the container logs by using the following code:

Get-AzureRmContainerInstanceLog -ContainerGroupName $containerName -ResourceGroupName $azureResourceGroupName

Running the above code will show you the container logs, and again, if you’ve been using the NavContainerHelper, these logs will look very familiar to you:

[Screenshot: container log output from Get-AzureRmContainerInstanceLog]

Remember!!!
When you connect to your container via Visual Studio Code or the Web Client, or to download the VSIX, you need to use the address from the Fqdn property of the container instance, and not the address values that you see in the container logs.

Insiders
If you have access to the private insider builds for Business Central, you need to provide credentials in order to access the Docker image registry. You can do that by adding the ‘-RegistryCredential‘ parameter and supplying a PSCredential object to the New-AzureRmContainerGroup command.
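
If you haven’t done that before, here’s a rough sketch of what it could look like. The registry hostname and image name below are assumptions on my part; substitute whatever your insider access gives you:

$registryCredential = Get-Credential -Message 'Insider Docker registry credentials'

New-AzureRmContainerGroup -Image 'bcinsider.azurecr.io/bcsandbox-master:base' `
 -RegistryCredential $registryCredential `
 -Name $containerName `
 -ResourceGroupName $azureResourceGroupName `
 -Cpu 2 `
 -EnvironmentVariable $containerEnvVariables `
 -IpAddressType Public `
 -MemoryInGB 4 `
 -OsType Windows `
 -DnsNameLabel $containerName `
 -Port 80,443,7048,7049,8080 `
 -Location $azureLocation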

Oh, if you’re into this kind of thing, you can check out the Azure Container Instance SLA here. It’s a super fun read! 🙂

Thanks again to Andrey Baludin for his original post on Azure Container Instances!

Happy coding!

AL Extensions: Translate Your Solution With the Multilingual App Toolkit Editor

In this post, I showed you how you can use Microsoft Dynamics Lifecycle Services to translate your AL project into a variety of languages.

As with most things, there are multiple ways to go about it. This post will look at the Microsoft Multilingual App Toolkit. The toolkit is integrated into Visual Studio, but there is also a standalone version, called the Multilingual App Toolkit Editor.

With this tool you can manually do your translation and/or you can connect it to the Microsoft Translator service via Azure, which is what I will describe in this post.

Here’s how…

Download and install the Multilingual App Toolkit Editor.
https://developer.microsoft.com/en-us/windows/develop/multilingual-app-toolkit

If all you want to do is work offline and manually do your translations, you can stop here. Continue on if you want to connect to the translation service in Azure, but note that you do need an active Azure subscription for this.

Enable the Translator Text API in Azure.
Using the Azure portal, do the following to add the Translator Text API to your Azure subscription:

  1. Choose “Create a Resource“.
  2. Search for “Translator Text API” and select it.
  3. On the Translator Text API blade, press the Create button to begin configuring the subscription.
  4. Fill in the fields accordingly for the API by giving it a name, pricing tier, etc. Note that there is a free tier option that lets you translate up to 2 million characters per month. Additional pricing info can be found here.
  5. Press the Create button to deploy the API to your Azure subscription. This might take a few minutes.

Get your Translator Text API authentication key.
Once the API has been deployed, you need to get your subscription key so that the Multilingual Tool can authenticate and connect to it.

  1. In the Azure portal, select “All Resources” and select the Azure subscription that you deployed the API to.
  2. In the list of resources, click on the Translator Text API service that you deployed.
  3. In the Translator Text API blade, select Keys.
  4. Copy one of the 2 keys that are associated with the service. You will need this key value in the next step.

Add Multilingual App Toolkit Editor credentials.
Now that we have the Translator Text API up and running, and the Multilingual App Toolkit Editor installed, we need to configure the authentication. We do that using the Windows Credential Manager.

  1. On your Windows machine, launch Credential Manager.
  2. Select Windows Credentials.
  3. Select Add a generic credential.
  4. Enter the following values:
    • Internet or network address: Multilingual/MicrosoftTranslator
    • User name: Multilingual App Toolkit
    • Password: <the Translator Text API key that you retrieved earlier>
  5. Click OK.
  6. Close Credential Manager.

Ok, now that we have everything installed, deployed, and configured, we can open up the Multilingual App Toolkit Editor (search Multilingual Editor in your Start menu) and translate the XLF file from our AL project. You can learn about generating this file here.

  1. Copy the auto-generated ‘.g.xlf‘ file to create a new file in the same folder. Rename the file based on the AL standards here.
  2. Edit the new file and update the ‘target-language‘ property to be the language that you are translating the file to (e.g. fr-CA).
  3. Close and save the file.
  4. Using the Multilingual App Toolkit Editor, select Open and select your new file.
  5. From the ribbon, select Translate > Translate All. The toolkit will now use the Translator Text API in Azure to translate the file based on the source and target languages. This might take a few minutes based on the number of items that need to be translated in your solution.
  6. Once the translation is done you can manually review and edit any of the translations if you wish.
  7. Close and save the file.

Now you have your new translation file. Simply repeat the steps to generate each translation that your AL solution requires!

Submitting to AppSource
If you are submitting your solution to AppSource, even if you do not need multi-language support in your solution, you still must provide (at a minimum) a translation file for the base language (e.g. en-US) in which your solution is published.

Note that the auto-generated ‘.g.xlf’ file is NOT a properly formatted translation file and your solution will not pass validation if you do not create at least the base language file.

In the pic below you have the raw ‘.g.xlf’ file as it gets created during the AL project build process. As you can see, there is only a ‘source‘ property for the message control even though the ‘target-language‘ of the file is set to ‘en-US’:

[Screenshot: a trans-unit from the raw ‘.g.xlf’ file, with only a ‘source’ element]
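
A trans-unit in the raw file looks roughly like this (the id and values below are purely illustrative):

<trans-unit id="Table 1234 - Field 5678 - Property Caption" translate="yes" xml:space="preserve">
  <source>Customer Name</source>
</trans-unit>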

In a properly formatted translation file, you will have both the ‘source‘ and the ‘target‘ properties:

[Screenshot: a trans-unit from a properly formatted translation file, with both ‘source’ and ‘target’ elements]
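
In other words, the same entry carries both elements, roughly like this:

<trans-unit id="Table 1234 - Field 5678 - Property Caption" translate="yes" xml:space="preserve">
  <source>Customer Name</source>
  <target>Customer Name</target>
</trans-unit>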

In addition to the formatting, you can’t rely on editing the ‘.g.xlf’ file because it gets overwritten each time you build your AL project.

In short, use the ‘.g.xlf’ file ONLY as the source for generating other translation files.

EDIT: I was just informed that fellow MVP Tobias Fenster‘s ALRunner VS Code extension can (among many things) convert the ‘.g.xlf’ translation file into a properly formatted file. Quick and easy! Check it out here!

Happy coding!

Dynamics 365 Business Central

Hey everyone!

I’m off enjoying the March Break with my family this week but I wanted to get this out, since it’s (finally!) been announced by Microsoft that the product formerly known as Dynamics ‘Tenerife’ will be made available April 2, 2018 under the name Dynamics 365 Business Central.

As the announcement states, it’s going to be available on April 2, 2018 in 14 countries: United States, Canada, United Kingdom, Denmark, Netherlands, Germany, Spain, Italy, France, Austria, Switzerland, Belgium, Sweden, and Finland. Australia and New Zealand will follow on July 1, 2018. Take note that you must purchase Dynamics 365 Business Central through a Cloud Solution Provider (CSP) partner.

I don’t have much else to say right now, other than I’ve been working with Dynamics 365 Business Central for a while now and I think everyone’s going to love it! This product is moving fast and the advancements being made are amazing! Definitely some very exciting times ahead!

If you want to read more, check out the Dynamics 365 Business Central website here.

Until next time….happy coding!

AL Extensions: Translate your solution using Microsoft Dynamics Lifecycle Services

Hi everyone!

Today’s post is about language translations. I’ve not really done much with language translations in the past, but it always seemed as though it was a very tedious thing to have to do.

This post though is going to show you how to do it quite quickly, and if all goes well you can translate your AL extension in a matter of minutes.

Pre-Requisite

Note that in order to perform the tasks in this post, you will need access to Microsoft Dynamics Lifecycle Services (LCS). If you do not have access contact your system administrator, or the person that manages your CustomerSource/PartnerSource account, as the systems are linked in some way.

Let’s Get Started

Assuming you have already developed your AL extension, you need to enable the new translation feature. You can do that by adding the following line to your app.json:

"features": ["TranslationFile"]

This feature causes a new translation file to be generated when the extension is built. The translation files are created using the standard XLIFF format that is used to pass data between systems for the purposes of language translation. For more info on XLIFF, check here. You will find the translation file in a Translations folder at the root of your AL project.

You’re also going to need to change all of your ML properties as they are being deprecated (as of this post though they are still available). You can read more about that here.
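
As a rough illustration (the field and captions below are made up), a deprecated ML property like this:

field(10; "My Description"; Text[100])
{
    CaptionML = ENU='My Description', FRC='Ma description';
}

becomes just the plain property, with the other languages moving into your XLF translation files:

field(10; "My Description"; Text[100])
{
    Caption = 'My Description';
}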

Ok, now that you have enabled the new translation file feature and updated all of your ML properties, you need to build your extension. Once that completes, look for the newly created XLF file in the Translations folder. This is the file that we need to submit to the translation service.

The Translation Service

Before we get into submitting the file, note that in my solution my base language is English (US). I’ll be translating that to Canadian French. Substitute the appropriate languages for your solution.

Here are the steps to perform the translation:

  • Log into LCS and open the Translation Service tool.

  • Select the ‘+’ button at the top left to make a new request and fill in the request based on your desired translation. Make sure to select Microsoft Dynamics NAV as the product and the appropriate product version.

[Screenshot: the new translation request form]

  • Once you have filled in your request, press Create to submit the request. At this point you just wait. You should receive an email from LCS stating that “A new request has been created.”
  • After a short period of time (for me this was only about 2 minutes!!) you will receive another email from LCS stating “The request has been completed.”
  • Follow the link in the email and you will be brought to a page that shows you the details of your translation request.
  • Click Request output at the left side of the screen.
  • Click the DownloadOutputLink link to download your translated XLF file, then extract it to the Translations folder in your AL project. It will have a new file name, so it should not overwrite your existing file. Do not remove the original XLF file, as that still represents the base language of your extension!

That’s it!

Now all you have left to do is to rebuild your extension with the new translation file and voila…..you now have a multi-language extension! Test it out by changing to the appropriate language in the web client.

Ongoing Maintenance

Once you have non-base translation files in your AL project, they do not get updated when the extension is built. For example, if you add a new field with a caption and build your extension, the base language XLF will get updated to include the new field. Your non-base XLF files will not, so you will need to revisit LCS and submit a new base language file to get those updated.

Hopefully this service works as well for you as it seems to for me. I was actually quite shocked at how fast I was able to get the translated file back.

That’s all for now, happy coding!

AL Extensions: Accessing the Device Camera

If you’ve been doing any V2 extension development, you are likely aware that we cannot use any .Net interop in our code.

While on-premises V2 development will eventually gain access to .Net variable types, if you’re coding your extension to run in AppSource, you will remain locked out of using .Net interop in your code, because the risk of these external components is too large for shared cloud servers.

Unfortunately this means that we lost the ability to interact with the device camera, as it was accessed using .Net.

In C/AL, the code to take a picture with the device camera looked like this:

TakeNewPicture()
   IF NOT CameraAvailable THEN
      EXIT;
   CameraOptions := CameraOptions.CameraOptions;
   CameraOptions.Quality := 50;
   CameraProvider.RequestPictureAsync(CameraOptions);

It’s simple enough code, but the problem in the above example is that both CameraProvider and CameraOptions are .Net variables, and therefore cannot be used in V2 extension development.

I’m happy to say though that this problem has been resolved. Yes, the underlying architecture still uses .Net controls, but Microsoft has introduced a new Camera Interaction page which basically acts as an API layer. Through this API layer you can interact with the .Net camera components just as you did in C/AL.

Not a huge change to wrap your head around at all. In your extension you will code against the Camera Interaction page instead of against the .Net controls directly. Inside the page are all the original camera functions that you were used to using before.

This greatly simplifies our extension code and allows us now to use the camera from our extensions.

The code to take a picture would now look like this:

local procedure TakePicture();
    var
        CameraInteraction: Page "Camera Interaction";
        PictureStream: InStream;
    begin
        CameraInteraction.AllowEdit(true);
        CameraInteraction.Quality(100);
        CameraInteraction.EncodingType('PNG');
        CameraInteraction.RunModal;
        if(CameraInteraction.GetPicture(PictureStream)) then begin
            // Picture is assumed to be a Media or MediaSet field on the record
            // this code runs against (for example, Item.Picture).
            Picture.ImportStream(PictureStream, CameraInteraction.GetPictureName());
        end;
    end;

That’s it! Now you can use the device camera from your V2 extensions.

Happy coding!