Wednesday, 5 October 2011

Preview parts and surrogate keys in Ax2012

This post is a quick walk-through of surrogate keys, replacement keys, and field preview parts, which are new concepts in Ax2012.

Overview

The requirement used in this example is as follows:
  • Have a new field on the sales order header called 'Priority'.
  • This should point to a user-configurable table, containing a priority code and description.

Before we start there are a couple of terms you need to understand:
  • Natural key. Think of this as the primary key that makes the most sense, e.g. CustTable.AccountNum and InventTable.ItemID. We can ignore the effect of DataAreaID for non-shared tables for now.
  • Surrogate key. The surrogate key in database terms refers to a field that also uniquely identifies a record, but isn't a natural identifier. In Ax, this is the RecID. In other systems it could be a sequential number or a GUID. Typically it's something created by the database itself, like an identity column; in Ax, however, it's managed by the kernel.
  • Primary key. The primary key should point to either the natural key or the surrogate key.
  • Clustered index. This affects the physical layout of records in the database. It doesn't have any real functional impact, but you do need to be very careful when selecting the clustered index, as it can have a serious effect on performance if set up incorrectly.
These aren't new concepts - Wikipedia has tonnes of information on general database theory. One of the main changes in Ax2012 is that it more heavily promotes the use of the surrogate keys when relating tables. This is something that was always used in Ax, but more often when we had general-purpose tables, such as document-handling entries/misc. charge lines, that pointed to different types of records. Now you'll find that in a lot of places when two tables are related it's by the RecID instead of the natural key.
Ax also has the concept of the Replacement key. This is used to indicate which fields are displayed on the UI, regardless of the table's primary key. 
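As a concrete illustration (using the same relation that appears in the select examples further down this page), InventTrans in Ax2012 points to an InventTransOrigin record by its RecID rather than by a natural key such as InventTransId. A minimal sketch of what a surrogate-key join looks like in X++:

static void surrogateKeyJoinSketch(Args _args)
{
    InventTransOrigin   inventTransOrigin;
    InventTrans         inventTrans;

    // The join condition uses the parent's RecId (surrogate key),
    // not a natural key such as an ID code.
    select firstonly inventTrans
        join inventTransOrigin
        where inventTrans.InventTransOrigin == inventTransOrigin.RecId;

    info(strFmt("%1 -> qty %2", inventTransOrigin.InventTransId, inventTrans.Qty));
}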

Creating the objects

First we create the basic datatype and table structure:
  • Create an extended data type (string) called SalesPriorityID. This will be the priority 'code' displayed on forms.
  • Create table SalesPriority.
  • Add types SalesPriorityID and Description to the table.
  • Create index SalesPriorityIdx containing field SalesPriorityID. Set AllowDuplicates to No, and AlternateKey to Yes. Note that an index cannot be designated as a table's alternate key unless it's unique.
  • In the table properties, you'll notice that the PrimaryIndex and ClusterIndex have already been set to SurrogateKey. You'll also notice that the CreateRecIdIndex property is set to Yes and locked for editing.
  • Set the ReplacementKey to SalesPriorityIdx. This indicates to Ax that even though our primary key is SurrogateKey (automatic index on RecID), we want the priority code displayed on forms. 

Now we have a basic table with a primary key (on RecID), and unique replacement key (on SalesPriorityID). To reference this table using the RecID, we need a new extended data type:
  • Create an extended data type (Int64) called SalesPriorityRefRecID. Make sure this extends RefRecID.
  • Set the ReferenceTable property to SalesPriority.
  • Under the Table References node, set it so that SalesPriorityRefRecID == SalesPriority.RecID.
Don't forget to extend RefRecID. It looks like if you forget to do this, the replacement key functionality doesn't work correctly, even if the data-type is Int64. 
So we now have a table, and an extended data type that references it via the RecID. All we have to do is drag this new EDT onto SalesTable's Fields node. You'll notice that when you do this, Ax prompts you to automatically create a relation based on the EDT reference. Click yes. Rename the field to SalesPriority.

The field can now be dragged from the form data-source onto the design as normal. You'll see that instead of adding an Int64 control, it adds a reference group, since Ax has determined the relation automatically. When a reference group is shown on the form, it displays the value of the alternate/replacement key instead of the underlying RecID pointing to the SalesPriority table. I added the field to the 'Status' group, shown below:


In the underlying SalesTable record, the field SalesPriority points to SalesPriority.RecID, but displays the value of SalesPriorityID since it's contained in the nominated Replacement key.
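To make the relationship concrete in data terms, here's a minimal sketch (assuming the field names on SalesPriority match the EDT names above; the priority code and sales order number are made up) that creates a priority, assigns it to a sales order and reads it back via the RecID:

static void salesPrioritySketch(Args _args)
{
    SalesPriority   salesPriority;
    SalesTable      salesTable;

    ttsBegin;

    // The code lives in SalesPriorityID, but the record is identified
    // by its RecId (the surrogate key).
    salesPriority.SalesPriorityID = 'HIGH';           // made-up code
    salesPriority.Description     = 'High priority';
    salesPriority.insert();

    // SalesTable.SalesPriority stores the RecId of the priority record.
    salesTable = SalesTable::find('SO-101', true);    // made-up sales id
    salesTable.SalesPriority = salesPriority.RecId;
    salesTable.update();

    ttsCommit;

    // Resolving the RecId back to the code the user sees on the form.
    select firstonly salesPriority
        where salesPriority.RecId == salesTable.SalesPriority;

    info(salesPriority.SalesPriorityID);
}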

Adding a preview part

I won't go into too much detail here, but the idea is to add a preview to the field, which acts as a sort of extended tool-tip:



The basic steps are:

  • Create form SalesPriorityPreview. Add table SalesPriority as a datasource, setting allow edit/add/delete to false. Add fields directly onto the design.
  • Create a FormPart of the same name and set its Form property.
  • Create a display menu-item of the same name, pointing to the FormPart.
  • On the SalesPriority table, set property PreviewPartRef to the menu item.

Now when you hover over the priority value on the sales order form, you'll see your preview form pop up automatically.

There's plenty of information on this - another good post on the subject can be read here, and of course on MSDN.

Monday, 3 October 2011

X++ select statements in IL code

I doubt this will be of much use in the real world, but I was curious as to how X++ select statements are translated into IL code. There's obviously no direct translation from an X++ select to C# code (except possibly with LINQ, but it would be almost impossible to match the query behaviour exactly).

Compiling the following X++ code:


private void simpleSelect()
{
    SalesLine                   salesLine;
    str                         inventTransID;

    // Basic selection
    select firstonly salesLine
        order by InventTransId desc
        where   salesLine.RecId != 0;

    inventTransID = salesLine.InventTransId;
}

Gives us the following C# (Extracted from the compiled DLL using RedGate Reflector):

public override void Simpleselect()
{
    SalesLine salesLine = new SalesLine();
    string inventTransID = PredefinedFunctions.GetNullString();
    SalesLine table = salesLine;
    table.Find(0x167);
    table.FirstOnly();
    FieldList fieldList = new FieldList();
    fieldList.Add(0x1a, 0);
    table.Order(fieldList);
    int o = 0;
    PredefinedFunctions.Where(PredefinedFunctions.newBinNode(PredefinedFunctions.newFieldExpr(salesLine, 0xfffe), new valueNode(o), 0x13), table);
    table.EndFind();
    inventTransID = salesLine.InventTransId;
}



Not the friendliest-looking code, is it? Basically, it's building up an object model representing the select statement. I presume this is similar to using the QueryBuild classes, although it looks like a completely different API.
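For comparison, here's roughly how the same select could be written against the X++ QueryBuild classes. This is only an illustrative sketch of the X++ API, not what the generated IL actually calls, and it doesn't reproduce firstonly:

static void queryBuildComparison(Args _args)
{
    Query                   query = new Query();
    QueryBuildDataSource    qbds  = query.addDataSource(tableNum(SalesLine));
    QueryRun                queryRun;
    SalesLine               salesLine;

    // Same ordering and range as the X++ select above.
    qbds.addSortField(fieldNum(SalesLine, InventTransId), SortOrder::Descending);
    qbds.addRange(fieldNum(SalesLine, RecId)).value('!0');   // RecId != 0

    queryRun = new QueryRun(query);
    if (queryRun.next())
    {
        salesLine = queryRun.get(tableNum(SalesLine));
        info(salesLine.InventTransId);
    }
}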

And a more complicated example with joins, in X++:


private void complexSelect()
{
    InventTable                 inventTable;
    InventTrans                 inventTrans;
    InventTransOriginSalesLine  inventTransOrigin;
    SalesLine                   salesLine;
    Amount                      amount;
    ;

    select firstonly salesLine
        order by InventTransId desc
    join inventTransOrigin
        where   inventTransOrigin.SalesLineInventTransId    == salesLine.InventTransId
    join inventTrans
        where   inventTrans.InventTransOrigin               == inventTransOrigin.RecId;

    amount = salesLine.LineAmount;

}

Gives us:

public override void Complexselect()
{
    InventTable inventTable = new InventTable();
    InventTrans inventTrans = new InventTrans();
    InventTransOriginSalesLine inventTransOrigin = new InventTransOriginSalesLine();
    SalesLine salesLine = new SalesLine();
    SalesLine joinParent = salesLine;
    joinParent.Find(0x167);
    joinParent.FirstOnly();
    FieldList fieldList = new FieldList();
    fieldList.Add(0x1a, 0);
    joinParent.Order(fieldList);
    InventTransOriginSalesLine table = inventTransOrigin;
    table.Join(0, joinParent, 0xba7);
    PredefinedFunctions.Where(PredefinedFunctions.newBinNode(PredefinedFunctions.newFieldExpr(inventTransOrigin, 2), PredefinedFunctions.newFieldExpr(salesLine, 0x1a), 0x12), table);
    InventTrans trans = inventTrans;
    trans.Join(0, table, 0xb1);
    PredefinedFunctions.Where(PredefinedFunctions.newBinNode(PredefinedFunctions.newFieldExpr(inventTrans, 0x44), PredefinedFunctions.newFieldExpr(inventTransOrigin, 0xfffe), 0x12), trans);
    joinParent.EndFind();
    decimal amount = salesLine.LineAmount;
}


The hex values refer to the IDs of the tables/fields being referenced, e.g. 0x167 in the call to table.Find is 359 in decimal, which is the table ID of SalesLine.
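If you're curious, the mappings are easy to check from a job (table and field IDs can differ between installations, so your values may not match the decompiled ones exactly):

static void checkTableAndFieldIds(Args _args)
{
    // 0x167 = 359 and 0x1a = 26 in the decompiled code above.
    info(strFmt("Table 359 = %1", tableId2name(359)));
    info(strFmt("SalesLine id = %1", tableNum(SalesLine)));
    info(strFmt("SalesLine.InventTransId field id = %1", fieldNum(SalesLine, InventTransId)));
}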
The actual work is still carried out in native kernel code (Ax32Serv.exe) - the compiled DLL links to the executable and calls it via interop.
Interesting? A little bit.. Useful? Not really.

More practical posts to follow.

Performance comparison of X++ compiled into CIL

In Ax2012 it's now possible to compile X++ and run it under the .NET run-time, as opposed to using the X++ kernel (which executes a custom p-code format). This is a fairly major development from a technical standpoint, but I was interested in testing the actual performance differences between the two execution methods.

The basic code for the test is as follows. Note that this method is defined within class ProcessTimeTest, which extends RunBaseBatch (*). The main method creates an instance of the class, then calls this method 50 times, the idea being to average out the results.
* The suggested best practice for Ax2012 and beyond is to use the Business Operation Framework, but that's overkill for this job.
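For reference, a minimal sketch of the driver described above - the real main would normally go through the usual RunBaseBatch construct/prompt pattern, but the shape is simply:

public static void main(Args _args)
{
    ProcessTimeTest test = new ProcessTimeTest();
    int             i;

    // Call the test 50 times so the results can be averaged.
    for (i = 1; i <= 50; i++)
    {
        test.runSingleTest();
    }
}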

protected void runSingleTest()
{

    int64               startTime,endTime,dt;
    System.DateTime     dateTime;
    int                 loopCount,innerCount;
    real                dummyReal = 1;
    str                 stringBuf;
    ProcessTimeTestLog  log;

    NoYesID             runningAOS  = Global::isRunningOnServer();
    NoYesId             runningCLR   = xSession::isCLRSession();

    // Start the timer
    dateTime = System.DateTime::get_Now();
    startTime = dateTime.get_Ticks();

    // Do pointless activity, lots of times.
    for(loopCount = 1;loopCount <= 100;loopCount++)
    {
        InventTable::find(strFmt("__%1",loopCount));    // cache-miss
        InventTable::find('1000');                      // cache-hit

        dummyReal = 1;
        stringBuf = "";
        for(innerCount = 1;innerCount < 100;innerCount++)
        {
            // FP arithmetic
            dummyReal = dummyReal * 3.14152;
            dummyReal = dummyReal / 2.89812;
            dummyReal = dummyReal - 0.00310;
            dummyReal = dummyReal + 1.21982;

            // String concatenation + metadata
            stringBuf += strFmt("%1-23",
                innerCount,
                tableId2name(tablenum(SalesLine)));

            // Construction+removal of object (GC overhead)
            this.newObject();
        }

        this.recursiveFunctionCall();
    }

    // Stop timer and save results
    dateTime            = System.DateTime::get_Now();
    endTime             = dateTime.get_Ticks();
    dt                  = endTime - startTime;  // in ticks


    log.clear();
    log.RunningInCLR    = runningCLR;
    log.RunningOnAOS    = runningAOS;
    log.RunningTime     = dt / 10000; // tick = 1/10,000th of a ms
    log.insert();

}

Basic performance test code

The above code is completely pointless, but it is testing the following aspects of the run-time:

  • Record querying/selection. The first find on the local item table is a cache miss, and will cause an actual query against the database. The second find method should be picked up by the Ax record cache.
  • A bit of floating point arithmetic, and some string concatenation, which also includes an AOT/meta-data query (resolving table ID/name).
  • Construction and removal of a new object via method newObject, which just creates an instance of the same class. It's in a separate method to ensure it's fully out of scope and destroyed. Note the actual removal is subject to the garbage collection cycle, which is completely different for code running under the CIL.
  • A recursive function call, which goes 10 deep. I wouldn't expect the fact that it's recursive to make a huge difference - it's more to test the overhead of function calls in general. (Both helper methods are sketched after this list.)
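The two helper methods aren't shown above, so here are minimal assumed implementations matching the descriptions (the names are the ones referenced in the test code):

private void newObject()
{
    // Create (and immediately discard) another instance of this class,
    // so object construction and clean-up have some work to do.
    ProcessTimeTest test = new ProcessTimeTest();
}

private void recursiveFunctionCall(int _depth = 1)
{
    // Recurse 10 levels deep - mainly to exercise method-call overhead.
    if (_depth < 10)
    {
        this.recursiveFunctionCall(_depth + 1);
    }
}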
So, each test does this basic sequence of operations 100 times. To run the test under the Ax run-time, it's just a case of 'running' the class (i.e. right-click > Open). To run under the .NET run-time, the CIL needs to be updated:


Then, the class is run as a batch process (make sure your AOS is correctly configured). Towards the end of the code you'll see that it writes the timing information to a table, along with flags indicating whether the code is running on the AOS and whether it's being executed by the Ax kernel or the .NET run-time.
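Once a few runs have gone through in each mode, a quick grouped select over the log table gives the averages per run-time (a rough sketch against the custom ProcessTimeTestLog table used above):

static void summariseResults(Args _args)
{
    ProcessTimeTestLog  log;

    // Average running time (ms), grouped by whether the run was under the CLR.
    while select avg(RunningTime) from log
        group by RunningInCLR
    {
        info(strFmt("CLR: %1, average ms: %2", log.RunningInCLR, log.RunningTime));
    }
}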

The results are encouraging:


That's about a 75% improvement in execution time when running under the .NET run-time! There are plenty of reasons for this: .NET has a JIT compiler (meaning the code is executed as native machine code), the garbage collector is more sophisticated, the C# code optimizer is more advanced, and so on.

This is certainly good news, but keep in mind that CPU is rarely a bottleneck in Ax implementations. As a developer, the main things you should be focusing on are (still):
  • Table and index structure
  • Using caching effectively
  • Minimal recalculation of data that can be pre-stored for reporting and inquiries
  • and all the other stuff that isn't apparent until it implodes during go-live!

This is definitely a great effort from the technical team at Microsoft, as it would have been no mean feat to accurately translate the X++ p-code into CIL.

I'll be following up soon with a bit more information on code running under the CIL. If you're interested, there's also a good (if confusingly coloured) blog post here on debugging IL code.