Tuesday, May 31, 2016

Query Tuning in Ax

When I started at my present position, the database handling was in a very poor state.  Performance was slow, indexes were badly fragmented and users were complaining by the hour.  After sorting out the fragmentation issues, we were still having performance problems with a Portal that was bolted on to Ax.  Looking at the integration piece that was written, I found that it was a mix of miscellaneous query calls made through the business connector and custom queries written as concatenated strings.  In other words, it was a mess.



In general, I find that the Dynamics Ax query parser sucks.  It tends to connect tables via cross joins and make the links in the where clause.  Aside from looking ugly, this can hurt performance.  Recently, my manager gave me an example where a query using one customer account number was fairly quick, but using another it was very slow (1 second versus 8 seconds).  When I took the query out of Ax, I found that, owing to the way the query was built, it generated different query plans and used different indexes for the two accounts.  That was why the performance varied.



If you need to get data out of Ax for an external application, I find that stored procedures and functions are the preferable route.  The weakness, however, is that you are at the mercy of business rule or logic changes in Ax that would force modifications in the custom code.



Caveats aside, there are some special considerations when you need to optimize Ax based queries.



1) Do not mimic the Ax query conventions.  This is an out-of-the-box view from Ax.
CREATE VIEW [dbo].[LOGISTICSLOCATIONPARTY] AS SELECT T1.LOCATION AS LOCATION,T1.PARTY AS PARTY,T1.ISPRIMARY AS ISPRIMARY,T1.PARTITION AS PARTITION,T1.RECID AS RECID,
T2.PARTITION AS PARTITION#2,T2.LOCATIONROLE AS LOCATIONROLE
FROM DIRPARTYLOCATION T1
CROSS JOIN DIRPARTYLOCATIONROLE T2
WHERE (T2.PARTYLOCATION=T1.RECID AND (T2.PARTITION = T1.PARTITION))

The cross join is where the problems start.  It is far better to take a more standard approach of using inner and left joins to get the data needed.

CREATE VIEW [dbo].[LOGISTICSLOCATIONPARTY] AS SELECT T1.LOCATION AS LOCATION,T1.PARTY AS PARTY,T1.ISPRIMARY AS ISPRIMARY,T1.PARTITION AS PARTITION,T1.RECID AS RECID,
T2.PARTITION AS PARTITION#2,T2.LOCATIONROLE AS LOCATIONROLE FROM DIRPARTYLOCATION T1
INNER JOIN DIRPARTYLOCATIONROLE T2 ON T2.PARTYLOCATION=T1.RECID
AND T2.PARTITION = T1.PARTITION


2) All tables in Ax 2012 carry a PARTITION column and, where applicable, a DATAAREAID column.  In the custom code, most of the table joins ignored these columns, and as a result query performance was very slow; the worst offenders took over 30 seconds to run.  By adding DATAAREAID and PARTITION to the joins and filters I was able to reduce processing time to less than a second.  A sketch of the pattern is shown below.
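As an illustration, here is the general pattern.  The tables and filter values are examples only, not taken from the portal code, and 5637144576 is just a typical initial partition RecId that should be replaced with the real one:

SELECT st.SALESID, st.CUSTACCOUNT, ct.CUSTGROUP
FROM dbo.SALESTABLE st
INNER JOIN dbo.CUSTTABLE ct
        ON ct.ACCOUNTNUM = st.CUSTACCOUNT
       AND ct.DATAAREAID = st.DATAAREAID     -- company scope carried through the join
       AND ct.PARTITION  = st.PARTITION      -- partition scope carried through the join
WHERE st.DATAAREAID = 'dat'                  -- company being queried
  AND st.PARTITION  = 5637144576             -- partition RecId; adjust for your install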
 

Thursday, May 19, 2016

.Net Development in Ax 2012 R2


Painful...


I have been developing under .Net since the initial beta (ca. 2000), so I would think that I know a little bit about that environment.  We currently use Ax 2012 R2 CU6.  Lately, we have had a few issues where using .Net processing would be easier than using X++ classes.  One of these involved a little bit of XML file creation.  The initial design involved calling a stored procedure that would format the data and then writing the result to an XML file.  Using C#, this is a trivial exercise and took a short while to complete.  I tried to deploy the result to my dev AOT (client and server).  Back in the AOT, the MorphX environment could find references to the classes created.  However, I couldn't compile the results; I kept getting errors.  In frustration, I decided to rewrite the classes in X++.
Here are a couple of working samples of what I ended up with for calling the stored procedure:


public System.String getDataAsXMLString(System.Data.SqlClient.SqlCommand command)
{
    System.Text.StringBuilder result = new System.Text.StringBuilder();
    System.Data.IDbConnection connection;
    System.Xml.XmlReader xmlr;
    new InteropPermission( InteropKind::ClrInterop ).assert();
    try
    {
        command.set_CommandTimeout(0);
        connection = command.get_Connection();
        connection.Open();
        xmlr = command.ExecuteXmlReader();
        xmlr.Read();
        while (xmlr.get_ReadState() != System.Xml.ReadState::EndOfFile)
        {
            result.Append(xmlr.ReadOuterXml());
            result.Append(System.Environment::get_NewLine());
        }
        connection.Close();
    }
    catch
    {
        // close the connection if the failure left it open
        if (connection.get_State() == System.Data.ConnectionState::Open)
        {
            connection.Close();
        }
    }
    CodeAccessPermission::revertAssert();
    if(xmlr)
        xmlr.Dispose();
    if (command)
        command.Dispose();
    if (connection)
        connection.Dispose();
    return result.ToString();
}


public System.String getData()
{
    System.Data.SqlClient.SqlCommand command;
    System.String result, spName;
    PC_CalcDataAccess da = new PC_CalcDataAccess();
    System.Data.SqlClient.SqlParameterCollection parameters;
    System.Data.SqlClient.SqlParameter parameter;
    spName = this.parmSPName();
    da.parmDb(db);
    da.parmInstance(instance);
    command = da.getCommand(spName);
    parameters = command.get_Parameters();
    parameter = parameters.Add("@parmCatalogId", System.Data.SqlDbType::NVarChar);
    parameter.set_Value(catalogCode);

    result = da.getDataAsXMLString(command);
    return result;
}



First note: X++ provides no facility for identifying namespaces; there are no using or import statements.  As a result, every reference to a framework class must be fully qualified.  Second: chaining "." operators does not work.  The statement command.get_Parameters().Add("@parmCatalogId", System.Data.SqlDbType::NVarChar).set_Value(catalogCode) does not compile; it had to be written line by line as indicated above.  Notice that the naming conventions used for the getters and setters appear to be based on the IL names for the class.  Third: marshalling between X++ and .Net does not work as advertised.  According to the literature, str in X++ is supposed to work interchangeably with System.String.  It does not.  You cannot pass an str member as an argument where a System.String is expected.  Rather, the str member must first be assigned to a System.String variable, and that variable can then be used as the argument.  That is why we have the line spName = this.parmSPName().  The same applies to the other primitive types.
Does anyone know if this is fixed in R3 or AX 7?

Tuesday, May 17, 2016

Hacking Data Compression in Dynamics Ax 2012



Introduction


Data compression is a feature that was added in SQL Server 2008.  It enables compression of tables and indexes at either the row or the page level.  Although it works reasonably well, there are a number of limitations around compression of inserted or updated data.

Since its introduction, there have been articles published praising the value of using compression in a Dynamics Ax 2012 environment.  After some testing, it was found that compression really does not work out of the box and, when it does, it is very clumsy to use.  The discussion below outlines some of those limitations and how they can be worked around.


Environment



Our in house environment currently consists of Dynamics Ax 2012 R2 CU6 (we are in the process of upgrading to CU9), SQL Server 2012 and Windows Server 2012. Storage is maintained on a NAS Server separated into different partitions for different purposes.


Initial Problem



A large portion of our testing environment is housed on a partition of the NAS and may hold up to four copy-backs of production data for different purposes.  Although on the small side, our current Ax database is about 200 GB.  Given projections for growth and immediate storage limitations, it was decided to investigate whether compression would be a viable short-term way to save space.  Consideration would also need to be given to performance impacts.


When starting, the first hurdle to overcome is activating compression.  In the first article I found (https://blogs.msdn.microsoft.com/axsupport/2012/10/21/ax-2012-database-how-to-enable-sql-compression-enterprise-edition/), activating compression was supposed to be a matter of opening the form SysSQLSetup from within the AOT and setting some values.


 


(Screenshot from the MSDN article: https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/01/39/16/2627.Step1.png)


 


However, on our local installation, the form was somewhat less expansive. 




This change was not particularly helpful.  So what happened?  It turns out the determination of whether compression is supported is made in a class called SysSqlSetupHelper.




 


You may check if you want, but suffice it to say that version 10 is SQL Server 2008.  I suspect that someone did not get the memo saying that SQL Server 2008 was not the terminal release.  Fixing this issue is relatively minor: if we convert the version string to a real and check that it is greater than or equal to 10.0, we should be fine.




And when the form is run again, the compression options do appear.


Compression Complexity



If you paid attention to the form SysSqlSetup, you would notice that there are five options for compression: one table, one index on the selected table, all indexes for the selected table, all tables, and all indexes.  Doing a quick check in SQL Server would reveal that there are about 5,000 tables and 12,000 indexes in the Dynamics Ax 2012 database.  However, the compression recommendations coming out of Microsoft indicate that some tables/indexes should be left alone, some should have row compression and others page compression.  (https://msdn.microsoft.com/en-us/library/dd894051(SQL.100).aspx).
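The quick check is nothing more than a pair of counts against the catalog views; a minimal sketch (the exact numbers will vary by version and customizations):

SELECT COUNT(*) AS TableCount
FROM sys.tables;

SELECT COUNT(*) AS IndexCount
FROM sys.indexes i
INNER JOIN sys.tables t ON t.object_id = i.object_id
WHERE i.index_id > 0;    -- exclude heap entries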


The decision of how and what to compress is based on an analysis of the usage patterns (the percentage of update operations and of scan operations) of each table and index.  This scan only reveals candidates for compression.  A SQL stored procedure (sp_estimate_data_compression_savings) can then give an indication of whether row or page compression will yield greater savings.
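A minimal sketch of those two steps, using the whitepaper's formulas and SALESLINE purely as an example object (the script in the Appendix does this across the whole database):

-- Step 1: scan and update percentages for one table's indexes
-- (simplified from the formulas in the Microsoft whitepaper).
SELECT i.name AS IndexName,
       ios.range_scan_count * 100.0 /
           NULLIF(ios.range_scan_count + ios.leaf_insert_count + ios.leaf_delete_count +
                  ios.leaf_update_count + ios.leaf_page_merge_count +
                  ios.singleton_lookup_count, 0) AS ScanPct,
       ios.leaf_update_count * 100.0 /
           NULLIF(ios.range_scan_count + ios.leaf_insert_count + ios.leaf_delete_count +
                  ios.leaf_update_count + ios.leaf_page_merge_count +
                  ios.singleton_lookup_count, 0) AS UpdatePct
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.SALESLINE'), NULL, NULL) ios
INNER JOIN sys.indexes i ON i.object_id = ios.object_id AND i.index_id = ios.index_id;

-- Step 2: estimated savings for row versus page compression.
EXEC sp_estimate_data_compression_savings 'dbo', 'SALESLINE', NULL, NULL, 'ROW';
EXEC sp_estimate_data_compression_savings 'dbo', 'SALESLINE', NULL, NULL, 'PAGE';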


During the course of research, the article "Demonstrating the Effects of Using Data Compression in SQL Server" (https://www.mssqltips.com/sqlservertip/3187/demonstrating-the-effects-of-using-data-compression-in-sql-server/) outlined the performance impacts of using compression in SQL Server.  Under the controlled circumstances indicated, it was found that compression actually improves query performance, but at the cost of slightly increased CPU utilization.  At the bottom of the article, there is a comment from Cees Meuleman dated October 19, 2015, in which he provides a link to a script that uses the principles from the two articles above.  I have revised the stored procedure slightly, but kept the original principles.  A revised copy is attached in the Appendix.


By default, this stored procedure only evaluates tables and indexes that extend over 50 pages (400 KB).  For these objects, if the scan percentage is greater than 75%, the update percentage is less than 20%, and the estimated savings from page compression exceed the savings from row compression by more than 20 percentage points, then page compression is indicated.  If the scan and update percentages are both lower than 5%, then no compression is indicated.  If neither condition is met, then row compression is indicated.  The routine then has an optional section to perform the compression, and an amendment to the original merges the data into a new table placed in the master database.
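Condensed into code, the decision per object looks roughly like this (the variables stand in for values computed by the script; the sample numbers are placeholders):

DECLARE @ScanPct        float = 80.0,   -- placeholder values
        @UpdatePct      float = 10.0,
        @PageSavingsPct float = 45.0,
        @RowSavingsPct  float = 15.0;

SELECT CASE
           WHEN @ScanPct > 75.0 AND @UpdatePct < 20.0
                AND (@PageSavingsPct - @RowSavingsPct) > 20.0 THEN 'PAGE'
           WHEN @ScanPct < 5.0 AND @UpdatePct < 5.0           THEN 'NONE'
           ELSE 'ROW'
       END AS RecommendedCompression;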


New Problems



The actual mechanism to compress a table or an index is an ALTER statement:


ALTER TABLE dbo.myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = {PAGE | ROW | NONE}, ONLINE = {ON | OFF}, SORT_IN_TEMPDB = {ON | OFF})


ALTER INDEX myIndex ON dbo.myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = {PAGE | ROW | NONE}, ONLINE = {ON | OFF}, SORT_IN_TEMPDB = {ON | OFF})


The first problem to note is that SORT_IN_TEMPDB can cause trouble.  If the collation of the Ax database does not match the collation of TempDB, then sorting in TempDB will fail.  Note that the default collation of SQL Server (SQL_Latin1_General_CP1_CI_AS) does not match the default collation of Dynamics Ax (Latin1_General_CI_AS).  If the collations are different, then SORT_IN_TEMPDB should be excluded or set to OFF.  The collation can be checked by selecting the database in SSMS, right-clicking and choosing Properties -> Options.
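The same check can also be done with a one-line query; a sketch, with the Ax database name as a placeholder:

SELECT DATABASEPROPERTYEX('MicrosoftDynamicsAX', 'Collation') AS AxCollation,   -- placeholder DB name
       DATABASEPROPERTYEX('tempdb', 'Collation')              AS TempdbCollation,
       SERVERPROPERTY('Collation')                            AS ServerCollation;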


 


While running the stored procedure above will compress the designated objects in the database, the compression can be removed if the indexes or tables are later rebuilt without the compression predicates used in the ALTER statements.  This will occur whenever a database synchronization is run from the AOT or if automated tools are used to update statistics and reorganize/rebuild indexes.
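A quick way to see which objects still carry compression (and which have had it stripped by a rebuild) is to query sys.partitions:

SELECT OBJECT_NAME(p.object_id) AS TableName,
       i.name                   AS IndexName,
       p.data_compression_desc  AS Compression   -- NONE, ROW or PAGE
FROM sys.partitions p
INNER JOIN sys.indexes i ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE OBJECTPROPERTY(p.object_id, 'IsUserTable') = 1
ORDER BY TableName, IndexName;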


Batch Job to Populate SQLSTORAGE



This is where the hacking element comes into play.  In order to ensure that the compressed objects survive an AOT database synchronization, the table that feeds the compression options must be filled correctly.  Also, our index analysis tools are based in the master database, and they will be modified to use the data compression information supplied by the stored procedure.  If you experiment with the form SysSQLSetup and apply compression definitions to a table and a couple of indexes, you will get something like the following in the table SQLSTORAGE (NB: this table can only be accessed from SSMS; it cannot be opened from the AOT).


 


Tables:




Indexes:




 


Each object has two sets of records.  The first sets out each parameter along with its value (ID = 0) and the second provides the full predicate to use in the index rebuild (ID = 1).  In order to generate the correct values, a stored procedure, GetCompressionParametersAx, was created.  This stored procedure is then called through a batch process that processes the result records and merges them into SQLSTORAGE.  The complication here is that some of the table and index IDs had to be found in the main Ax database, while others had to be found in the Ax model database.  For the compression values, 0 = NONE, 1 = PAGE and 2 = ROW.
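As an illustration of the kind of lookup involved, the table ID for a given SQL table can be read from SQLDICTIONARY in the main Ax database (rows with FIELDID = 0 describe the table itself); the corresponding index lookups against the model database are omitted here:

SELECT TABLEID, NAME, SQLNAME
FROM dbo.SQLDICTIONARY
WHERE FIELDID = 0
  AND SQLNAME = 'SALESLINE';   -- example table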


This stored procedure is then called by a batch service controlled by PC_SQLCompressionController.  The details are contained within the attached project xpo, which also includes the full batch job, along with the changes to SysSQLSetupHelper and the two forms SysSqlAdmin and SysSqlSetup (unchanged) for reference.  The service itself is nothing special: it simply calls the stored procedure, finds the matching table buffer in SQLSTORAGE and merges the results.  This prepares the correct data so that the synchronization will succeed.


Modifications to Index Evaluation Jobs



In our shop, we use the SQL Server Maintenance Solution published by Ola Hallengren (https://ola.hallengren.com/).  We have been running the index optimization on user databases daily for over a year without any issues and with considerable performance gains.  The table IndexCompression was added to the master database as part of the first stored procedure above, and its data is now consumed here.  All that is required is to add the compression predicate to the rebuild statement that the routine is building.
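A minimal sketch of the idea follows.  The column names on master.dbo.IndexCompression and the way the rebuild command is assembled inside IndexOptimize are assumptions for illustration; the real change splices the predicate into the WITH clause that the routine already builds.

DECLARE @DatabaseName    sysname = N'MicrosoftDynamicsAX',    -- placeholder values
        @SchemaName      sysname = N'dbo',
        @ObjectName      sysname = N'SALESLINE',
        @IndexName       sysname = N'I_SALESLINE_EXAMPLE',    -- placeholder index name
        @DataCompression nvarchar(10),
        @RebuildCommand  nvarchar(max);

-- Look up the desired compression setting recorded by the analysis procedure.
SELECT @DataCompression = ic.DataCompression
FROM master.dbo.IndexCompression ic                           -- assumed column names
WHERE ic.DatabaseName = @DatabaseName
  AND ic.SchemaName   = @SchemaName
  AND ic.TableName    = @ObjectName
  AND ic.IndexName    = @IndexName;

-- Build the rebuild statement, appending DATA_COMPRESSION when a setting exists.
SET @RebuildCommand = N'ALTER INDEX ' + QUOTENAME(@IndexName) + N' ON '
                    + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@ObjectName)
                    + N' REBUILD PARTITION = ALL WITH (SORT_IN_TEMPDB = OFF'
                    + CASE WHEN @DataCompression IS NOT NULL
                           THEN N', DATA_COMPRESSION = ' + @DataCompression
                           ELSE N'' END
                    + N')';

PRINT @RebuildCommand;   -- the maintenance job executes this; printed here for illustration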


 




 


This revised job has been running on my development machine for about a week without any side effects.


Results



All of this effort would not have been worthwhile if there were no noticeable results.  Calling the Shrink File form in SSMS before compression shows a data file that is completely full.




 


After compression, 48% of the main data file is now free space, giving a net improvement of about 80 GB.




Another experiment in the test environment resulted in a similar improvement of about 50% (or about 100 GB).


Future Steps



Our current configuration is still placed on a single partition.  At some point in the future, archived data should be moved to a separate set of partitions.  This would require some tuning of the main procedure as older data can be page compressed with no impact (archived data should be read only).


 


Appendix I – CompressDatabase Stored Procedure