Is global.json irrelevant (or dead)?

If you followed our .NET Core 1.0.1 Migration Workflow post, you may have noticed that global.json was deleted as part of the migration.  Why?

Previously, global.json was used primarily for its projects property, which specified the directories containing project.json files.  In effect, this property provided the same behavior as a Visual Studio Solution file (.sln).  So the migration process replaced global.json with a .sln file.

Less frequently used, however, was global.json's sdk property, which specifies which SDK version (installed under $env:ProgramFiles\dotnet\sdk) the dotnet command should use.  While the projects property is now ignored, the sdk property is still honored.

Consider this global.json:

{
    "projects": [
        "foo", "bar", "Does Not Exist"
    ],
    "sdk": { "version": "1.0.3" }
}
Executing dotnet restore or dotnet build succeeds even though the directories listed in projects do not exist.  That property is clearly ignored.  Changing the sdk version number, however, changes which tooling the dotnet command uses.


Interested in more details? See How to use Global.json in the .NET Core Tools 1.0 world.


How to use global.json in the .NET Core Tools 1.0 world

The .NET Core Tools 1.0 release officially made the switch from using global.json and project.json to using Visual Studio Solution and Project files, respectively.  See Microsoft’s .NET Core Tools 1.0 announcement and our migration guide.

Global.json is not completely useless in this new world, however.  Importantly, you can use it to control which tooling version the dotnet command uses.

Controlling .NET Core Tooling Version

What if you need to create a .NET Core project with older tooling?  Your team may still need to use project.json for some period of time, your builds may not have been updated yet, or you may just need it for testing purposes.  Instead of creating a VM or going through some other heavyweight procedure, just use global.json!  Add the following global.json to an empty directory:

{
    "sdk": { "version": "1.0.0-preview2-003133" }
}

NOTE: This json refers to the 1.0.0-preview2-003133 tooling. If this specific version doesn’t work on your machine, check %ProgramFiles%\dotnet\sdk to see which tooling versions are installed.

Now enter dotnet --help on the command line from within the same directory.  Notice that the first line of output reads:

.NET Command Line Tools (1.0.0-preview2-003133)

Whereas entering dotnet --help from within a different directory (i.e., sans global.json) produces:

.NET Command Line Tools (1.0.3)
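If you are not sure which versions you can pin to, list the sdk directory mentioned in the NOTE above.  On a machine with the two versions used in this post installed, the output would look something like:

> dir /b "%ProgramFiles%\dotnet\sdk"
1.0.0-preview2-003133
1.0.3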

Going Back In Time – The Easy Way!

Now, go back to the directory with the global.json file and enter dotnet new -t Console.  Since global.json references the older tooling version, this command creates a project.json file, just like the early days of .NET Core!
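The directory contents afterwards look something like this (the exact files may vary slightly by preview version):

> dir /b
global.json
Program.cs
project.json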


Note also that the file dates are set to 9/20/2016 – seemingly the date of the tooling’s binaries in %ProgramFiles%\dotnet\sdk\1.0.0-preview2-003133.

Using dotnet migrate for project.json to .csproj

Read our post at .NET Core 1.0.1 Migration Workflow, which explains using the .NET Core CLI to migrate from the old project.json approach to the new Visual Studio Project (a.k.a. MSBuild) approach.

.NET Core Tools 1.0 Migration Workflow

If you've been using .NET Core for very long, you have code based on global.json and one or more project.json files.  Several months (a year?) ago, Microsoft announced the deprecation of this approach and a return to Visual Studio Solution (.sln) and Project (.csproj, .fsproj, .vbproj, etc.) files.  This transition also involves changing from a directory-based to a file-based orientation.  For example, attempting dotnet build with the 1.0.1 toolset yields:

MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.

Specifying a project.json file doesn’t work either; it yields:

error MSB4025: The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

After upgrading to the new toolset, migrating code involves a few steps:

  1. Create a VS Solution file.  The migration process is much easier if you create this file first.  Using a shell (cmd, PowerShell, bash, etc.), change to the directory containing the project's global.json and run dotnet new sln.  The result is a (relatively) empty .sln file.
  2. Migrate project.json files to VS Project files.  Run dotnet migrate in the same directory as above.  This command will recursively find project.json files, convert them to C# project files, and add references to them in the solution file.
  3. At this point you should be able to restore and build using the .sln.  Recall that the dotnet tool is no longer directory-oriented; include the .sln file in the restore and build commands, e.g., dotnet build <project>.sln, as sketched below.
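Putting the steps together, the migration boils down to a handful of commands.  This is a sketch assuming the 1.0.x CLI; MyRepo.sln is a placeholder for whatever name dotnet new sln generates (by default, the current directory's name):

cd path\to\repo-root        # the directory that used to contain global.json
dotnet new sln              # step 1: creates the (relatively) empty solution file
dotnet migrate              # step 2: converts project.json files and adds them to the .sln
dotnet restore MyRepo.sln   # step 3: restore against the solution file
dotnet build MyRepo.sln     # step 3: build against the solution file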


.NET Core multi-project build change in tasks.json

In earlier versions of .NET Core tooling, building multiple projects was simply a matter of adding them as args in the tasks.json file.

   "tasks": [
        {
            "taskName": "build",
            "command": "dotnet",
            "args": [
                "./J3DI.Domain/",
                "./J3DI.Infrastructure.EntityFactoryFx/",
                "./J3DI.Infrastructure.RepositoryFx/",
   ...

Each directory was a child of the location with global.json, and each had its own project.json. This approach worked very well for producing multiple .NET Core libraries and their associated unit tests.

After migrating this code and changing the references to the specific .csproj files, we found that only one arg is allowed for the dotnet build task.  If the args array contains more than one item, the build fails with:

MSBUILD : error MSB1008: Only one project can be specified.

The fix is to reference the Visual Studio Solution File (.sln) in the args.


    "tasks": [
        {
            "taskName": "build",
            "args": [
                "${workspaceRoot}/J3DI.sln"
            ],
    ...
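Pieced together with the surrounding fields from the pre-migration version, the whole tasks.json ends up looking roughly like this (the solution name is ours; substitute your own):

{
    "version": "0.1.0",
    "command": "dotnet",
    "isShellCommand": true,
    "tasks": [
        {
            "taskName": "build",
            "args": [
                "${workspaceRoot}/J3DI.sln"
            ],
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$msCompile"
        }
    ]
}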

Good News: There’s still a way to build multiple projects by encapsulating them in a .sln file.

Bad News: Visual Studio required. (IOW, ever tried manually creating or managing a .sln?)

.NET Core: No Sophisticated Unit Testing, Please!

In my previous post, I wrote about .NET Core's limitation regarding directory depth.  I explained that I'm trying to create several related Domain-Driven Design packages for J3DI.  One of .NET Core's strengths is the ability to use exactly what's needed.  Apps don't need the entire .NET Framework; they can specify only the packages/assemblies necessary to run.  Since I want J3DI to give developers this same option (use only what is needed), I broke the code down into several aspects.

I’ve enjoyed using Microsoft’s lightweight, cross-platform IDE, Visual Studio Code (VSCode), with this project. It has a nice command palette, good Git integration, etc. But, unfortunately, it appears that VSCode will only execute a single test project.

For context, here’s my tasks.json from the .vscode directory:

{
   "version": "0.1.0",
   "command": "dotnet",
   "isShellCommand": true,
   "args": [],
   "tasks": [
      {
         "taskName": "build",
         "args": [ 
            "./J3DI.Domain", 
            "./J3DI.Infrastructure.EntityFactoryFx",
            "./Test.J3DI.Common", 
            "./Test.J3DI.Domain", 
            "./Test.J3DI.Infrastructure.EntityFactoryFx" 
         ],
         "isBuildCommand": true,
         "showOutput": "always",
         "problemMatcher": "$msCompile",
         "echoCommand": true
     },
     {
         "taskName": "test",
         "args": [
            "./Test.J3DI.Domain", 
            "./Test.J3DI.Infrastructure.EntityFactoryFx"
         ],
         "isBuildCommand": false,
         "showOutput": "always",
         "problemMatcher": "$msCompile",
         "echoCommand": true
      }
   ]
}

Notice that the args for the build task include 5 sub-directories. When I invoke this build task from VSCode's command palette, it builds all 5 sub-directories in order.

Now look at the test task which has 2 sub-directories specified. I thought specifying both would execute the tests in each. Maybe you thought so, too. Makes sense, right? Well, that’s not what happens. When the test task is invoked from VSCode, the actual command invoked is:

running command> dotnet test ./Test.J3DI.Domain ./Test.J3DI.Infrastructure.EntityFactoryFx
...
error: unknown command line option: ./Test.J3DI.Infrastructure.EntityFactoryFx

(BTW, set echoCommand in the appropriate task section to capture the actual command.)

Hmmmm, maybe the build task works differently? Nope. Here’s its output:

running command> dotnet build ./J3DI.Domain ./J3DI.Infrastructure.EntityFactoryFx ./Test.J3DI.Common ./Test.J3DI.Domain ./Test.J3DI.Infrastructure.EntityFactoryFx

OK, so it seems that dotnet build will process multiple directories, but dotnet test will only process one. To be clear, this is not a bug in VSCode; it's just spawning the commands as specified in tasks.json. So I thought maybe multiple test tasks could work. I copied the test task into a new section of tasks.json, removed the first directory from the new section, and removed the second directory from the original section. Finally, I set isTestCommand for both sections.

{
   "taskName": "test",
   "args": [ "./Test.J3DI.Domain" ],
...
   "isTestCommand": true
}
,
{
   "taskName": "test",
   "args": [ "./Test.J3DI.Infrastructure.EntityFactoryFx" ],
...
   "isTestCommand": true
}

I hoped this was the magic incantation, but I was once again disappointed. Hopefully Microsoft will change dotnet test to behave like dotnet build. Until then, we're stuck using shell commands like the one shown in this Stack Overflow question.
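For completeness, the shape of that shell-based workaround is simply one dotnet test invocation per test project, chained by a script rather than by the task runner. A sketch (bash shown; a .cmd equivalent is just as straightforward):

#!/bin/sh
# Run each test project in its own dotnet test invocation;
# stop at the first failure so the overall exit code stays meaningful.
dotnet test ./Test.J3DI.Domain && \
dotnet test ./Test.J3DI.Infrastructure.EntityFactoryFx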

Try .NET Core, but keep it shallow

I've been building a Domain-Driven Design (DDD) framework for .NET Core.  The intent is to allow developers to use only what they need, rather than requiring an entire framework.  The project, J3DI, is available on GitHub (get it? Jedi for DDD?).

The initial layout had 3 projects under src, and 4 under test:

..\J3DI
|   global.json
|   LICENSE
|   NuGet.config
+---src
|   \---J3DI.Domain
|   \---J3DI.Infrastructure.EntityFactoryFx
|   \---J3DI.Infrastructure.RepositoryFactoryFx
\---test
    \---Test.J3DI.Common
    \---Test.J3DI.Domain
    \---Test.J3DI.Infrastructure.EntityFactoryFx
    \---Test.J3DI.Infrastructure.RepositoryFactoryFx

The global.json in J3DI included these projects:

{
   "projects": [
      "src/J3DI.Domain",
      "src/J3DI.Infrastructure.EntityFactoryFx",
      "src/J3DI.Infrastructure.RepositoryFactoryFx",
      "test/Test.J3DI.Common",
      "test/Test.J3DI.Domain",
      "test/Test.J3DI.Infrastructure.EntityFactoryFx",
      "test/Test.J3DI.Infrastructure.RepositoryFactoryFx"
   ]
}

Well, that was a mistake.  After building the src projects, the test projects were not able to find the necessary dependencies from within src.

error: Unable to resolve 'J3DI.Domain (>= 0.1.0)' for '.NETStandard,Version=v1.3'.

Assuming I had something wrong, I tinkered around in global.json, but couldn’t find the magical incantation of path string format.  Finally it dawned on me that dotnet might not be treating the path as having depth.

So, it turns out, .NET Core only lets you go one level down from global.json (as of versions 1.0.0 and 1.0.1).  After pulling each project up a level, effectively removing the src and test levels, I updated the global.json file.

{
   "projects": [
      "J3DI.Domain",
      "J3DI.Infrastructure.EntityFactoryFx",
      "J3DI.Infrastructure.RepositoryFactoryFx",
      "Test.J3DI.Common",
      "Test.J3DI.Domain",
      "Test.J3DI.Infrastructure.EntityFactoryFx",
      "Test.J3DI.Infrastructure.RepositoryFactoryFx"
   ]
}

After that, dotnet got happy. Magic incantation found!

Must-Have Tooling for .NET Core Development

Here’s a great set of tools for smoothing your transition to developing in .NET Core.

IDE

  • VSCode – cross-platform IDE; great for coding .NET Core

Portability

Porting

Does SLA Impact DSA?

When potential customers are considering your company's products, naturally everyone wants to put their best foot forward.  When they ask about Service Level Agreements (SLAs), it can be easy to promise a little too much.  “Our competitor claims four nines (99.99%) uptime; we'd better say the same thing.”  No big deal, right?  Isn't it just a matter of more hardware?

Not so fast.  Many people are surprised to learn that increasing nines is much more complicated than “throwing hardware at the problem.” Appropriately designed Distributed System Architecture (DSA) takes availability and other SLA elements into account, so going from three nines to four often has architectural impacts which may require substantial code changes, multiple testing cycles, etc.

Unfortunately, SLAs are often defined reactively, after a system is already in production.  Sometimes an existing or potential customer requires one, sometimes a system outage draws attention to the issue, and so on.

For example, consider a website or web service hosted by one web server and one database server.  Although this system lacks any supporting architecture, it can probably maintain two nines on a monthly basis.  Since two nines allows for roughly 7 hours of downtime per month, engineers can apply application updates and security patches, and even reboot the systems.


Three nines allows for just 43.8 minutes per month.  If either server goes down for any reason, even for a reboot after patches, the risk of missing the SLA is very high.  If the original application architecture planned for multiple web servers, adding more may help reduce this risk, since updating in rotation becomes possible.  But updating the database server still requires tight coordination with very little room for error, and the SLA will probably be missed if an unplanned database server outage occurs.
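For reference, here is the arithmetic behind those allowances, using an average month of about 30.44 days (43,833.6 minutes):

allowed downtime per month ≈ 43,833.6 minutes × (1 − availability)
  two nines   (99%)    ≈ 438 minutes (about 7.3 hours)
  three nines (99.9%)  ≈ 43.8 minutes
  four nines  (99.99%) ≈ 4.4 minutes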

This scenario hardly scratches the surface of the difficulties involved in increasing just one aspect (availability) of an SLA.  Yet it highlights the necessity of defining SLAs early and architecting the system accordingly.  Product Managers/Planners: take time in the beginning to document system expectations for the SLA.  System Architects: regardless of the SLA, use DSA to accommodate likely expectation increases in the future.

Perils of Async: Locking Out Performance

In a previous post, Perils of Async: Data Corruption, we saw the consequences of inadequate concurrency control in asynchronous code.  The first implementation using Parallel.ForEach did not protect shared data, and its results were wrong.  The corrected implementation used C#’s lock for the necessary protection from concurrent access.

Parallel.ForEach(input, kvp =>
{
    if (0 == kvp.Value % 2)
    {
        lock (mrr)
        {
            ++mrr.Evens;
        }
    }
    else
    {
        lock (mrr)
        {
            ++mrr.Odds;
        }
    }
    if (true == AMT.Math.IsPrime.TrialDivisionMethod(kvp.Value))
    {
        lock (mrr)
        {
            ++mrr.Primes;
        }
    }
});

Some may ask, “Why lock so many times? Can’t the code just lock once inside the loop?”

Parallel.ForEach(input, kvp =>
{
    lock (mrr)
    {
        if (0 == kvp.Value % 2)
        {
            ++mrr.Evens;
        }
        else
        {
            ++mrr.Odds;
        }
        if (true == AMT.Math.IsPrime.TrialDivisionMethod(kvp.Value))
        {
            ++mrr.Primes;
        }
    }
});

Moving the lock just above the first if clause does seem to have some benefits: it simplifies the code, and shared data access is still synchronized.  But it also kills performance, making it slower than even the non-parallel SerialMapReduceWorker.

9999999 of 9999999 input values are unique
[SerialMapReduceWorker] Evens: 5,000,533; Odds: 4,999,466; Primes: 244,703; Elapsed: 00:00:51.6998025
[ParallelMapReduceWorker] Evens: 5,000,533; Odds: 4,999,466; Primes: 244,703; Elapsed: 00:00:30.6871152
[ParallelMapReduceWorker_SingleLock] Evens: 5,000,533; Odds: 4,999,466; Primes: 244,703; Elapsed: 00:01:35.0778434

This situation highlights the common rule of thumb, “lock late.”  Locking late (or “low” in the code) means the code should acquire the lock just before accessing shared data and release it just afterwards.  This approach reduces the amount of code that executes while the lock is held, giving the contenders (the threads) more opportunities to acquire the lock.
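As a footnote (this is not part of the measurements above): when the shared state is nothing more than a set of counters, the lock-late idea can be pushed all the way down to atomic operations, eliminating the lock object entirely.  Here is a minimal, self-contained sketch using Interlocked, simplified to evens/odds counting rather than the post's full MapReduce example:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class InterlockedCountersSketch
{
    static void Main()
    {
        int[] input = Enumerable.Range(1, 1000000).ToArray();
        long evens = 0, odds = 0;

        Parallel.ForEach(input, value =>
        {
            // Interlocked.Increment is an atomic read-modify-write, so no lock is held at all.
            if (value % 2 == 0)
                Interlocked.Increment(ref evens);
            else
                Interlocked.Increment(ref odds);
        });

        Console.WriteLine($"Evens: {evens:N0}; Odds: {odds:N0}");
    }
}

The principle is the same either way: keep the protected region as small as the shared data access itself.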