Using dotnet migrate for project.json to .csproj

Read our post .NET Core Tools 1.0 Migration Workflow, which explains how to use the .NET Core CLI to migrate from the old project.json approach to the new Visual Studio project (a.k.a. MSBuild) approach.

.NET Core Tools 1.0 Migration Workflow

If you’ve been using .NET Core for very long, you have code based on global.json and one or more project.json files.  Several months (a year?) ago, Microsoft announced it was deprecating this approach and returning to Visual Studio Solution (.sln) and Project (.csproj, .fsproj, .vbproj, etc.) files.  This transition also involves changing from a directory-based to a file-based orientation.  For example, attempting dotnet build with the 1.0.1 toolset yields:

MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.

Specifying a project.json file doesn’t work either; it yields:

error MSB4025: The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

After upgrading to the new toolset, migrating your code involves a few steps (a shell sketch follows the list):

  1. Create the VS Solution file.  The migration process is much easier if you create this file first.  Using a shell (cmd, PowerShell, bash, etc.), change to the directory containing the project’s global.json and run dotnet new sln.  The result is a (relatively) empty .sln file.
  2. Migrate the project.json files to VS Project files.  Run dotnet migrate in the same directory as above.  This command recursively finds project.json files, converts them to MSBuild project files (.csproj for C#), and adds references to them in the solution file.
  3. At this point you should be able to restore and build using the .sln.  Recall that the dotnet tool is no longer directory oriented, so include the .sln file in the restore and build commands, e.g., dotnet build <project>.sln.
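
Putting the steps together, the whole flow looks something like this.  This is just a sketch; MyProject.sln stands in for whatever solution name dotnet new sln generates for your directory.

cd path/to/solution-root      # the directory containing global.json
dotnet new sln                # step 1: creates a (relatively) empty .sln named after the directory
dotnet migrate                # step 2: converts each project.json to a .csproj and adds it to the .sln
dotnet restore MyProject.sln  # step 3: restore and build against the .sln, not a directory
dotnet build MyProject.sln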

.NET Core multi-project build change in tasks.json

In earlier versions of .NET Core tooling, building multiple projects was simply a matter of adding them as args in the tasks.json file.

   "tasks": [
        {
            "taskName": "build",
            "command": "dotnet",
            "args": [
                "./J3DI.Domain/",
                "./J3DI.Infrastructure.EntityFactoryFx/",
                "./J3DI.Infrastructure.RepositoryFx/",
   ...

Each directory was a child of the location with global.json, and each had its own project.json. This approach worked very well for producing multiple .NET Core libraries and their associated unit tests.

After migrating this code and changing the references to the specific .csproj files, we found that only one arg is allowed for the dotnet build task. If the args array contains more than one item, the build fails with:

MSBUILD : error MSB1008: Only one project can be specified.

The fix is to reference the Visual Studio Solution File (.sln) in the args.

    "tasks": [
        {
            "taskName": "build",
            "args": [
                "${workspaceRoot}/J3DI.sln"
            ],
    ...

Good News: There’s still a way to build multiple projects by encapsulating them in a .sln file.

Bad News: a .sln file is now required. (IOW, have you ever tried manually creating or managing one?)
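
That said, hand-editing isn’t strictly necessary: the 1.0 CLI can create and maintain the .sln for you. A sketch, using the project names from above (the .csproj paths are assumed):

dotnet new sln -n J3DI                                    # creates J3DI.sln in the current directory
dotnet sln J3DI.sln add ./J3DI.Domain/J3DI.Domain.csproj
dotnet sln J3DI.sln add ./J3DI.Infrastructure.EntityFactoryFx/J3DI.Infrastructure.EntityFactoryFx.csproj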

.NET Core: No Sophisticated Unit Testing, Please!

In my previous post, I wrote about .NET Core’s limitation regarding directory depth.  I explained that I’m trying to create several related Domain-Driven Design packages for J3DI.  One of .NET Core’s strengths is the ability to use exactly what’s needed.  Apps don’t need the entire .NET Framework; they can specify only the packages/assemblies necessary to run.  Since I want J3DI to give developers this same option (only use what is needed), I broke the code down into several aspects.

I’ve enjoyed using Microsoft’s lightweight, cross-platform IDE, Visual Studio Code (VSCode), with this project. It has a nice command palette, good Git integration, etc. But, unfortunately, it appears that a test task can only execute a single test project.

For context, here’s my tasks.json from the .vscode directory:

{
   "version": "0.1.0",
   "command": "dotnet",
   "isShellCommand": true,
   "args": [],
   "tasks": [
      {
         "taskName": "build",
         "args": [ 
            "./J3DI.Domain", 
            "./J3DI.Infrastructure.EntityFactoryFx",
            "./Test.J3DI.Common", 
            "./Test.J3DI.Domain", 
            "./Test.J3DI.Infrastructure.EntityFactoryFx" 
         ],
         "isBuildCommand": true,
         "showOutput": "always",
         "problemMatcher": "$msCompile",
         "echoCommand": true
     },
     {
         "taskName": "test",
         "args": [
            "./Test.J3DI.Domain", 
            "./Test.J3DI.Infrastructure.EntityFactoryFx"
         ],
         "isBuildCommand": false,
         "showOutput": "always",
         "problemMatcher": "$msCompile",
         "echoCommand": true
      }
   ]
}

Notice how the args array for the build task includes five sub-directories. When I invoke the build task from VSCode’s command palette, it builds all five sub-directories in order.

Now look at the test task, which has two sub-directories specified. I thought specifying both would execute the tests in each. Maybe you thought so, too. Makes sense, right? Well, that’s not what happens. When the test task is invoked from VSCode, the actual command invoked is:

running command> dotnet test ./Test.J3DI.Domain ./Test.J3DI.Infrastructure.EntityFactoryFx
...
error: unknown command line option: ./Test.J3DI.Infrastructure.EntityFactoryFx

(BTW, set echoCommand in the appropriate task section to capture the actual command.)

Hmmmm, maybe the build task works differently? Nope. Here’s its output:

running command> dotnet build ./J3DI.Domain ./J3DI.Infrastructure.EntityFactoryFx ./Test.J3DI.Common ./Test.J3DI.Domain ./Test.J3DI.Infrastructure.EntityFactoryFx

Ok, so it seems that dotnet build will process multiple directories, but dotnet test will only process one. To be clear, this is not a bug in VSCode; it’s just spawning the commands per tasks.json. So I thought maybe multiple test tasks could work. I copied the test task into a new section of tasks.json, removed the first directory from the new section, and removed the second directory from the original section. Finally, I set isTestCommand for both sections.

{
   "taskName": "test",
   "args": [ "./Test.J3DI.Domain" ],
...
   "isTestCommand": true
}
,
{
   "taskName": "test",
   "args": [ "./Test.J3DI.Infrastructure.EntityFactoryFx" ],
...
   "isTestCommand": true
}

I hoped this was the magic incantation, but I was once again disappointed. Hopefully Microsoft will change dotnet test to behave like dotnet build. Until then, we’re stuck using shell commands like the one shown in this stackoverflow question (and sketched below).
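
For example, one workaround (a sketch, assuming a bash-style shell and the test project layout above) is to chain the test runs yourself, either at the prompt or in a script invoked by a shell task:

# Run each test project in sequence; && stops at the first failure
dotnet test ./Test.J3DI.Domain && \
dotnet test ./Test.J3DI.Infrastructure.EntityFactoryFx

# Or loop over every Test.* project directory
for proj in ./Test.*/ ; do
    dotnet test "$proj" || break
done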

Try .NET Core, but keep it shallow

I’ve been building a Domain-Driven Design (DDD) framework for .NET Core.  The intent is to allow developers to use only what they need, rather than requiring an entire framework.  The project, J3DI, is available on GitHub (get it? Jedi for DDD?).

The initial layout had 3 projects under src, and 4 under test:

..\J3DI
|   global.json
|   LICENSE
|   NuGet.config
+---src
|   \---J3DI.Domain
|   \---J3DI.Infrastructure.EntityFactoryFx
|   \---J3DI.Infrastructure.RepositoryFactoryFx
\---test
    \---Test.J3DI.Common
    \---Test.J3DI.Domain
    \---Test.J3DI.Infrastructure.EntityFactoryFx
    \---Test.J3DI.Infrastructure.RepositoryFactoryFx

The global.json in J3DI included these projects:

{
   "projects": [
      "src/J3DI.Domain",
      "src/J3DI.Infrastructure.EntityFactoryFx",
      "src/J3DI.Infrastructure.RepositoryFactoryFx",
      "test/Test.J3DI.Common",
      "test/Test.J3DI.Domain",
      "test/Test.J3DI.Infrastructure.EntityFactoryFx",
      "test/Test.J3DI.Infrastructure.RepositoryFactoryFx"
   ]
}

Well, that was a mistake.  After building the src projects, the test projects were not able to find the necessary dependencies from within src.

error: Unable to resolve 'J3DI.Domain (>= 0.1.0)' for '.NETStandard,Version=v1.3'.

Assuming I had something wrong, I tinkered with global.json but couldn’t find any path format that worked.  Finally it dawned on me that dotnet might not be treating the paths as having depth.

So, it turns out, .NET Core only lets you go one level down from global.json (as of versions 1.0.0 and 1.0.1).  After pulling each project up a level, effectively removing the src and test levels, I updated the global.json file.

{
   "projects": [
      "J3DI.Domain",
      "J3DI.Infrastructure.EntityFactoryFx",
      "J3DI.Infrastructure.RepositoryFactoryFx",
      "Test.J3DI.Common",
      "Test.J3DI.Domain",
      "Test.J3DI.Infrastructure.EntityFactoryFx",
      "Test.J3DI.Infrastructure.RepositoryFactoryFx"
   ]
}

After that, dotnet got happy. Magic incantation found!
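
With the flattened layout, the usual project.json-era workflow runs cleanly again.  A sketch, run from the directory containing global.json:

dotnet restore                    # resolves every project listed in global.json
dotnet build ./J3DI.Domain        # the src projects build...
dotnet test ./Test.J3DI.Domain    # ...and the test projects can now resolve J3DI.Domain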

Must-Have Tooling for .NET Core Development

Here’s a great set of tools for smoothing your transition to developing in .NET Core.

IDE

  • VSCode – cross-platform IDE; great for coding .NET Core

Portability

Porting

REST, Versioning & Religion, Part I

Have you been tracking opinions on the “right” way to version REST APIs?  Don’t miss out on the fun!  If you’d like a brief overview of the matrix of possibilities, check out Troy Hunt’s Your API Versioning Is Wrong.  Although it’s an oldie, it’s definitely a goodie!  Reading it will give you a brief glimpse into the religious nature of REST advocates.

In case you’re of the TLDR persuasion, here’s a quick summary:

Premises

  • REST proposes that the URL (sans query) specifies the resource.
  • Services need some manner of versioning.  IOW, it’s impossible to design and implement a perfect service that never changes.

Problems

  • Does that mean that service APIs (URLs) should be versioned?  Well, that depends on your religious views.  Does the resource really change?
  • Oh, so the representations should be versioned? Maybe. It really depends on your religious views.

So stick around to get *all the answers in Part II!

* – refers to opinions you’re likely to disagree with, possibly with religious fervor.

The Beginning of the End of OS’s

Who cares about operating systems anymore? Microsoft’s recent moves toward Linux, along with their emphasis on Azure, should make it clear that OS’s are diminishing in importance.  (cf. Red Hat on Azure, SQL Server on Linux, Bash on Windows)  Breaking with Steve Ballmer’s (misbegotten) approach to Linux, Nadella’s Microsoft realizes that Windows isn’t the center of their universe anymore (and can’t be, considering their inability to convert desktop dominance into mobile devices).

Developer sentiment is another indicator.  Fewer and fewer developers care about the OS.  OS just doesn’t matter as much in a world of Ruby, Python, Node, MEAN, etc.  This trend will accelerate as PaaS providers continue to improve their offerings.

OS’s aren’t going away, but their importance or mind-share is waning broadly.

Is JSON API a REST Anti-Pattern?

JSON API is (at least partially) an anti-pattern of REST.  Its core problem is that it restricts one of REST’s three fundamental concepts, namely the representation of resources. In the Content Negotiation section of the JSON API spec we learn:

  • Clients must send Content-Type: application/vnd.api+json in all request headers
  • Clients are not allowed to use any media type parameters
  • Servers must send Content-Type: application/vnd.api+json in all response headers
  • Servers must reject requests containing media type parameters in Content-Type (return error code 415 – Unsupported Media Type)
  • Servers must reject requests lacking an unadorned Accept header for application/vnd.api+json (return error code 406 – Not Acceptable)
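
As a sketch of what those rules look like on the wire (api.example.com and /articles are hypothetical, not part of the spec):

# A conforming request: the bare media type in both headers
curl -i https://api.example.com/articles \
  -H "Accept: application/vnd.api+json" \
  -H "Content-Type: application/vnd.api+json"

# Adding a media type parameter is exactly what the spec forbids;
# a conforming server should answer this with 415 Unsupported Media Type
curl -i -X POST https://api.example.com/articles \
  -H "Accept: application/vnd.api+json" \
  -H "Content-Type: application/vnd.api+json; version=2" \
  -d '{"data": {"type": "articles", "attributes": {"title": "Hello"}}}'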

In other words, application/vnd.api+json is the only representation allowed.  This restriction may be temporary – v1 spec indicates these requirements “exist to allow future versions of this specification to use media type parameters for extension negotiation and versioning.”  Will the restrictions be lifted in v1.1, v2.0, v3.0?

So What?

“Ok, so JSON API is overly restrictive on representations.  Big deal.  Why should I care?”  As always “it depends” (typical, right?).  Teams meeting the following criteria may not need to care about this issue:

  • Simple / Single Application – if the application is single purpose and the service is not expected to serve multiple clients or client types
  • JSON Only – if the application is never expected to provide media formats other than JSON
  • Simple Representations – if the application is never expected to provide different representations; IOW, if media type parameters will always be sufficient

Does SLA Impact DSA?

When potential customers are considering your company’s products, naturally everyone wants to put their best foot forward.  When they ask about Service Level Agreements (SLAs), it can be easy to promise a little too much.  “Our competitor claims four nines (99.99%) up-time; we’d better say the same thing.”  No big deal, right?  Isn’t it just a matter of more hardware?

Not so fast.  Many people are surprised to learn that increasing nines is much more complicated than “throwing hardware at the problem.”  An appropriately designed Distributed System Architecture (DSA) takes availability and other SLA elements into account, so going from three nines to four often has architectural impacts which may require substantial code changes, multiple testing cycles, etc.

Unfortunately, SLAs are often defined reactively after a system is in production.  Sometimes an existing or a potential customer requires it, sometimes a system outage raises attention to it, and so on.

For example, consider a website or web services hosted by one web server and one database server.  Although this system lacks any supporting architecture, it can probably maintain two nines on a monthly basis.  Since two nines allows for about 7 hours of downtime per month, engineers can apply application updates and security patches, and even reboot the systems.

Three nines allows for just 43.8 minutes per month.  If either server goes down for any reason, even for a reboot after patches, the risk of missing the SLA is very high.  If the original application architecture planned for multiple web servers, adding more may help reduce this risk, since updating in rotation becomes possible.  But updating the database server still requires tight coordination with very little room for error.  The SLA will probably be missed if an unplanned database server outage occurs.
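
For the arithmetic behind those numbers, here’s a quick back-of-the-envelope check for a 30-day (720-hour) month:

# Downtime budget = (1 - availability) x period
awk 'BEGIN { printf "two nines   (99%%)  : %.1f hours of allowed downtime\n",   0.01  * 720 }'
awk 'BEGIN { printf "three nines (99.9%%): %.1f minutes of allowed downtime\n", 0.001 * 720 * 60 }'
# -> 7.2 hours and 43.2 minutes (the commonly quoted 43.8 minutes uses an average 30.44-day month)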

This scenario barely scratches the surface of the difficulties involved in increasing just one aspect (availability) of an SLA.  Yet it also highlights the necessity of defining SLAs early and architecting the system accordingly.  Product Managers/Planners: take time in the beginning to document system expectations for the SLA.  System Architects: regardless of the SLA, use DSA to accommodate likely increases in expectations in the future.