Bill Blogs in C#

Created: 5/3/2017 8:37:36 PM

Consider these two methods:

 

public Task DoWorkAsync()
{
    var arg1 = ComputeArg();
    var arg2 = ComputeArg();
    return AwaitableMethodAsync(arg1, arg2);
}

public async Task DoWork2Async()
{
    var arg1 = ComputeArg();
    var arg2 = ComputeArg();
    await AwaitableMethodAsync(arg1, arg2);
}

 

Do you notice the difference?

The first is a synchronous method that returns a Task. The Task may or may not have completed when the method returns. The second is an async method that returns the result of awaiting other work.

These two methods look almost the same, but the code generated by the compiler for them is very different. These two talks on InfoQ, by Jon Skeet and me, go into all the gory details about the differences.

You should prefer writing the first version whenever possible. The method is much simpler, and much easier to reason about. It’s a synchronous method that returns an object that represents work that may be ongoing.

The second is more complicated. It builds a state machine. It manages re-entrancy for code that should execute when the awaited task finishes. It returns. It resumes execution. It’s difficult to reason about.

You can write the first version for any task-returning method that could be a synchronous method. That’s the case when:

  • The method does no work after the only task-returning method is called.
  • The return type of that task-returning method matches the return type of this method.

The curious case of IDisposable

Now, let’s look at a variation of the two methods above:

 

public Task DoWorkAsync()
{
    using (var service = new Service())
    {
        var arg1 = ComputeArg();
        var arg2 = ComputeArg();
        return service.AwaitableMethodAsync(arg1, arg2);
    }
}

public async Task DoWork2Async()
{
    using (var service = new Service())
    {
        var arg1 = ComputeArg();
        var arg2 = ComputeArg();
        await service.AwaitableMethodAsync(arg1, arg2);
    }
}

 

Can you spot the difference? Can you spot the bug? The introduction of a local variable that refers to an object that implements IDisposable means you must use the second version, where the compiler generates the state machine and a continuation.

I gave a hint as to the reason in the first description. The first method is synchronous. There are no continuations. The service object is Disposed() as soon as AwaitableMethodAsync() returns, whether or not the async work has completed. The compiler-generated finally clause executes before the method returns the (possibly still running) task. There is a high probability that this idiom results in an ObjectDisposedException in some cases.

In the asynchronous method, the compiler generates the code so that its finally clause executes only after the task returned from AwaitableMethodAsync() completes. The service is Disposed only when it has finished all its work.

Note that my explanation of when you can write the synchronous version still holds: because of the compiler-generated finally clause, there is code that must execute after the task completes. It’s just not easily visible in your source code.

Testing for this case

This condition can be hard to catch in automated unit tests. (In fact, the error I introduced this week was not caught by the unit tests in the library I was working on.) Often we write unit tests for asynchronous methods using mocks that always complete synchronously, returning Task.FromResult(). These tests are fine, and verify that the fast path works correctly.

You should also write tests that verify the slow path, where a Task does not complete synchronously. It doesn’t have to be measurably slow. Just sprinkle an ‘await Task.Yield()’ statement into your mock implementation and you will force the slow path.
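For example, here’s a minimal mock sketch, assuming the service from the snippets above is consumed through an interface (the IService and SlowFakeService names are mine, not from the original library):

using System.Threading.Tasks;

public interface IService
{
    Task AwaitableMethodAsync(int arg1, int arg2);
}

// A test double that never completes synchronously.
public class SlowFakeService : IService
{
    public async Task AwaitableMethodAsync(int arg1, int arg2)
    {
        // Forces callers onto the slow path: this method returns an
        // incomplete Task, and the continuation runs later. A Dispose()
        // that runs too early will now surface in the test.
        await Task.Yield();
    }
}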

Yes, that bug I introduced is fixed. It’s also now caught by a test.

Tags: C#
Created: 1/23/2017 12:24:21 PM

I had the pleasure of speaking at NDC London again this year. I gave two talks this year.

First, a discussion of controversial C# Language design decisions. In this talk, I explained the rationale behind some of the more controversial decisions made by the C# language design team. I discussed two new features: local functions, and the extensions to the switch statement for pattern matching. The rest of the list contained overload resolution and base classes, XML literals, var and implicitly typed variables, nested scopes, and declaring base classes and interfaces on partial classes.

The second talk was a deep tour of Pattern matching in C# 7. In this talk, I went through the new syntax for patterns, the extensions to the is expression, and the switch statement. I discussed the patterns that are currently supported: the constant pattern, the type pattern, and the var pattern. We closed with a look at some of the specified features that may be considered for future releases.

Slides and demos are available on my OneDrive.

Created: 12/7/2016 6:58:41 PM

I’m excited to announce that the 3rd edition of “Effective C#” is coming out this month. Just in time for a Christmas gift for that developer on your list.

This is the first major milestone in a large project: I’m updating both “Effective C#” and “More Effective C#”. The first edition of “More Effective C#” was released alongside C# 3. A number of the items covered LINQ and related language features. The second edition of “Effective C#” came out a few years later, with C# 4. The new areas were dynamic support and the PLINQ libraries.

Then, I waited. I did not want to make more updates until Roslyn came out. The switch to the Roslyn compiler came  with C# 6.  Both books are somewhat out of date. They also aren’t organized for developers who did not start using C# with one of the earliest versions. The existing versions were organized based on the version that was current when the book was released.

I took the opportunity afforded by updating both books to reorganize the content (all 100 items in both books). I believe that Effective C#, and the upcoming edition of More Effective C# are more approachable to experienced developers whose C# experience is only with the more recent versions.

Yes, that does mean I’m currently working on updating More Effective C#. I’ll announce it here when it is ready.

You can pre-order Effective C# directly from InformIT or from Amazon.

Created: 11/3/2016 6:59:06 PM

I’m quite thrilled to announce that I’ll be speaking at NDC London this coming January. I’ve got two talks, one very practical, and one a fun technical exploration.

 

First, there’s a Deep Dive into C# Pattern Matching. Pattern Matching in C# 7 will change the way you code in C#. You’re gaining powerful new tools for many different idioms. In this session, I’ll explain why this feature was added. You’ll see lots of examples of different types of patterns you can work with, and we’ll discuss some of the initial guidance for using these features.

Second, there’s a discussion and critique of different language features. I’ll discuss some of the features that initially appear counter-intuitive, explain the thinking behind the decisions, and why they were made. You’ll have plenty of time to provide counterarguments and explain why these features still give you fits.

Overall, the NDC events are some of my favorites. The attendees are some of the most knowledgeable people at any conference. They have deep knowledge of their chosen platform, and strong knowledge of software development in general. Every speaker I know considers deeper talks for NDC than many other conferences. It’s a fantastic learning opportunity, and I highly recommend it.

Visit the NDC site (link above) and explore the entire program. You’ll see why I’m very honored to be included in the program. I hope to see you there.

Created: 10/6/2016 6:24:06 PM

I’ve been working with Docker on both Windows and the Mac these past few weeks. Everything I’ve been doing is command line based. In this post, I list the commands I use most often, with the options I need the most.

Disclaimer: This is not meant to be a complete Docker reference. It’s a quick way to remember the commands and options I use most often. Your mileage may vary. All of these topics are covered in more depth on the Docker site. For specific .NET content, check out the .NET Core and .NET Framework content for running .NET applications in Docker.

As you start working with Docker, you must be able to distinguish between Docker images and Docker containers. A great way to explain it to C# developers is that images correspond to classes and containers correspond to objects. An image provides a template for containers. A container is a running copy of an image.

Creating and Managing Images

You build images with “docker build”. That command reads a “Dockerfile” that describes the image you want built. Typically, I use:

docker build -t <ImageName> .

This command builds your image. The ‘-t’ argument lets you tag the image. The ‘.’ is a required argument that specifies the build context: the directory containing the Dockerfile.

You look at your catalog of images using:

docker images

This command lists the images and their tags.

When you no longer need an image, you use:

docker rmi <ImageName>

The <ImageName> is the name you gave to this image when you built it.

Starting and Stopping Containers

You launch a container using docker run. I usually specify these arguments:

docker run -d -p 80:8000 --name <ContainerName> <ImageName>

‘-d’ means your container runs in detached mode, in the background. The ‘-p’ argument specifies a port mapping. The first port is the port used on the host, and the second is the port used in the container. In the example above, your application would be listening on port 8000 inside the container, and external processes would make requests on port 80. Docker maps those requests.

I use the ‘--name’ argument in development environments to give a name to my containers. Docker assigns each container an ID (basically a SHA), and you must use that ID to manage a container unless you have given it a name. A name makes it easier for me to manage containers. The last argument is the name of the image to start.

To see your running containers, use:

docker ps

By default, this only shows running containers. If you want to see all containers, including stopped ones, use the ‘-a’ argument.

To stop the container, use:

docker stop <ContainerNameOrSHA>

Stopping doesn’t remove a container. (You can still see it with ‘docker ps –a’). If you want to restart a container that you have stopped, use:

docker start <ContainerNameOrSHA>

To remove a container that you have stopped, use:

docker rm <ContainerNameOrSHA>

When you remove a container, you can reuse the name you specified in docker run.

Those are the commands and options I use most often when I’m working with Docker. Hope that helps.

Tags: C#
Created: 8/2/2016 3:52:00 PM

Last week, I posted a puzzle on twitter. Fill in the GetSpecialSequence() method such that Any() returns false, and All() returns true.  Here’s the code from the puzzle:

 

using static System.Console;
using System.Collections.Generic;
using System.Linq;

namespace AnyAndAll
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var sequence = GetSpecialSequence();

            // Prints "True"
            WriteLine(sequence.All(b => b == true));

            // Prints "False"
            WriteLine(sequence.Any(b => b == true));
        }

        private static IEnumerable<bool> GetSpecialSequence()
        {
            // Puzzle: What should this implementation be?
            return null;
        }
    }
}

 

The answer is simply to return the empty sequence:

 

private static IEnumerable<bool> GetSpecialSequence()
{
    return Enumerable.Empty<bool>();
}

Let’s discuss why this is both important, and a good thing.

True for “All None of them”

Several developers on twitter did get the right answer to the puzzle. Many of them also responded by saying how they disliked this behavior, and why it was confusing.

The behavior is correct if you understand set theory from mathematics:

  • Any() returns true when at least one item in the set satisfies the condition. It follows that for the empty set it always returns false.
  • All() returns true when all of the items in the set satisfy the condition. It follows that for the empty set it always returns true.

Let’s drill into that second statement: All 0 elements in the empty set satisfy the condition. Stated another way, there are no elements in the set that do not satisfy the condition.

The importance of “All None of Them”

Let’s discuss how this bit of set theory can make our lives as developers easier.

If you know that a set of data either satisfies a condition or is empty, you can often simplify the conditional branching in your code. In the C# spec meetings, this came up as a proposal for simplifying the rules around Definite Assignment. This code does not generate an error, even though it appears to use a variable before it has been assigned:

int x;
if (false)
    WriteLine(x);

 

There are two rules in the existing spec that define this behavior.  Paraphrasing, one says that a variable ‘v’ is definitely assigned at the beginning of any unreachable statement. The other (more complicated rule) says that v is definitely assigned if and only if v is definitely assigned on all control flow transfers that target the beginning of the statement.

Read it carefully with set theory in mind, and you see that the second rule makes the first rule redundant: if a variable is definitely assigned on all (zero) control flow transfers that target an unreachable statement, that variable is definitely assigned there. We could remove similar redundancies in this section of the spec by finding other rules that effectively create an empty set of possible paths, and clearing them out.

Consider how this might apply in your work. Look for logic in your code that finds data satisfying particular criteria and then branches based on that data. Can you simplify that code if the empty-set case “just works” the same as the other paths? Can you find other locations in code where “for all items” also works for the interesting case of “for all none of these items”?
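As an illustration (my example, not from the original post), here is a hypothetical validation routine written two ways; the version built on All() needs no special case for an empty batch:

using System.Collections.Generic;
using System.Linq;

public static class OrderValidation
{
    // With an explicit empty-set branch.
    public static bool AllOrdersValidVerbose(IEnumerable<int> quantities)
    {
        if (!quantities.Any())
            return true;                  // special case: nothing to check
        foreach (var quantity in quantities)
            if (quantity <= 0)
                return false;
        return true;
    }

    // Relying on All() returning true for the empty sequence.
    public static bool AllOrdersValid(IEnumerable<int> quantities)
    {
        return quantities.All(quantity => quantity > 0);
    }
}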

 

I hope you enjoyed the puzzle. If you want another brain teaser, work out why this solution satisfies the original puzzle.

Hat tip to Neal Gafter for doing the work to simplify (and correct) the Definite Assignment section of the C# spec.

Hat tip to Evan Hauck for the most interesting solution to the puzzle.

Tags: C#
Created: 7/26/2016 7:43:06 PM

This is the story of a C# language specification change.

The specification changed because the 1.0 version of the C# spec disagreed with itself, and one of those locations was incorrect and broke a feature.

 

The change is in the section on “Conditional Logic Operators”.  Version 1 of the spec states:

  • The operation x && y corresponds to the operation x & y, except that y is evaluated only if x is true.
  • The operation x || y corresponds to the operation x | y, except that y is evaluated only if x is false.

The later versions (starting with version 3) state:

  • The operation x && y corresponds to the operation x & y, except that y is evaluated only if x is not false.
  • The operation x || y corresponds to the operation x | y, except that y is evaluated only if x is not true.

Why the change?

Well, a couple sections later, the spec defines “User-Defined Conditional Logical Operators”. The C# Language does not allow you to create a user defined operator && or operator ||. Instead, you must define operator |, operator &, operator true and operator false. Here is the pertinent text:

 

The && and || operation is evaluated by combining the user-defined operator true or operator false with the selected user-defined operator:

  • The operation x && y is evaluated as T.false(x) ? x : T.&(x, y), … In other words, x is first evaluated and operator false is invoked on the result to determine if x is definitely false. Then, if x is definitely false, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator & is invoked on the value previously computed for x and the value computed for y to produce the result of the operation.
  • The operation x || y is evaluated as T.true(x) ? x : T.|(x, y), … In other words, x is first evaluated and operator true is invoked on the result to determine if x is definitely true. Then, if x is definitely true, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator | is invoked on the value previously computed for x and the value computed for y to produce the result of the operation.

The key points here are that for operator &&, x is checked to ensure that it is not false, and for operator ||, x is checked to ensure that it is not true.

Why the spec had to change

Version 1.0 of the spec had a serious limitation. User-defined types that defined operator & or operator | would work with && and || if and only if operator true and operator false were defined such that exactly one of them returned true at all times.

Nothing in the language mandates that explicitly.

As a thought exercise, suppose you have a type that may be neither true nor false in some states. Maybe there are ranges of “true” and “false” and a range in between of “neither true nor false”.

If you want a concrete example, consider a type that has a single byte field. Its operator true returns true when all bits are 1. Its operator false returns true when all bits are 0. For any value in between, evaluating ‘x && y’ or ‘x || y’ requires both the left and right side of the operator. No short circuiting is possible.
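Here’s a hypothetical sketch of such a type (my illustration, not from the original post):

public struct FuzzyByte
{
    private readonly byte bits;
    public FuzzyByte(byte bits) { this.bits = bits; }

    // "Definitely true" only when every bit is set.
    public static bool operator true(FuzzyByte x) { return x.bits == 0xFF; }
    // "Definitely false" only when every bit is clear.
    public static bool operator false(FuzzyByte x) { return x.bits == 0x00; }

    public static FuzzyByte operator &(FuzzyByte x, FuzzyByte y)
    {
        return new FuzzyByte((byte)(x.bits & y.bits));
    }

    public static FuzzyByte operator |(FuzzyByte x, FuzzyByte y)
    {
        return new FuzzyByte((byte)(x.bits | y.bits));
    }
}

For a value such as 0x0F, operator true and operator false both return false, so under the corrected wording neither ‘x && y’ nor ‘x || y’ can short circuit: both operands are evaluated.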

That’s why the spec changed at version 3.

A little more explanation in the spec

We felt the spec needed a little more explanation around this change. In ECMA version 5, we’re adding this note:

note: The reason that short circuiting uses the 'not true' and 'not false' conditions is to enable user defined conditional operators to define when short circuiting applies. User defined types could be in a state where operator true returns false and operator false returns false. In those cases, neither && nor || would short circuit.

The C# spec (thankfully) has relatively few locations where the spec disagrees with itself. But in a large document, it’s easy for them to creep in. Several people review and make corrections whenever we find them.

Created: 5/18/2016 6:19:33 PM

The TL;DR; version is:

Sometimes.

The more important question is how you ensure that you generate the method call you want. Let’s start with a bit of background. Lambda expressions do not have types. However, they can be converted into any compatible delegate type. Take these two declarations as a starting point:

Action task = async () => await Task.Yield();
Func<Task> task2 = async () => await Task.Yield();

Notice that this lambda body can be assigned to either an Action or a Func<Task>. The lambda expression can represent either an async void method, or a Task returning async method.

Well, let’s suppose you call Task.Run with that lambda body:

Task.Run(async () => await Task.Yield());

(Ignore for a moment the obvious uselessness of calling Task.Run and telling it to yield.) Which of the following overloads does that lambda resolve to?

public static Task Run(Action action);
public static Task Run(Func<Task> action);

They correspond to the two delegate declarations used in the first code sample above. This call compiles, so the compiler must find one of them to be a better method. Which one?

The compiler prefers the method call that expects a delegate that returns a Task. That’s good, because it’s the one you’d rather use. The compiler comes to this conclusion using the type inference rules about the return value of anonymous delegates. The “inferred return type” of any async anonymous function is assumed to be a Task. Knowing that the anonymous function represented by the lambda returns a Task, the overload of Task.Run() that has the Func<Task> argument is the better match.

The C# language overload resolution rules, along with the rules for type inference for async lambda expressions, ensure that the preferred overload generates a Task-returning async method.

So, what does that mean for me?

Remember that async void methods are not recommended. They are fire-and-forget methods, and you can’t observe any errors that might occur in the async method. You want to avoid accidentally creating an async void lambda expression.

There are two recommendations that come from these rules in the language specification.

First, avoid using async lambdas as arguments to methods that expect an Action and don’t provide an overload that expects a Func<Task>. If you do, you’ll create an async void lambda. The compiler will happily assume that’s what you want.

Second, if you author methods that take delegates as arguments, consider whether programmers may wish to use an async lambda as that argument. If so, create an overload that takes Func<Task> in addition to Action. As a corollary, create an overload that takes Func<Task<T>> in addition to Func<T> for delegates that return a value.
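As an illustration of that second recommendation, here’s a hypothetical helper (the Worker.Run names are mine, not an existing API) that offers both delegate shapes:

using System;
using System.Threading.Tasks;

public static class Worker
{
    // For ordinary synchronous callbacks.
    public static Task Run(Action action)
    {
        action();
        return Task.CompletedTask;
    }

    // Async lambdas bind to this overload, so the returned Task
    // (and any exception it holds) can be observed by the caller.
    public static Task Run(Func<Task> action)
    {
        return action();
    }
}

With both overloads present, a call like Worker.Run(async () => await Task.Yield()) resolves to the Func<Task> overload rather than silently creating an async void delegate.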

The language team members worked hard on these overload rules to ensure that in most cases, the compiler prefers Task-returning anonymous functions when you write async lambdas. You have to make sure the right overloads are available.

Created: 5/11/2016 8:14:48 PM

I had the opportunity to speak at Tech O Rama in Mechelen, Belgium last week.  It was my first trip to continental Europe. Belgium is a wonderful country, and I’m very impressed with the conference that Gill, Pieter, Kevin, and the other volunteers put together.

My talks were on C# 7 and using the Roslyn APIs. Those talks were both updates from my NDC talks. The repositories contain the updated presentations and code. I also substituted for Martin Woodward, giving his talk on the .NET Foundation, and appeared in an upcoming .NET Rocks show discussing Open Source.

The C# 7 story has moved forward since I spoke at NDC London. There are now preview bits. (Preview 2 came out this week.) Using that build, you can try out three of the upcoming C# 7 features: Nested Local Functions, Pattern Matching, and Ref Returns. The release notes explain how to turn on each of these language features. Some of the other features initially discussed may not be in the next release (but may make a later release). Note: pay careful attention to ‘may’ as the verb. Watch the team’s announcements on GitHub for the official word.

Preview 2 also contains updates to the Analyzer SDK. These updates make it simpler to create analyzers that focus only on code semantics (as opposed to syntax models). I haven’t updated my NDC and Techorama samples for that model yet, but I will.

I recommend that any of my readers who can attend Techorama do so. It’s a wonderful conference in a great location. The recent events made travel a bit of a challenge, but the people in Belgium responded and made it as safe and convenient as possible.

Created: 4/12/2016 8:48:30 PM

//Build changes Everything

I started this series after giving a presentation at NDC London on the potential features up for discussion in C# 7. That information was based on the public design discussions on GitHub. Now that //build has happened, we know a bit more about the plans. There is even a public preview of Visual Studio 15 available for download. There are two versions: a ‘classic’ installer, and a new lightweight installer that runs much more quickly but has fewer scenarios supported.

I have installed both versions of Visual Studio 15 on my machine. They install side-by-side, and my machine works well. (It is also a machine with Visual Studio 2015 on it, and that’s been unaffected.) This is still pre-release software, and you should proceed with some caution.

All of which means that there are two important updates to this series: First, the plans have been updated. The team announced at //build that the languages will have a faster cadence than before. That’s great news. But, it comes at a price. Some of the features that were slated for C# 7 are likely to be pushed to the release that follows C# 7. Private Protected is one of those features. Non-Nullable Reference Types (covered in my NDC talk, but not yet in this blog series) may be another.

Immutable Types and With Expressions

Now that those announcements are made, let’s discuss the addition of ‘with expressions’ to make it easier to work with immutable types.

Immutable types are becoming a more common part of our design toolkit. Immutable types make it easier to manage multithreaded code. Shared data does not create issues when that data can’t change.

However, working with immutable types can become very cumbersome. Making any change means making a new object, and initializing it with all the properties of the original object, except the one you want to change.

With expressions are meant to address this issue. In their most basic use, consider that you have created an immutable Person object:

var scott = new Person("Scott", "Hanselman");

Later, you find you need an object that’s almost the same, but must have a different last name:

var coolerScott = scott with { LastName = "Hunter" };

This simple example shows the syntax, but doesn’t provide great motivation for using the feature. It’s almost as simple to create a new object and explicitly set the two fields by calling the constructor. With expressions become much more useful in real world scenarios where more fields are needed to initialize the object. Imagine a more extensive Person class that included employer, work address and so on. When a Person accepts a new role, the code to create the new object becomes much more heavyweight. And it’s all boilerplate code. That represents a lot of busy work that adds minimal value. In those cases, With expressions are your friend.

This feature would leverage a convention found throughout the Roslyn APIs. You may have seen that many types have With() methods that create a new object by copying an existing object and replacing one property. The proposed feature would use a With() method if one is available. If not, one proposal would generate a call to a constructor and explicitly set all the properties. Another proposal would only support types that have an appropriate With() method.
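To make the convention concrete, here’s a hypothetical hand-written With() method for the Person type above (my sketch, not the proposed compiler-generated code):

public sealed class Person
{
    public string FirstName { get; }
    public string LastName { get; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // Copy everything, replacing only the values the caller supplies.
    // An expression like 'scott with { LastName = "Hunter" }' could then
    // translate to scott.With(lastName: "Hunter").
    public Person With(string firstName = null, string lastName = null)
    {
        return new Person(firstName ?? FirstName, lastName ?? LastName);
    }
}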

The syntax for With expressions was originally proposed for record types (which I will cover in a future blog post). Record types are a new feature, and the compiler can generate all the necessary code to support new syntax like With expressions. The current proposal would specify that Record types would generate With() methods that would support this language feature.

When With Expressions are applied to record types, the generated With() method provides a great example of how such a method can be generated that would support many permutations of With Expressions. That proposal minimizes the amount of work necessary to support a full set of With Expressions for all combinations of updated properties.

Open Questions

In the previous section, I said that one proposal would fall back to a constructor if a With() method was not available. The advantage to that design is that With Expressions would work with all existing types. The advantage of requiring a With() method is that it enables richer support for positional and name mapping.

But there are more questions. In the scenario above, suppose the Person type was a base class for other types: Teacher, Student, Teaching Assistant, Tutor, Advisor. Should a With Expression that uses a variable of type ‘Person’ work correctly on any derived type? There’s a goal to enable those scenarios. You can read about the current thinking in the February C# Design Notes.

With Expressions are one language feature that will make working with immutable types more pleasant and natural in C#. These features will make it easier to create the designs we want to support. It’s part of that “Pit of Success” design goal for C#: Make it easier to do the proper design.

Most importantly, these issues are still being discussed and debated. If you have ideas, visit the links I’ve put in place above. Participate and add your thoughts.

Created: 4/6/2016 2:48:30 PM

Private Protected access likely comes back in C# 7.

My readers are likely familiar with the four access modifiers in C#: public, protected, internal, and private. Public access means accessible by any code. Protected access enables access for all derived classes. Internal access enables access from any code in the same assembly. Private access is limited to code in the same class. C# also supports “protected internal” access. Protected Internal access enables access from all code in the same assembly, and all derived classes.

What was missing was a more restrictive access: enable access only for code that is both in the same assembly AND in a class derived from this class. The CLR has supported this for some time, but it was not legal in C#. The team wanted to add it in C# 6, using the keywords “private protected”. That generated a tremendous amount of feedback. While everyone liked the feature, there was a lot of negative feedback on the syntax. Well, after much discussion, thought, and experimentation, it’s back. It’s back with the same syntax.
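Here’s a sketch of how the proposed modifier would behave (illustrative only; it requires a compiler that implements the feature):

// Assembly A
public class Widget
{
    // Visible only to types derived from Widget that are also
    // defined in this same assembly.
    private protected int internalState;
}

public class WidgetInSameAssembly : Widget
{
    public void Touch()
    {
        internalState = 42;       // OK: derived from Widget AND in the same assembly
    }
}

// In a different assembly that references Assembly A:
// public class WidgetElsewhere : Widget
// {
//     public void Touch()
//     {
//         internalState = 42;   // error: derived, but not in the same assembly
//     }
// }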

Let’s explain some of the thinking behind this.

One overriding goal for the team was that this feature should not require a new keyword that could potentially break code. New keywords might be used in existing code as identifiers (variables, fields, methods, class names, and so on). In fact, the C# language design team has managed to avoid adding any new global keywords since C# 2.0. All the features for LINQ, dynamic, async and await, and more have been implemented using contextual keywords. Contextual keywords have special meaning only when used in a particular context. That enabled the language designers to add new features with less concern that they could be breaking existing code.

Using contextual keywords is very hard when you are talking about access modifiers. Remember that access modifiers are optional. Members of a class have a default access: private. Therefore, when the language parser looks at a method declaration, the first token may be an optional access modifier, or it may be the return type. So, some new keyword for the new restrictive access would have the potential to break code: if some developer had created a type with the name of the proposed modifier, that code would break.

So, new keywords are out. That removes any suggestions like “protected or internal” and “protected and internal”. Those would be great suggestions, were it not for the breaking code problem.

However this feature was going to be implemented, it needed to use a combination of the current keywords. This new access is more restrictive than the current “protected internal” access. The modifier used should reflect that. The design question now becomes what combination of access modifier keywords would reflect a more restrictive access, and yet express that both internal and protected modifiers are in play?

Let’s reject out of hand the suggestion that the current ‘protected internal’ access should be repurposed for this feature, and a new combination of keywords used for the existing feature. That would break way too much code, and there’s no way for tools to know if you meant the old meaning, or the new meaning.

The other possible suggestion was to make “protected internal” have the current meaning, and make “internal protected” take on the new meaning. Well, that’s also a breaking change. In today’s world, you can type the ‘protected’ and ‘internal’ keywords in either order, and it has the same meaning. That fails the breaking change concern.

Of the possible combinations, “private protected” comes out best. Along with “private internal”, it’s one of only two combinations of two access modifiers that make sense and aren’t already in use. One other option could be “private protected internal”, but that’s a lot of extra typing.

Overall, there are a lot of requests for adding the feature and enabling this accessibility. The proposed syntax is still the best way to express it. The language design team thought through alternatives, polled the community, and asked in public. This is still the best expression for this feature.

I’m glad it’s back.

Created: 4/2/2016 4:04:52 PM

Important note: I’m still an independent consultant, Regional Director, and C# MVP. I haven’t officially started at Microsoft. All opinions expressed below are my own, and do not represent any Microsoft statement.

I spent the better part of last week at //build/ in San Francisco. //Build/ is Microsoft’s premier developer event. It’s where they focus on new ideas, new tools, and new visions. This year did not disappoint. If anything, this was one of the best //build/ conferences I’ve been to in many years.

An opening filled with Inspiration

Let’s start with the first day. We were introduced to many of the cutting edge ideas that Microsoft is investing in. The concepts revolved around More Personal Computing and leveraging the power of computers to improve the human condition. I won’t bury the lead. The most inspirational moment of the first day was the video showing the incredible work done by Saqib Shaikh. Go watch it now. I’ll wait.

Did you have enough tissues to get through that? It hits right in the feels, doesn’t it? Saqib was at build, and chatted with quite a few developers during the three days. He is truly inspirational.

All the technology highlighted the first day was very forward looking. Libraries to build bots. Libraries to build cognitive software. Machine Learning. Libraries to work with Cortana. Artificial Intelligence. HoloLens. And, early versions of all the libraries are available in preview form, or will be soon. I kept jotting down ideas that I could start exploring.

The overall message of all these demos and tools is to aim higher: How can we build software that communicates more naturally with humans? How can we build software that learns from the immense stores of data we have at our disposal? How can we create software that learns to get better over time? How can we target different form factors intelligently?

Open, Collaborative Developer Tools

The next major theme, which was covered in both the first and second day keynotes, involved developer tools, and having those tools be more open, more collaborative, and more platform agnostic. We heard about Docker on Linux and Docker on Windows. We heard about running Linux VMs in Azure. We heard about running bash on Windows. (Not in a VM, and not as a heavyweight Windows process like cygwin, but something in between.) And bash on Windows should bring full fidelity for any of the tools you use in the bash shell.

And, you’ve probably heard the huge news:  That the Xamarin tools are now free to anyone with a valid Visual Studio license: if you have VS Enterprise, VS Pro, with MSDN subscription or not, and even VS Community, you have access to all that Xamarin provides.

But wait, there’s more.

The Xamarin tools and libraries are also being open sourced. They will be under the .NET Foundation umbrella. The plan is to release them under the MIT license. This is just awesome news for the developer community. Now, with no extra cost, you can develop applications for Windows, iOS, and Android in C#. And at a cost (free) that is in reach of the hobbyist developer.

There were also major announcements about development on Azure, including Azure Functions. Azure Functions are small, lightweight, micro-services. I haven’t explored it completely yet, but I’m really interested in the concepts.

The overriding theme for this is that Windows will be the most productive OS for any developer. Great tools, great libraries, and you can target everything: Linux, iOS, Android, and even Windows. It’s the perfect platform for anyone developing software.

The Future of Languages

My breakout session time was looking at .NET Core, C#, and TypeScript. There’s great news on all fronts.

.NET Core and ASP.NET Core are getting closer and closer to sharing the same tools, libraries, and command line. The new CLI (coming soon) will use the same or similar commands for any application type. ASP.NET applications can target Docker containers. They can run on Linux. They can run on MacOS. And the tooling will work with any shell (PowerShell, Cmd, bash, and so on) on any developer OS. If you want to learn more, watch this talk by Scott Hanselman and Scott Hunter.

The C# team (Mads Torgersen and Dustin Campbell) showed an updated view of the plans for C# 7. You can watch that presentation online at Channel 9. If you haven’t looked at C# because you thought it was “Windows only”, check it out. C# (and programs written in C#) run on Windows, MacOS, and Linux. You’ll find C# very competitive with your favorite language. Have fun!

Anders Hejlsberg discussed the future of TypeScript in another session. (Also available on Channel 9.) The team is tracking toward the 2.0 release of TypeScript. That release will include support for async / await in downlevel (ES5) execution environments, like current browsers. I can’t wait. If you haven’t checked out TypeScript, watch this talk. Anders spends quite a bit of time talking about how to migrate from JavaScript to TypeScript. He also shows how you can get benefits all along the migration path.

And much much more.

Now that I’m home, I’m watching many of the sessions that I did not see live. You can too. All the sessions, and more, are available on Channel 9 as well. Check out the ones that interest you. There’s more on Azure. There’s more on .NET. There’s more on UWP and PCL development. And much, much more.

Created: 3/28/2016 12:56:54 PM

I’ve been self-employed for quite some time. I’ve started three companies, including building one of them into a 2 time Inc. 5000 awardee. I’ve enjoyed all the time as an independent consultant, entrepreneur, and business owner.

At my core, though, I love software. I enjoy building software. I enjoy helping other developers learn new tools and new skills. Since selling SRT Solutions, I’ve spent my time teaching developers to use .NET and C#. I’ve been teaching classes for corporate clients, running bootcamps for people just learning to develop software, and speaking at seminars and conferences. That’s been great fun. It’s also been somewhat limiting. There are only so many people I can reach as an independent consultant.

So it’s time to continue this mission as part of a larger organization.

This last week, I accepted a full time position with Microsoft on the .NET Core content team. I’ll be part of a team building learning resources for developers that are new to the .NET Core platform. One audience is experienced .NET developers that want to learn what’s different as they start working with .NET core. Another important audience is developers that are experienced with other platforms and want to investigate .NET Core.

One key reason why I accepted this position is the exciting future for the .NET platform. Running on Linux, MacOS, Android and iOS opens many new possibilities for the platform, the languages, and the framework. Seeing the rapid pace of innovation in the C# language now that the team is building on the Roslyn platform is equally exciting. I’m glad that I’ll now have a role as part of the team responsible for helping developers use these tools.

Equally important is the respect I have for the team members. Both the engineering team and the content team are filled with awesome, smart people. I’ve worked with many of them as an MVP and RD over the past several years, and I’ve got immense respect for the people who will be my co-workers.

The final motivator is to continue creating content for all the different styles of learning that exist. Some people enjoy reading, some enjoy watching video based content, others want guided labs to help them explore. I’m excited that the .NET Core content team is exploring all of these ideas, and more different ways to help developers learn the platform, the libraries, and the languages.

I’m excited to work with a much larger audience to learn more about .NET and C#. It’s going to be fun.

Created: 3/2/2016 4:30:52 PM

Let’s discuss another of the features that may be coming to the next version of the C# language: Local Functions.

This post discusses a proposed feature. This feature may or may not be released. If it is released, it may or may not be part of the next version of C#. You can contribute to the ongoing discussions here.

Local functions would enhance the language by enabling you to define a function inside the scope of another function. That supports scenarios where, today, you define a private method that is called from only one location in your code. A couple scenarios show the motivation for the feature.

Suppose I created an iterator method that was a more extended version of Zip(). This version puts together items from three different source sequences. A first implementation might look like this:

public static IEnumerable<TResult> SuperZip<T1, T2, T3, TResult>(
    IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    var e1 = first.GetEnumerator();
    var e2 = second.GetEnumerator();
    var e3 = third.GetEnumerator();
    while (e1.MoveNext() && e2.MoveNext() && e3.MoveNext())
        yield return Zipper(e1.Current, e2.Current, e3.Current);
}

 

This method would throw a NullReferenceException if any of the source collections were null, or if the Zipper function was null. However, because this is an iterator method (using yield return), that exception would not be thrown until the caller begins to enumerate the result sequence.
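A minimal usage sketch (calling the SuperZip method above) shows the deferred failure:

public static void Demo()
{
    IEnumerable<int> second = Enumerable.Range(0, 10);
    IEnumerable<int> third = Enumerable.Range(0, 10);

    // No exception is thrown on this line, even though the first argument is null.
    var zipped = SuperZip<int, int, int, int>(null, second, third, (a, b, c) => a + b + c);

    // The NullReferenceException surfaces only here, when enumeration begins.
    foreach (var item in zipped)
        Console.WriteLine(item);
}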

That can make this method hard to work with: errors may be observed in code locations that are not near the code that introduced the error. As a result, many libraries split this into two methods. The public method validates arguments. A private method implements the iterator logic:

public static IEnumerable<TResult> SuperZip<T1, T2, T3, TResult>(
    IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    if (first == null)
        throw new NullReferenceException("first sequence cannot be null");
    if (second == null)
        throw new NullReferenceException("second sequence cannot be null");
    if (third == null)
        throw new NullReferenceException("third sequence cannot be null");
    if (Zipper == null)
        throw new NullReferenceException("Zipper function cannot be null");

    return SuperZipImpl(first, second, third, Zipper);
}

private static IEnumerable<TResult> SuperZipImpl<T1, T2, T3, TResult>(
    IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    var e1 = first.GetEnumerator();
    var e2 = second.GetEnumerator();
    var e3 = third.GetEnumerator();
    while (e1.MoveNext() && e2.MoveNext() && e3.MoveNext())
        yield return Zipper(e1.Current, e2.Current, e3.Current);
}

 

This solves the problem. The arguments are evaluated eagerly, and if any are null, an exception is thrown immediately. But it isn’t as elegant as we might like. The SuperZipImpl method is only called from the SuperZip() method. Months later, it may be more difficult to understand the original intent, and to see that SuperZipImpl is referred to only from this one location.

Local functions make this code more readable. Here would be the equivalent code using a Local Function implementation:

 

public static IEnumerable<TResult> SuperZip<T1, T2, T3, TResult>(
    IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    if (first == null)
        throw new NullReferenceException("first sequence cannot be null");
    if (second == null)
        throw new NullReferenceException("second sequence cannot be null");
    if (third == null)
        throw new NullReferenceException("third sequence cannot be null");
    if (Zipper == null)
        throw new NullReferenceException("Zipper function cannot be null");

    IEnumerable<TResult> Iterator()
    {
        var e1 = first.GetEnumerator();
        var e2 = second.GetEnumerator();
        var e3 = third.GetEnumerator();

        while (e1.MoveNext() && e2.MoveNext() && e3.MoveNext())
            yield return Zipper(e1.Current, e2.Current, e3.Current);
    }

    return Iterator();
}

 

Notice that the local function does not need to declare any arguments. All the arguments and local variables of the outer function are in scope. This minimizes the number of arguments that need to be declared for the inner function. It also minimizes errors. The local Iterator() method can be called only from inside SuperZip(). It is very easy to see that all the arguments have been validated before calling Iterator(). In a larger class, it would be much more work to guarantee the same thing for a private iterator method.

This same idiom would be used for validating arguments in async methods.

This example method shows the pattern:

 

public static async Task<int> PerformWorkAsync(int value)
{
    if (value < 0)
        throw new ArgumentOutOfRangeException("value must be non-negative");
    if (value > 100)
        throw new ArgumentOutOfRangeException("You don't want to delay that long!");

    // Simulate doing some async work
    await Task.Delay(value * 500);

    return value * 500;
}

 

 

This exhibits the same issue as the iterator method. This method doesn’t throw exceptions synchronously, because it is marked with the ‘async’ modifier. Instead, it will return a faulted task. That Task object contains the exception that caused the fault. Calling code will not observe the exception until the Task returned from this method is awaited (or its result is examined).
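A minimal sketch of that behavior, calling the PerformWorkAsync method above:

public static async Task CallerAsync()
{
    // No exception is thrown on this line; the returned task is already faulted.
    Task<int> work = PerformWorkAsync(-1);

    // The ArgumentOutOfRangeException is observed only here, when the task is awaited.
    int result = await work;
}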

In the current version of C#, that leads to this idiom:

 

public static Task<int> PerformWorkAsync2(int value)
{
    if (value < 0)
        throw new ArgumentOutOfRangeException("value must be non-negative");
    if (value > 100)
        throw new ArgumentOutOfRangeException("You don't want to delay that long!");

    return PerformWorkImpl(value);
}

private static async Task<int> PerformWorkImpl(int value)
{
    await Task.Delay(value * 500);
    return value * 500;
}

 

Now, the programming errors cause a synchronous exception to be thrown (from PerformWorkAsync2) before calling the async method that leverages the async and await features. This idiom is also easier to express using local functions:

 

public static Task<int> PerformWorkAsync(int value)
{
    if (value < 0)
        throw new ArgumentOutOfRangeException("value must be non-negative");
    if (value > 100)
        throw new ArgumentOutOfRangeException("You don't want to delay that long!");

    async Task<int> AsyncPart()
    {
        await Task.Delay(value * 500);
        return value * 500;
    }

    return AsyncPart();
}

 

The overall effect is a more clear expression of your design. It’s easier to see that a local function is scoped to its containing function. It’s easier to see that the local function and its containing method are closely related.

This is just a small way where C# 7 can make it easier to write code that more clearly expresses your design.

Tags: C#
Created: 2/23/2016 4:15:30 PM

One of the fun parts of exploring and investigating the C# Language Specification is writing code that you would not write for a production application. It’s fun to write code that bends the language features.

Most developers are familiar with the concept that in .NET, exceptions are always objects that are derived from System.Exception.

This is covered in S. 8.9.5 of the C# Language Specification (4th edition). It states:

“The expression [in a throw statement] must denote a value of the class type System.Exception, of a class type that derives from System.Exception (or a subclass thereof), or of a type parameter type that has System.Exception (or a subclass thereof) as its effective base class.”

Here are examples of throwing an object derived from System.Exception, and a type parameter that has System.Exception as its base class:

 

public static void ThrowThingsVersionOne()
{
    throw new InvalidOperationException
        ("Because the object's state is investigating exceptions");
}

public static void ThrowThingsVersionTwo<T>() where T : System.Exception, new()
{
    throw new T();
}

 

This section goes on to explain what happens in this instance:

 

public static void ThrowThingsVersionThree()
{
    throw null;
}

 

The spec states (also in S. 8.9.5):

“If evaluation of the expression produces null, a System.NullReferenceException is thrown instead.”

You could write this:

 

public static void ThrowThingsVersionFour()
{
    throw default(NullReferenceException);
}

 

Or, if wanted to confuse the developers that read your code later, you could write this:

 

public static void ThrowThingsVersionFive()
{
    // Throws a NullReferenceException:
    throw default(InvalidOperationException);
}

 

Now, we are starting to get to some harder to read code. I’ve added an explanatory comment. Without it, we’re beginning to write code that can confuse other developers. Let’s see how far we can take this.

Let’s try this:

 

public static void ThrowThingsVersionSix()
{
    throw default(string);
}

 

The compiler prevents this sort of evil. I’ve tried to throw null, but I’ve declared it such that the compile time type is System.String. That’s not derived from System.Exception, so the compiler flags the error.

Well, let’s learn how good the compiler is at determining what’s being thrown. First, let’s try an implicitly typed local variable:

 

public static void ThrowThingsVersionSeven()
{
    var e = new InvalidOperationException
        ("Because the object's state is investigating exceptions");
    throw e;
}

 

That compiles, and throws the expected InvalidOperationException. Implicitly typed variables have a compile time type that matches the right hand side of the assignment. How about this:

 

public static void ThrowThingsVersionEight()
{
    object e = new InvalidOperationException
        ("Because the object's state is investigating exceptions");
    throw e;
}

 

It doesn’t compile, because the compile time type of ‘e’ is System.Object. Well, let’s try to coerce the compiler and bend it to our evil will:

 

public static void ThrowThingsVersionNine()
{
    dynamic e = new InvalidOperationException
        ("Because the object's state is investigating exceptions");
    throw e;
}

 

The compiler still thwarts our evil intent. This doesn’t compile, because ‘dynamic’ doesn’t derive from System.Exception. Because the language rules for dynamic allow us to try to convert it to any type, we can bend the compiler to our evil will:

 

public static void ThrowThingsVersionTen()
{
    dynamic e = new InvalidOperationException
        ("Because the object's state is investigating exceptions");
    throw (System.Exception)e;
}

 

Bwa ha ha ha, I say. We’ve finally found a path to force the compiler to pure evil.

 

To finish, let’s try to throw something that’s not an exception. Without running the code, try and figure out what this might do:

 

public static void ThrowThingsVersionEleven()
{
    dynamic e = "Because the object's state is investigating exceptions";
    throw (System.Exception)e;
}

 

I’ll update this post toward the end of the week with the explanation.

Current Projects

I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

Or, if you have a group of volunteers, talk to us about hosting a codeathon event.