How SHOULD I do Linux/BSD Systems Management?

I’m still looking for some of these answers.. but at least I’m on the journey.

How SHOULD I distribute a SSH command to 20 different linux servers?

So I’ve been playing around with Hadoop the past couple of months, and it’s been fascinating. I still think that it’s ridiculously slow for some things.. and just don’t know when I’m gonna see the light on it.. I probably need to dust off this stack of Pentium 4s and really truly set up a 5-node cluster, I think.

But for now, it’s primarily virtual.. Maybe some copies on Windows, some in VMware (CentOS for now.. although I CANNOT *WAIT* for HDP 2.x to be deployable onto Ubuntu 14.x).

Hyper-V Generation 2 is supported today on Ubuntu 14.04.. I really honestly think that Generation 2 FEELS twice as fast as Generation 1.  Maybe I’m crazy.  Maybe I’ll try to do some benchmarks.. I just love Generation 2, it seems lightning fast to me.  The ONE thing that I don’t like, even with the better Linux support.. I don’t EVER want a huge screen resolution, and I really wish that VMware or Hyper-V made it easier to set the screen resolution. There has GOT to be a better way to do this than currently.. Maybe I

I want to get a couple of different Hadoop clusters up, I think.. one for systems management, one for staging/testing.. kind of keep them separated.. I’m just fascinated by the ability to use Hadoop as a systems management server; it looks like it has a lot of the capabilities that SQL Agent has.. it would be nice if I could take small pieces of job control and put them on 20 Linux servers, instead of just 1.. without installing the full Hadoop stack.

Or should I just grow a pair and learn Puppet or something similar?

Hyperic –

Chef –

Zabbix –

Nagios –

Zenoss –



Windows Server 2012 R2 Update – enables METRO IE?

I swear, I was SHOCKED to find this.. but installing the ‘Windows Server 2012 R2 Update’ was a really exciting thing for me.  I was excited to get the new controls on top of Metro apps.. where you can easily minimize those applications to the task bar. KILLER FEATURE.

I was enthusiastic about some of the features for making ‘Windows 8.1’ a little bit easier to use.

The ONE feature that drives me CRAZY? METRO IE.  I really honestly thought that I’d NEVER have to deal with that travesty ever again.   I swear, I’m never gonna install another patch from Microsoft in my whole life. LOL

Problem with Tiered Storage / Storage Space. Gets disconnected every reboot. Need to attach manually.

I have a new Windows 2012 R2 Storage Space.. tiered storage.. just two drives.. a 2 TB SATA drive and a 40 GB SSD. It literally gets disconnected every time I reboot my server. Every single time, 10 times in a row – the array goes offline and gives some warning like ‘unknown’ with a yellow warning triangle.. And I just can’t figure out any reasonable workaround other than reattaching it manually.

I think that it’s because I only have 1 GB of write-back cache.. Is there any way to resize the array? I swear, it took me about 10 attempts to get it to work in the first place.. I really don’t think that it’s a driver issue.. it’s just that ALL the tutorials I saw were for a half-dozen disks.. I really couldn’t find a tutorial for just 1 large disk with 1 SSD.

I really am looking forward to using Tiered Storage with SQL 2014.. I just can’t use it in a production capacity until I have this resolved.

It’s really just a development environment, and I’m phoneless for a couple of days.. otherwise, I’d probably call in for help on this.

Just can’t find anything about this online.. Google is too full of spam to be useful to me anymore.


-Aaron

Website – nothing but errors.

I generally detest 3rd party software.
I generally detest websites with broken links.

But getting BOTH features in one quick trip to – COME ON PEOPLE!

#1) under the products menu, choose SQL Traffic Accelerator – 404

#2) Downloaded SOME random software.. underneath the ‘Download Now’ link, it clearly said ‘Click on one of these 4 buttons to see the rest of our products’.  Unfortunately there were only 3 buttons.


How to Automatically Create Build Backups in Visual Studio


If you are a one-man development team, you probably don’t really have the need for a full-blown version control system, yet creating source code backups for each released version is undoubtedly important.

By leveraging the power of post-build events and a simple batch script, you can easily add the ability to have Visual Studio automatically create a source code backup for each release code build.

How it works

Our solution is simple: whenever a successful build event occurs, we have a batch script run which creates a compressed archive (optionally tagged and timestamped) of all files in the respective Visual Studio project folder.

That’s it.  All you have to do is follow the steps below.

Setting up automatic build backups

First you will need to download and extract the batch script file from the link at the bottom of the article. Additionally, you will need the 7-Zip command line tool (this is included with the ‘full’ version of the Project Build Backup script, or you can download it separately). In our example, we extracted these files to the directory “C:\Tools”, but any location will work.

Open your Visual Studio project properties by double-clicking on My Project under the respective project.


In the project properties, go to the Compile section.


In the bottom right corner, click the Build Events button.


In our case, we want to make a backup after a successful compile action. Make sure you have the option to run the post-build event “On successful build” and then click the Edit Post-build button.


The command below creates a build backup only for a compile of the Release configuration (this is what the IF condition checks for) since, realistically, we probably don’t want to make a backup of each Debug/testing build. Additionally, the current timestamp will be appended (/D switch), with the backup file being in 7z format (/7z) as opposed to zip. By adding /T “$(ConfigurationName)” as a parameter, we are appending the build type (Release in this case) to the name of the backup file.

IF "$(ConfigurationName)" == "Release" CALL C:\Tools\ProjectBuildBackup.bat "$(SolutionDir)" "$(ProjectDir)" "$(ProjectName)" /T "$(ConfigurationName)" /D /7z

Using the Macros button, you can have Visual Studio prefill project specific information so no hardcoding is required. You can adjust this command as needed (especially the location of the batch file), but the first three parameters will likely not need to be changed.

It is important to keep in mind that post-event operations run regardless of the project configuration selected. This is why we need to add the IF “$(ConfigurationName)” == “Release” statement – otherwise the backup action would occur on every successful build event.


Once you finish your command and apply it, the command string should appear in the Post-build events section.

Note that while the “CALL” command is not technically required, it is highly recommended; if it is omitted, any events added after this one may not execute.


Now whenever you run a compile/build with your project in the Release configuration, you will see the output from the build backup operation.




Each successful Release build creates a new timestamped archive in a “Builds” subdirectory of the solution folder (the output location can be customized with the /O switch if needed).


The contents of each backup are the full Visual Studio project – source files, configuration settings, compiled binaries, and all – which makes this a true point-in-time backup.


Not a replacement for a full version control system

In closing, we just want to reiterate that this tool is not intended to replace a full-blown version control system. It is simply a useful tool for developers to create snapshots of their project’s source code after each compilation.

In the event you ever have to go back and examine a prior version, having a ready-to-use (just extract to a new directory) project file for a point in time compilation can really come in handy.

Hyper-V Gaining Share Against VMware

IDC noted that VMware continues to dominate the market in terms of market share with 56.8% – but that is down from 65.4% in 2008.   What’s particularly interesting to me is that we’re seeing an even faster ramp for Hyper-V in our core midrange market as companies with between 50 and 5000 employees seem to be adopting Hyper-V faster than the Fortune 500 data centers (although we see evidence of Fortune 500 departmental Hyper-V adoption growing at a fast clip.)

Unitrends of course has a free fully-functional virtual appliance (Unitrends Enterprise Backup(TM)) that works on Hyper-V and VMware vSphere – so we’re not disinterested third-parties.  About 40% of our tens of thousands of downloads have been for Hyper-V.

Microsoft’s position was summarized by Kevin Turner, Microsoft COO: “We’re now in a situation where we have a very, very strong market share. We’re growing every quarter, and the dominant guy is losing every quarter as it relates to virtualization.”

What do you think?  Is Microsoft Hyper-V eventually going to have superior market share when compared to VMware vSphere?

Top 10 Mistakes that C# Programmers Make

About C#

C# is one of several languages that target the Microsoft Common Language Runtime (CLR). Languages that target the CLR benefit from features such as cross-language integration and exception handling, enhanced security, a simplified model for component interaction, and debugging and profiling services.  Of today’s CLR languages, C# is the most widely used for complex, professional development projects that target the Windows desktop, mobile, or server environments.

C# is an object oriented, strongly-typed language. The strict type checking in C#, both at compile and run times, results in the majority of typical programming errors being reported as early as possible, and their locations pinpointed quite accurately. This can save the programmer a lot of time, compared to tracking down the cause of puzzling errors which can occur long after the offending operation takes place in languages which are more liberal with their enforcement of type safety.  However, a lot of programmers unwittingly (or carelessly) throw away the benefits of this detection, which leads to some of the issues discussed in this article.

About this article

This article describes 10 of the most common programming mistakes made, or pitfalls to be avoided, by C# programmers.

While most of the mistakes discussed in this article are C# specific, some are also relevant to other languages that target the CLR or make use of the Framework Class Library (FCL).


Common Mistake #1: Using a reference like a value or vice versa

Programmers of C++, and many other languages, are accustomed to being in control of whether the values they assign to variables are simply values or are references to existing objects. In C#, however, that decision is made by the programmer who wrote the object, not by the programmer who instantiates the object and assigns it to a variable.  This is a common “gotcha” for newbie C# programmers.

If you don’t know whether the object you’re using is a value type or reference type, you could run into some surprises. For example:

  Point point1 = new Point(20, 30);
  Point point2 = point1;
  point2.X = 50;
  Console.WriteLine(point1.X);       // 20 (does this surprise you?)
  Console.WriteLine(point2.X);       // 50

  Pen pen1 = new Pen(Color.Black);
  Pen pen2 = pen1;
  pen2.Color = Color.Blue;
  Console.WriteLine(pen1.Color);     // Blue (or does this surprise you?)
  Console.WriteLine(pen2.Color);     // Blue

As you can see, both the Point and Pen objects were created the exact same way, but the value of point1 remained unchanged when a new X coordinate value was assigned to point2, whereas the value of pen1 was modified when a new color was assigned to pen2. We can therefore deduce that point1 and point2 each contain their own copy of a Point object, whereas pen1 and pen2 contain references to the same Pen object. But how can we know that without doing this experiment?

The answer is to look at the definitions of the object types (which you can easily do in Visual Studio by placing your cursor over the name of the object type and pressing F12):

  public struct Point { … }     // defines a “value” type
  public class Pen { … }        // defines a “reference” type

As shown above, in C#, the struct keyword is used to define a value type, while the class keyword is used to define a reference type. For those with a C++ background, who were lulled into a false sense of security by the many similarities between C++ and C# keywords, this behavior likely comes as a surprise.

If you’re going to depend on some behavior which differs between value and reference types – such as the ability to pass an object as a method parameter and have that method change the state of the object – make sure that you’re dealing with the correct type of object.
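
To make that concrete, here is a minimal sketch (the MoveRight and Darken helpers are made up for illustration) showing how the same “modify the parameter” pattern behaves differently for a struct and a class:

  using System;
  using System.Drawing;   // Point (a struct) and Pen (a class) live here

  class ValueVsReferenceDemo {
      // Receives a copy of the struct; the caller's Point is untouched.
      static void MoveRight(Point p) { p.X += 10; }

      // Receives a reference to the same Pen object; the caller sees the change.
      static void Darken(Pen pen) { pen.Color = Color.Black; }

      static void Main() {
          Point point = new Point(20, 30);
          MoveRight(point);
          Console.WriteLine(point.X);     // still 20 – only a copy was modified

          Pen pen = new Pen(Color.Blue);
          Darken(pen);
          Console.WriteLine(pen.Color);   // Color [Black] – the shared object was modified
      }
  }

If you actually want the struct change to stick, you would have to pass it with the ref keyword, which again forces you to think about which kind of type you are dealing with.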

Common Mistake #2: Misunderstanding default values for uninitialized variables

In C#, value types can’t be null. By definition, value types have a value, and even uninitialized variables of value types must have a value. This is called the default value for that type.  This leads to the following, usually unexpected result when checking if a variable is uninitialized:

  class Program {
      static Point point1;
      static Pen pen1;
      static void Main(string[] args) {
          Console.WriteLine(pen1 == null);      // True
          Console.WriteLine(point1 == null);    // False (huh?)
      }
  }

Why isn’t point1 null? The answer is that Point is a value type, and the default value for a Point is (0,0), not null. Failure to recognize this is a very easy (and common) mistake to make in C#.

Many (but not all) value types have an IsEmpty property which you can check to see if it is equal to its default value:

  Console.WriteLine(point1.IsEmpty);        // True

When you’re checking to see if a variable has been initialized or not, make sure you know what value an uninitialized variable of that type will have by default, and don’t rely on it being null.
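
If you genuinely need a “no value yet” state for a value type, a nullable value type is usually a clearer signal of intent than relying on IsEmpty or the default value. A minimal sketch, reusing the Point and Pen types from above:

  using System;
  using System.Drawing;

  class DefaultValueDemo {
      static Point point1;     // value type: defaults to (0,0), never null
      static Point? point2;    // nullable value type: defaults to null
      static Pen pen1;         // reference type: defaults to null

      static void Main() {
          Console.WriteLine(point1);            // {X=0,Y=0}
          Console.WriteLine(point1.IsEmpty);    // True
          Console.WriteLine(point2.HasValue);   // False – explicitly "not set yet"
          Console.WriteLine(pen1 == null);      // True
      }
  }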

Common Mistake #3: Using improper or unspecified string comparison methods

There are many different ways to compare strings in C#.

Although many programmers use the == operator for string comparison, it is actually one of the least desirable methods to employ, primarily because it doesn’t specify explicitly in the code which type of comparison is wanted.

Rather, the preferred way to test for string equality in C# is with the Equals method:

  public bool Equals(string value);
  public bool Equals(string value, StringComparison comparisonType);

The first method signature (i.e., without the comparisonType parameter), is actually the same as using the == operator, but has the benefit of being explicitly applied to strings. It performs an ordinal comparison of the strings, which is basically a byte-by-byte comparison. In many cases this is exactly the type of comparison you want, especially when comparing strings whose values are set programmatically, such as file names, environment variables, attributes, etc. In these cases, as long as an ordinal comparison is indeed the correct type of comparison for that situation, the only downside to using the Equals method without a comparisonType is that somebody reading the code may not know what type of comparison you’re making.

Using the Equals method signature that includes a comparisonType every time you compare strings, though, will not only make your code clearer, it will make you explicitly think about which type of comparison you need to make. This is a worthwhile thing to do, because even if English may not provide a whole lot of differences between ordinal and culture-sensitive comparisons, other languages provide plenty, and ignoring the possibility of other languages is opening yourself up to a lot of potential for errors down the road.  For example:

  string s = "strasse";

  // outputs False:
  Console.WriteLine(s == "straße");
  Console.WriteLine(s.Equals("straße", StringComparison.Ordinal));
  Console.WriteLine(s.Equals("Straße", StringComparison.CurrentCulture));        
  Console.WriteLine(s.Equals("straße", StringComparison.OrdinalIgnoreCase));

  // outputs True:
  Console.WriteLine(s.Equals("straße", StringComparison.CurrentCulture));
  Console.WriteLine(s.Equals("Straße", StringComparison.CurrentCultureIgnoreCase));

The safest practice is to always provide a comparisonType parameter to the Equals method. Here are some basic guidelines:

  • When comparing strings that were input by the user, or are to be displayed to the user, use a culture-sensitive comparison (CurrentCulture or CurrentCultureIgnoreCase).
  • When comparing programmatic strings, use ordinal comparison (Ordinal or OrdinalIgnoreCase).
  • InvariantCulture and InvariantCultureIgnoreCase are generally not to be used except in very limited circumstances, because ordinal comparisons are more efficient.  If a culture-aware comparison is necessary, it should usually be performed against the current culture or another specific culture.

In addition to the Equals method, strings also provide the Compare method, which gives you information about the relative order of strings instead of just a test for equality. This method is preferable to the <, <=, > and >= operators, for the same reasons as discussed above.
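
A short sketch of what an explicit, ordering-aware comparison looks like in practice (the list of names is just sample data):

  using System;
  using System.Collections.Generic;

  class CompareDemo {
      static void Main() {
          // Negative / zero / positive result indicates relative order
          int result = string.Compare("apple", "Banana", StringComparison.OrdinalIgnoreCase);
          Console.WriteLine(result < 0);   // True: "apple" sorts before "Banana" ignoring case

          // Sort a list using the same explicitly specified comparison rules
          var names = new List<string> { "cherry", "Banana", "apple" };
          names.Sort((a, b) => string.Compare(a, b, StringComparison.OrdinalIgnoreCase));
          Console.WriteLine(string.Join(", ", names));   // apple, Banana, cherry
      }
  }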

Common Mistake #4: Using iterative (instead of declarative) statements to manipulate collections

In C# 3.0, the addition of Language-Integrated Query (LINQ) to the language changed forever the way collections are queried and manipulated.  Since then, if you’re still using iterative statements to manipulate collections, you’re probably passing up LINQ when you should be using it.

Some C# programmers don’t even know of LINQ’s existence, but fortunately that number is becoming increasingly small. Many still think, though, that because of the similarity between LINQ keywords and SQL statements, its only use is in code that queries databases.

While database querying is a very prevalent use of LINQ statements, they actually work over any enumerable collection (i.e., any object that implements the IEnumerable interface).  So for example, if you had an array of Accounts, instead of writing:

  decimal total = 0;
  foreach (Account account in myAccounts) {
    if (account.Status == "active") {
      total += account.Balance;
    }
  }

you could just write:

  decimal total = (from account in myAccounts
                   where account.Status == "active"
                   select account.Balance).Sum();

While this is a pretty simple example, there are cases where a single LINQ statement can easily replace dozens of statements in an iterative loop (or nested loops) in your code.  And less code generally means fewer opportunities for bugs to be introduced. Keep in mind, however, that there may be a trade-off in terms of performance. In performance-critical scenarios, especially where your iterative code is able to make assumptions about your collection that LINQ cannot, be sure to do a performance comparison between the two methods.
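
A rough Stopwatch sketch along these lines (the Account class and the test data are made up) is usually enough to tell whether the difference matters for your workload:

  using System;
  using System.Diagnostics;
  using System.Linq;

  class Account {
      public string Status;
      public decimal Balance;
  }

  class LinqVsLoopTiming {
      static void Main() {
          Account[] myAccounts = Enumerable.Range(0, 1000000)
              .Select(i => new Account { Status = i % 2 == 0 ? "active" : "closed", Balance = i })
              .ToArray();

          Stopwatch sw = Stopwatch.StartNew();
          decimal loopTotal = 0;
          foreach (Account account in myAccounts) {
              if (account.Status == "active") {
                  loopTotal += account.Balance;
              }
          }
          Console.WriteLine("loop: " + loopTotal + " in " + sw.ElapsedMilliseconds + " ms");

          sw.Restart();
          decimal linqTotal = (from account in myAccounts
                               where account.Status == "active"
                               select account.Balance).Sum();
          Console.WriteLine("LINQ: " + linqTotal + " in " + sw.ElapsedMilliseconds + " ms");
      }
  }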


Common Mistake #5: Failing to consider the underlying objects in a LINQ statement

LINQ is great for abstracting the task of manipulating collections, whether they are in-memory objects, database tables, or XML documents.  In a perfect world, you wouldn’t need to know what the underlying objects are. But the error here is assuming we live in a perfect world.  In fact, identical LINQ statements can return different results when executed on the exact same data, if that data happens to be in a different format.

For instance, consider the following statement:

  decimal total = (from account in myAccounts
                   where account.Status == "active"
                   select account.Balance).Sum();

What happens if one of the objects has an account.Status of “Active” (note the capital A)?  Well, if myAccounts was a DbSet object (that was set up with the default case-insensitive configuration), the where expression would still match that element.  However, if myAccounts was an in-memory array, it would not match, and would therefore yield a different result for total.

But wait a minute.  When we talked about string comparison earlier, we saw that the == operator performed an ordinal comparison of strings. So why in this case is the == operator performing a case-insensitive comparison?

The answer is that when the underlying objects in a LINQ statement are references to SQL table data (as is the case with the Entity Framework DbSet object in this example), the statement is converted into a T-SQL statement. Operators then follow T-SQL rules, not C# rules, so the comparison in the above case ends up being case insensitive.

In general, even though LINQ is a helpful and consistent way to query collections of objects, in reality you still need to know whether or not your statement will be translated to something other than C# under the hood to ensure that the behavior of your code will be as expected at runtime.
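
When you know the collection is in memory (LINQ to Objects), you can spell out the comparison you want instead of relying on whichever rules the provider happens to apply. A small sketch with a made-up Account class:

  using System;
  using System.Linq;

  class Account {
      public string Status;
      public decimal Balance;
  }

  class LinqComparisonDemo {
      static void Main() {
          Account[] myAccounts = {
              new Account { Status = "active", Balance = 10m },
              new Account { Status = "Active", Balance = 20m }   // note the capital A
          };

          // Ordinal, case-sensitive: only the first account matches
          decimal strict = myAccounts.Where(a => a.Status == "active").Sum(a => a.Balance);

          // Explicitly case-insensitive: both accounts match
          decimal lenient = myAccounts
              .Where(a => string.Equals(a.Status, "active", StringComparison.OrdinalIgnoreCase))
              .Sum(a => a.Balance);

          Console.WriteLine(strict);    // 10
          Console.WriteLine(lenient);   // 30
      }
  }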

Common Mistake #6: Getting confused or faked out by extension methods

As mentioned earlier, LINQ statements work on any object that implements IEnumerable. For example, the following simple function will add up the balances on any collection of accounts:

  public decimal SumAccounts(IEnumerable<Account> myAccounts) {
      return myAccounts.Sum(a => a.Balance);
  }

In the above code, the type of the myAccounts parameter is declared as IEnumerable<Account>.  Since myAccounts references a Sum method (C# uses the familiar “dot notation” to reference a method on a class or interface), we’d expect to see a method called Sum() on the definition of the IEnumerable<T> interface.  However, the definition of IEnumerable<T> makes no reference to any Sum method and simply looks like this:

  public interface IEnumerable<out T> : IEnumerable {
      IEnumerator<T> GetEnumerator();
  }

So where is the Sum() method defined? C# is strongly typed, so if the reference to the Sum method was invalid, the C# compiler would certainly flag it as an error.  We therefore know that it must exist, but where?  Moreover, where are the definitions of all the other methods that LINQ provides for querying or aggregating these collections?

The answer is that Sum() is not a method defined on the IEnumerable interface. Rather, it is a static method (called an “extension method”) that is defined on the System.Linq.Enumerable class:

  namespace System.Linq {
    public static class Enumerable {
      // the reference here to "this IEnumerable<TSource> source" is
      // the magic sauce that provides access to the extension method Sum
      public static decimal Sum<TSource>(this IEnumerable<TSource> source,
                                         Func<TSource, decimal> selector);
    }
  }

So what makes an extension method different from any other static method and what enables us to access it in other classes?

The distinguishing characteristic of an extension method is the this modifier on its first parameter. This is the “magic” that identifies it to the compiler as an extension method. The type of the parameter it modifies (in this case IEnumerable<TSource>) denotes the class or interface which will then appear to implement this method.

(As a side point, there’s nothing magical about the similarity between the name of the IEnumerable interface and the name of the Enumerable class on which the extension method is defined. This similarity is just an arbitrary stylistic choice.)

With this understanding, we can also see that the SumAccounts function we introduced above could instead have been implemented as follows:

  public decimal SumAccounts(IEnumerable<Account> myAccounts) {
      return Enumerable.Sum(myAccounts, a => a.Balance);
  }

The fact that we could have implemented it this way instead raises the question of why have extension methods at all?  Extension methods are essentially a convenience of the C# language that enables you to “add” methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type.

Extension methods are brought into scope by including a using [namespace]; statement at the top of the file. You need to know which namespace includes the extension methods you’re looking for, but that’s pretty easy to determine once you know what it is you’re searching for.

When the C# compiler encounters a method call on an instance of an object, and doesn’t find that method defined on the referenced object class, it then looks at all extension methods that are within scope to try to find one which matches the required method signature and class. If it finds one, it will pass the instance reference as the first argument to that extension method, and the rest of the arguments, if any, will be passed as subsequent arguments to the extension method.  (If the C# compiler doesn’t find any corresponding extension method within scope, it will report an error.)

Extension methods are an example of “syntactic sugar” on the part of the C# compiler, which allows us to write code that is (usually) clearer and more maintainable.  Clearer, that is, if you’re aware of their usage. Otherwise, it can be a bit confusing, especially at first.

While there certainly are advantages to using extension methods, they can cause headaches and wasted time for those developers who aren’t aware of them or don’t properly understand them. This is especially true when looking at code samples online, or at any other pre-written code. When such code  produces compiler errors (because it invokes methods that clearly aren’t defined on the classes they’re invoked on), the tendency is to think the code applies to a different version of the library, or to a different library altogether. A lot of time can be spent searching for a new version, or phantom “missing library”, that doesn’t exist.

Even developers who are familiar with extension methods still get caught occasionally, when there is a method with the same name on the object, but its method signature differs in a subtle way from that of the extension method. A lot of time can be wasted looking for a typo or error that just isn’t there.

Use of extension methods in C# libraries is becoming increasingly prevalent.  In addition to LINQ, the Unity Application Block and the Web API framework are examples of two heavily-used modern libraries by Microsoft which make use of extension methods as well, and there are many others. The more modern the framework, the more likely it is that it will incorporate extension methods.

Of course, you can write your own extension methods as well. Realize, however, that while extension methods appear to get invoked just like regular instance methods, this is really just an illusion.  In particular, your extension methods can’t reference private or protected members of the class they’re extending and therefore cannot serve as a complete replacement for more traditional class inheritance.
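
As an illustration, here is a minimal sketch of a home-grown extension method (the Truncate name and the namespaces are made up):

  using System;

  namespace MyExtensions {
      public static class StringExtensions {
          // The "this" modifier on the first parameter is what makes this an
          // extension method that appears to be an instance method on string.
          public static string Truncate(this string value, int maxLength) {
              if (value == null) return null;
              return value.Length <= maxLength ? value : value.Substring(0, maxLength);
          }
      }
  }

  namespace MyApp {
      using MyExtensions;   // brings the extension method into scope

      class Program {
          static void Main() {
              Console.WriteLine("Hello, world".Truncate(5));   // Hello
          }
      }
  }

Note that Truncate can only use the public surface of string, which is exactly the limitation described above.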

Common Mistake #7: Using the wrong type of collection for the task at hand

C# provides a large variety of collection objects, with the following being only a partial list:
Array, ArrayList, BitArray, BitVector32, Dictionary<K,V>, Hashtable, HybridDictionary, List<T>, NameValueCollection, OrderedDictionary, Queue, Queue<T>, SortedList, Stack, Stack<T>, StringCollection, StringDictionary.

While there can be cases where too many choices is as bad as not enough choices, that isn’t the case with collection objects. The number of options available can definitely work to your advantage.  Take a little extra time upfront to research and choose the optimal collection type for your purpose.  It will likely result in better performance and less room for error.

If there’s a collection type specifically targeted at the type of element you have (such as string or bit) lean toward using that one first. The implementation is much more efficient when it’s targeted to a specific type of element.

To take advantage of the type safety of C#, you should usually prefer a generic interface over a non-generic one. The elements of a generic interface are of the type you specify when you declare your object, whereas the elements of non-generic interfaces are of type object. When using a non-generic interface, the C# compiler can’t type-check your code. Also, when dealing with collections of primitive value types, using a non-generic collection will result in repeated boxing/unboxing of those types, which can result in a significant negative performance impact when compared to a generic collection of the appropriate type.
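
A quick sketch of the boxing difference between a non-generic and a generic collection:

  using System;
  using System.Collections;           // non-generic ArrayList
  using System.Collections.Generic;   // generic List<T>

  class BoxingDemo {
      static void Main() {
          ArrayList untyped = new ArrayList();
          untyped.Add(42);                         // 42 is boxed into an object
          untyped.Add("oops");                     // compiles fine – no type safety
          int first = (int)untyped[0];             // unboxing plus a cast on every read

          List<int> typed = new List<int>();
          typed.Add(42);                           // stored as an int, no boxing
          // typed.Add("oops");                    // compile-time error – caught early
          int typedFirst = typed[0];               // no cast needed

          Console.WriteLine(first + typedFirst);   // 84
      }
  }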

Another common pitfall is to write your own collection object. That isn’t to say it’s never appropriate, but with as comprehensive a selection as the one .NET offers, you can probably save a lot of time by using or extending one that already exists, rather than reinventing the wheel.  In particular, the C5 Generic Collection Library for C# and CLI offers a wide array of additional collections “out of the box”, such as persistent tree data structures, heap based priority queues, hash indexed array lists, linked lists, and much more.

Common Mistake #8: Neglecting to free resources

The CLR environment employs a garbage collector, so you don’t need to explicitly free the memory allocated for an object. In fact, you can’t. There’s no equivalent of the C++ delete operator or the free() function in C. But that doesn’t mean that you can just forget about all objects after you’re done using them. Many types of objects encapsulate some other type of system resource (e.g., a disk file, database connection, network socket, etc.).  Leaving these resources open can quickly deplete the total number of system resources, degrading performance and ultimately leading to program faults.

While a destructor method can be defined on any C# class, the problem with destructors (also called finalizers in C#) is that you can’t know for sure when they will be called. They are called by the garbage collector (on a separate thread, which can cause additional complications) at an indeterminate time in the future. Trying to get around these limitations by forcing garbage collection with GC.Collect() is not a good practice, as that will block the thread for an unknown amount of time while it collects all objects eligible for collection.

This is not to say there are no good uses for finalizers, but freeing resources in a deterministic way isn’t one of them. Rather, when you’re operating on a file, network or database connection, you want to explicitly free the underlying resource as soon as you are done with it.

Resource leaks are a concern in almost any environment. However, C# provides a mechanism that is robust and simple to use which, if utilized, can make leaks a much rarer occurrence. The .NET framework defines the IDisposable interface, which consists solely of the Dispose() method. Any object which implements IDisposable expects to have that method called whenever the consumer of the object is finished manipulating it. This results in explicit, deterministic freeing of resources.

If you are creating and disposing of an object within the context of a single code block, it is basically inexcusable to forget to call Dispose(), because C# provides a using statement that will ensure Dispose() gets called no matter how the code block is exited (whether it be an exception, a return statement, or simply the closing of the block). And yes, that’s the same using statement mentioned previously that is used to include namespaces at the top of your file. It has a second, completely unrelated purpose, which many C# developers are unaware of; namely, to ensure that Dispose() gets called on an object when the code block is exited:

  byte[] buffer = new byte[100];
  using (FileStream myFile = File.OpenRead("foo.txt")) {
    myFile.Read(buffer, 0, 100);
  }

By creating a using block in the above example, you know for sure that myFile.Dispose() will be called as soon as you’re done with the file, whether or not Read() throws an exception.
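
The same pattern extends to your own classes that wrap a disposable resource. A minimal sketch (the LogWriter class is hypothetical, and a production implementation would normally also follow the full dispose pattern with a protected virtual Dispose(bool) overload):

  using System;
  using System.IO;

  // A hypothetical wrapper around a file-based resource.
  class LogWriter : IDisposable {
      private readonly StreamWriter _writer;

      public LogWriter(string path) {
          _writer = new StreamWriter(path);
      }

      public void Write(string message) {
          _writer.WriteLine(message);
      }

      // Called explicitly by consumers, or implicitly by a using block,
      // to release the underlying file handle deterministically.
      public void Dispose() {
          _writer.Dispose();
      }
  }

  class Program {
      static void Main() {
          using (LogWriter log = new LogWriter("app.log")) {
              log.Write("started");
          }   // Dispose() is guaranteed to run here, even if Write throws
      }
  }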

Common Mistake #9: Shying away from exceptions

C# continues its enforcement of type safety into runtime. This allows you to pinpoint errors much more quickly than in languages such as C++, where faulty type conversions can result in arbitrary values being assigned to an object’s fields. However, once again, programmers can squander this great feature of C#. They fall into this trap because C# provides two different ways of doing things, one which can throw an exception, and one which won’t. Some will shy away from the exception route, figuring that not having to write a try/catch block saves them some coding.

For example, here are two different ways to perform an explicit type cast in C#:

  // METHOD 1:
  // Throws an exception if account can't be cast to SavingsAccount
  SavingsAccount savingsAccount = (SavingsAccount)account;

  // METHOD 2:
  // Does NOT throw an exception if account can't be cast to
  // SavingsAccount; will just set savingsAccount to null instead
  SavingsAccount savingsAccount = account as SavingsAccount;

The most obvious error that could occur with the use of Method 2 would be a failure to check the return value. That would likely result in an eventual NullReferenceException, which could possibly surface at a much later time, making it much harder to track down the source of the problem.  In contrast, Method 1 would have immediately thrown an InvalidCastException making the source of the problem much more immediately obvious.

Moreover, even if you remember to check the return value in Method 2, what are you going to do if you find it to  be null? Is the method you’re writing an appropriate place to report an error? Is there something else you can try if that cast fails? If not, then throwing an exception is the correct thing to do, so you might as well let it happen as close to the source of the problem as possible.

Here are a couple of examples of other common pairs of methods where one throws an exception and the other does not:

  int.Parse();     // throws exception if argument can’t be parsed
  int.TryParse();  // returns a bool to denote whether parse succeeded

  IEnumerable.First();           // throws exception if sequence is empty
  IEnumerable.FirstOrDefault();  // returns null/default value if sequence is empty

Some programmers are so “exception averse” that they automatically assume the method that doesn’t throw an exception is superior.  While there are certain select cases where this may be true, it is not at all correct as a generalization.

As a specific example, in a case where you have a legitimate alternative (e.g., default) action to take when the operation fails, the non-exception approach can be a legitimate choice.  In such a case, it may indeed be better to write something like this:

  if (int.TryParse(myString, out myInt)) {
    // use myInt
  } else {
    // use default value
  }

instead of:

  try {
    myInt = int.Parse(myString);
    // use myInt
  } catch (FormatException) {
    // use default value
  }

However, it is incorrect to assume that TryParse is therefore necessarily the “better” method.  Sometimes that’s the case, sometimes it’s not. That’s why there are two ways of doing it. Use the correct one for the context you are in, remembering that exceptions can certainly be your friend as a developer.

Common Mistake #10: Allowing compiler warnings to accumulate

While this one is definitely not C# specific, it is particularly egregious in C# since it abandons the benefits of the strict type checking offered by the C# compiler.

Warnings are generated for a reason.  While all C# compiler errors signify a defect in your code, many warnings do as well. What differentiates the two is that, in the case of a warning, the compiler has no problem emitting the instructions your code represents. Even so, it finds your code a little bit fishy, and there is a reasonable likelihood that your code doesn’t accurately reflect your intent.

A common simple example is when you modify your algorithm to eliminate the use of a variable you were using, but you forget to remove the variable declaration. The program will run perfectly, but the compiler will flag the useless variable declaration. The fact that the program runs perfectly causes programmers to neglect to fix the cause of the warning. Furthermore, programmers take advantage of a Visual Studio feature which makes it easy for them to hide the warnings in the “Error List” window so they can focus only on the errors. It doesn’t take long until there are dozens of warnings, all of them blissfully ignored (or even worse, hidden).

But if you ignore this type of warning, sooner or later, something like this may very well find its way into your code:

  class Account {

      int myId;
      int Id;   // compiler warned you about this, but you didn't listen!

      // Constructor
      Account(int id) {
          this.myId = Id;     // OOPS!
      }
  }


And at the speed IntelliSense allows us to write code, this error isn’t as improbable as it looks.

You now have a serious error in your program (although the compiler has only flagged it as a warning, for the reasons already explained), and depending on how complex your program is, you could waste a lot of time tracking this one down. Had you paid attention to this warning in the first place, you would have avoided this problem with a simple five-second fix.

Remember, the C# compiler gives you a lot of useful information about the robustness of your code… if you’re listening. Don’t ignore warnings. They usually only take a few seconds to fix, and fixing new ones when they happen can save you hours. Train yourself to expect the Visual Studio “Error List” window to display “0 Errors, 0 Warnings”, so that any warnings at all make you uncomfortable enough to address them immediately.

Of course, there are exceptions to every rule.  Accordingly, there may be times when your code will look a bit fishy to the compiler, even though it is exactly how you intended it to be. In those very rare cases, use #pragma warning disable [warning id] around only the code that triggers the warning, and only for the warning ID that it triggers. This will suppress that warning, and that warning only, so that you can still stay alert for new ones.
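
For example, a sketch of suppressing a single known-benign warning (here CS0168, “variable is declared but never used”) around only the line that triggers it:

  class PragmaDemo {
      static void Main() {
  #pragma warning disable 0168
          // Deliberately unused placeholder, e.g. kept around as a debugging hook;
          // suppress CS0168 for this declaration only, then restore it.
          int scratch;
  #pragma warning restore 0168
      }
  }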


C# is a powerful and flexible language with many mechanisms and paradigms that can greatly improve productivity.  As with any software tool or language, though, having a limited understanding or appreciation of its capabilities can sometimes be more of an impediment than a benefit, leaving one in the proverbial state of “knowing enough to be dangerous”.

Familiarizing oneself with the key nuances of C#, such as (but by no means limited to) the issues raised in this article, will help optimize use of the language while avoiding some of its more common pitfalls.

From Google Apps to Office 365: Why my company ditched Google

You’re probably expecting me to write a scathing exposé on how I’ve come to dislike Google Apps. That’s quite far from the truth behind why we left Google. There is a lot more to the story than meets the eye. It goes way farther than just a decision based on boxes checked off on a spec sheet. More than a month after making the move to Office 365 full time, I can comfortably say we made the right decision as a company.

As for who can make an honest dissection of Google Apps against Office 365, I’d say I’m as well suited as anyone in the IT blogosphere to pass such critical judgement. On top of my own personal usage of Gmail since 2005 and Google Apps for my IT company since early 2010, I’ve been both a Google Apps Certified Trainer and a Google Apps Certified Deployment Specialist for years now. And I’ve personally been involved in Google Apps transitions for numerous small and large organizations in both the public and private sectors. So to say that I’ve been deeply invested in Google-ism for some time now is an understatement.

I’ve written some in-depth reviews of Google Apps and Office 365 separately in the past, and I get frequent mail about both based on how I pitted one suite against the other in this category or that aspect. And while I’m not saying for a moment that I take back any of the statements I made in those pieces, I do honestly believe that “dogfooding” a given platform in your day to day business is the truest way to form the most accurate opinion of a product.

Surely, all of the monthly consulting time I spend helping other clients with their Office 365 and Google Apps installations gives me raw insight from which to form solid opinions. But eating the dogfood you’re peddling to clients? That puts your own skin in the game in a way that nothing else compares to.

So that was my intended experiment of sorts. After spending nearly four years on Google Apps, learning its every nook and crevice, I called an audible with my staff and told them we were transitioning to Office 365 by Thanksgiving 2013. And that’s exactly what we did. By Turkey Day, we were fully transitioned off Google Apps and drinking Redmond’s email kool-aid in primetime.

The last month and a few days have been an interesting ride. From UI shock during the first week or so, down to natural comfort at this point. Here’s the skinny on what insight we’ve learned about leaving Google behind.

Forget Spec Sheets: This is a Battle of the Ecosystems

Anyone that has cold-called me asking about whether they should go Google or Microsoft for email knows full well I don’t toe the corporate line that either company wishes I would. The big players in the cloud email arena tend to have pitched their tents in one camp or another. They’re either Microsoft Office 365 only or, conversely, stuck in Google Apps-ville. Unlike car dealers, where buyers stepping through the doors know exactly what they’re going to hear when they walk in, clients looking for honest direction for their email and UC needs want more than marketing drivel.

The battle between Microsoft and Google goes a lot further than who has bigger inboxes, more mobile apps, or whatever new whizbang feature can generate easy buzz. I’ve come to see that this is more so a battle of the ecosystems at this point. Who’s got the all-encompassing platform that is looking to solve business needs the way your company views them? Who’s going to solve your email problems today, but offer you a segue to cloud document storage, unified communications, and so on tomorrow?

That’s the question companies and organizations should be asking themselves. Because it’s the realization I’ve come to after our two-feet-first jump onto Office 365 a little more than a month ago. Google Apps isn’t a bad platform by any means. In fact, it’s pretty darn good. But in my eyes, when you view both suites as the sum of their individual parts, as a collective experience, Office 365 takes the upper hand. And I’ll explain why in detail.

At face value, Google’s core Apps offerings in the form of Gmail, Docs/Drive, Sites, and Hangouts are fairly solid offerings. But as a collective whole, they lack a certain polish. That x-factor which takes a platform from just good or great, to excellent. Google’s way is just that — the Google way or the highway.

This in-or-out dilemma exists in many facets in the Google Apps realm. For example, using Google’s Hangouts functionality for video and voice chat requires you to have a Google+ account activated. It’s basically a Google account that is opted into Google’s social network, Google+.

I have nothing against Google+, as I find it vibrantly different and more gratifying than Facebook these days, but forcing your meeting participants to all have Google+ enabled on top of having Google accounts as well? That’s more than a bit self-serving if you ask me. In contrast, Microsoft’s Lync doesn’t require any of this for me to initiate meetings with external users. As long as I myself have a paid account for Lync, I can invite whoever I want (up to 250 of them, in fact) no matter if they’ve ever had an Office 365 or Microsoft account in their life.

Google plays the same card on the way they treat Microsoft Office users. Sure, you can upload anything you want into Google Drive and store it to your heart’s content — but good luck trying to edit or collaborate on those documents in a web browser. Google will gladly convert those files into Google Docs, and force you to play the Docs-vs-Office juggling act in your file storage needs. We did it for years, but I had enough.

The same goes for Google’s half-hearted support for Microsoft Outlook. I know very well that Google has been advertising their half-baked Google Apps Sync for Outlook tool for years. I’m far from an Outlook desktop app lover, as I use Outlook Web App nearly 99 percent of the time on Office 365, but I know many companies live or die by it. You don’t want me to describe the feelings that users of this plugin have conveyed to me. The comments I’ve heard in the field would make a Comedy Central comedian blush.

Not to mention that Google spent the better part of 2013 lambasting Microsoft for making changes to how Office installs via Click-to-Run, saying that their Sync tool wouldn’t be compatible with the new 2013 edition of Office for this reason. And then they did a 180-degree about-face come November 2013 and released an edition of Sync that actually does work with Click-to-Run after all. I guess enough enterprise customers screamed their lungs out at Google support and they eventually kowtowed.

Mind you, Outlook extensions of all sorts were functional with Outlook 2013 leagues before Google got their act together, including ACT! by Sage and ESET NOD32 Antivirus, to name a few. But I digress.

At face value, Google claims their Sync for Outlook tool is the perfect holdover for those who wish to use Outlook on Google Apps. In reality, I know this is far from the case. Of the numerous companies I’ve moved to Google Apps who are reliant on Outlook and use this tool, not one has been completely satisfied with the product due to bugs, glitches, and other oddities we run into all the time. Google should be advertising their Sync tool as sorta works, sorta doesn’t. (Image Source: Google)

If you take a look at the ecosystem that Office 365 affords, its breadth and approach are different in every conceivable way. Google believes in an all-you-can-eat pricing approach; Microsoft believes in paying for only what you need.  Google’s Drive cloud storage app treats Office files like the plague; Microsoft believes you should be able to work on the desktop or in the browser as you choose. Google’s Hangouts tool gets first-class treatment in Google-branded products only (Chrome, Android); Microsoft offers Lync capability nearly ubiquitously on almost every device and OS on the market.

The same can be said about Google’s approach to an industry compliance necessity for the medical sector, HIPAA, which has begun affecting our company due to our status as a business associate for healthcare customers. While Office 365 has supported full HIPAA compliance since its early days, Google was a holdout until September of last year. Mind you, September 23 was the deadline under the HITECH Act amendment that stated health organizations had to be in full compliance by that date. In short, too little too late from Google’s end – they shouldn’t be surprised that healthcare is staying far away from their platform.

It goes without saying, then, that if you are solely chasing feature matrices when making your decision between Google and Microsoft, you’re only revealing half the story. An email platform in 2013 is not just an inbox; it’s a unified communications tool that will make or break the way your organization works with the rest of the world.

SharePoint vs Google Drive: Who’s Hosting your Cloud File Server?

Up until Office 365, my company was living a double life in terms of its document storage needs. We had an office NAS box (a nice QNAP TS-239 Pro II+, which we still use for bare metal client PC backups) that was storing traditional Office documents for some aspects of our day to day needs. And then Google Drive, which was the hub for collaborative authoring that we needed for our onsite tech support team and training team.

But this duality was causing more confusion and headache as time went on. Was something stored on the NAS or Drive? Was it a Word document that we converted to Docs? Which copy was the master at that point? I call this mess the “Docs v Office juggling nightmare” and I was sick of it. Google Docs is awesome for sharing and collaborating, but Google forces you into using their online file format; it’s an all or nothing proposition.

So we ate our own dogfood once again after the 365 move, and converted our two-pronged data storage approach into a single unified SharePoint “file server in the cloud.” It’s definitely not pick-up-and-play like Google Drive/Docs is, but the time invested in building out document libraries with proper permissions was well worth it.

First of all, Google’s thinking around how they allocate and manage storage in Drive has always driven my clients and myself nuts. Instead of being able to dole out storage space that is meant for separated, shared purposes — like shared folders that represent root file shares of a traditional server — they force storage to be tied to someone’s Google account. That is usually an admin, a lead user, or someone similar. In theory, it works decently, but you run into traps easily.

For example, if someone creates a root-level folder outside the scope of a folder already owned and controlled by an account that has extra storage allocated to it, then anything placed inside that new directory will be counted against the respective owner’s storage quota. So as your organization grows, and people start using Drive the way that it was meant to be used – in a laissez-faire kind of way – then you had better hope all your users have a good handle on how Google Drive storage allocation works behind the scenes. If not, you’ll be falling into “whose storage are we using?” holes. The K-12 sector, which is taking up Apps in droves, is running into these headaches head on, I’m hearing.

SharePoint Online and SkyDrive Pro in Office 365 skip that mess altogether. If you’re working in true shared folders, or document libraries as SharePoint calls them, you’re working off pooled storage space available to all permitted users in your domain. By default, all E-level Office 365 plans (the only ones we recommend to clients) come with a 10GB base of shared space, with an extra 500MB of space added for each extra paid SharePoint user on your account. So if you are a company with 15 users and have SharePoint rights, you have 17.5GB of SharePoint space for your document libraries in the cloud. Simple as that.

SharePoint Online works hand in hand with SkyDrive Pro and allows me to securely sync our company’s entire cloud file shares to my laptop, which is locally secured by BitLocker in case of theft or loss. I have access to the exact same files on my desktop SSD (right side) as I do in the cloud and via web browser (left side). This is the cloud file storage nirvana I dreamt of with Google Drive starting back in 2012, but Google has thus far failed to deliver. As much as they claim otherwise, I can’t live in a Microsoft Office-less world … yet.

And SkyDrive Pro offers a completely distinct 25GB of space per person for their personal data storage needs. Think of it as a My Docs in the Cloud. It works more akin to the way Google Drive does, but for good reason: that space is ONLY meant for you, not to be shared with others in a file share environment. You can freely share docs out via Office 2013 or in Office Web Apps, but this is meant to be done on a limited basis with a few people. More formal sharing should be handled in document libraries in SharePoint sites.

Specs aside, have we lost any functionality on SharePoint? Not one bit. Usage of SharePoint and SkyDrive Pro is better in our organization now than under Google Apps on Drive previously, mostly due to there being no more need to juggle between what files can be Office documents and which ones have to be Google Docs. All of our Office documents (Word, Excel, PowerPoint, OneNote) can be shared and worked on offline and online equally well. Office Web Apps don’t have 100 percent feature fidelity yet, but they’re on-par with what Google Docs offered for getting work done quick and dirty in a web browser.

And we’re going to be leveraging SkyDrive Pro heavily soon with a new way of digital work orders that we are going to roll out for our techs which mixes SDP along with Excel, topped off with our new chosen endpoint devices: Lenovo Thinkpad X230 Tablets. A technician’s workhorse laptop hybrid with full stylus-driven tablet functionality. Initial tests have been working out real well for us.

I plan on writing a longer exposé on how we made the move to SharePoint, but at face value, we are enjoying SharePoint Online due to the numerous benefits it has provided. We erased duplication headaches, streamlined our file storage onto one platform, and combined what Google Sites and Drive had to offer with what SharePoint now does in one single interface.

I’m not saying Google Drive is a bad product in any way. But it didn’t solve our needs the way I had expected it to back when Google rolled it out in 2012. After loathing SharePoint for how complex it has always been, I find the new 2013 iteration a refreshing product that, when coupled with Office 365, is a no-brainer option for moving your file server to the cloud. And clients who we have been moving to SharePoint Online have been equally impressed.

Lync is for the Workplace; Hangouts is for Friends

Another huge tool we’ve grasped onto quickly is Lync for our intra-company communication needs. Our office manager is pinging our techs in the field with info and vice versa; I’m able to discuss problems with my staff over IM or free voice calls even in areas where cell signal is dead. And as I wrote about last year already, we ditched GoToMeeting a while ago in lieu of Lync for our online conferencing needs.

The battle of the ecosystems between Google and Microsoft is on full display in the video/voice/IM arena. Google is trying its best to transition a muddled 3-pronged offering landscape on its end into one single system. But for all their efforts, I’m still just as confused with their intentions because as of Jan 2014, we still have three distinct options for unified communications in the land of Google Apps.

Google Talk, the previous go-to option for intra-Google communication via IM and voice chat still exists in some remnants. My Google Apps Gmail account still offers it, for example. And then you have Google Voice, which has been Google’s POTS-enabled offering for more than a few years now for softphone telephone needs. But some of that functionality is being tangled into Google Hangouts, which is their bona-fide video and voice chat platform going forward.

If you asked me what Google’s UC strategy looks like in one sentence, I wouldn’t be able to answer you succinctly. It’s because at Google it feels like the left arm is on a different page than the right leg, and so you get the picture. They have a fractured array of offerings that all do a little something different, and many have overlapping features — and so the Google Apps proposition is confounded by too many choices, none of which present a single solid solution for what companies are yearning for in unified communications.

Stop the madness. Since our move to Office 365, Lync has been the answer to our frustrations. I don’t have to juggle between Talk and Hangouts for my conferencing and IM needs. I have one single app, one single set of functions to learn, and a tool which arguably ties into the rest of Office 365 very nicely.

Whereas Google relies on Hangouts, a tool that is for all intents and purposes a function deeply rooted in their social network Google+, Lync is an all-inclusive app that can stand on its own via various Lync desktop/mobile apps, but is also present in some facets in the web browser as well. As a heavy user of Outlook Web App, I can see presence information for my staff in emails that they send me, and the same goes for docs on SharePoint document libraries. It seems that I’m never more than a click away from starting a conversation over IM or voice with someone on Lync, reducing the barriers to getting the answers I need fast.

My favorite aspect of Lync has to be the universal access I have to the app no matter what device I am on. If I start a Lync conversation on my laptop at the office, I can head out on an emergency client call and continue the conversation on my smartphone (I use a Lumia 925 now) without a hitch. On Google Apps, this was previously only possible if I was in the web browser and carrying an Android phone. Google doesn’t offer much of anything for Windows Phone, but Microsoft offers almost everything for Google’s and Apple’s platforms. Who’s playing favorites in reality?

Google Hangouts limits you to measly 10-person meetings. Lync allows us to max out at 250 participants, and we can even tie into fancy Lync Room Systems, as shown above, for formal conference-style gatherings. It doesn’t cost anything extra beyond the price of an Office 365 E1 account or higher. A darn good deal in my eyes, especially if you can ditch GoToMeeting/WebEx altogether. (Image Source: TechNet)

Yes, I will admit Microsoft’s approach to Lync for the Mac desktop platform is still a bit pitiful, as numerous features in the Windows version are not available on the Mac side yet. And I called Microsoft out on it in a previous post. But from everything I’m hearing in my circles, Office 2014 for Mac (as we expect it to be called) will bring Lync and the rest of Office up to speed with what Windows users are afforded.

Another offering I’m anticipating, which should launch any month now, is Microsoft’s first-party Lync enterprise voice service that will enable us to use regular SIP desk phones with Lync for full telephone capabilities. While we are using RingCentral right now without a hitch and love it, I think Lync-hosted cloud voice is the holy grail of unified communications for my small business. And judging from the number of people who email me asking about this functionality, I’m not alone in my wants. Seeing what Microsoft reps stated last May about this coming-soon feature, all signs point to an inevitable 2014 launch.

Is Lync perfect? Not by a long shot. Mac support is still dodgy and behind the Windows client. I deal with off-and-on bouts of messages that refuse to reach my staff, with errors pointing to goofy behind-the-scenes Lync network issues. And Microsoft needs to vastly improve Lync’s integration with Outlook Web App to the level of what Google Talk has in Google Apps; the rudimentary support enabled right now is a bit of a disgrace compared to what it could offer users like me.

But unlike Google, which continues to distribute its efforts between three hobbled apps (Hangouts, Talk, Voice), Microsoft is 100 percent committed to building out Lync. And that’s a ride I am comfortable sticking around for, as it’s serving us well so far.

The Truth Behind Google Apps and HIPAA Compliance

One of the primary reasons I decided to take a swim in Office 365 land is due to Google’s lackluster adoption of HIPAA compliance for their suite. If all you use Google Apps for is email and document storage, Google’s got you covered.

But is your medical organization interested in building out an internal wiki or intranet on Google Sites? Sorry, that’s not allowed under their HIPAA usage policy. Or are you looking to do some video conferencing over Hangouts with other physicians or even patients? That’s a hot and burgeoning sub-sector of healthcare called telemedicine, but don’t plan on using Google Apps’ Hangouts for it; Google says you must keep Google+ shut down in your Apps domain to stay compliant with HIPAA. The list of don’ts doesn’t end there.

I reached out to Google Enterprise Support to get some clarification on what they meant by requiring core services to stay enabled for HIPAA compliance, and a rep named Patrick from that department replied to me via email:

My apologies for the misunderstanding, you are indeed correct. If you are under a BAA, you can turn off non-core services but core services such as Gmail, Drive, Contracts etc must remain turned on

This is another unpleasant necessity for organizations that want to, for example, merely enable Google Drive for cloud document sharing between staff members but do not wish for Gmail to be turned on. Coming from the public education sector before going on my own, I know full well that the picking and choosing of services in Google Apps is a highly desired function and one of the biggest selling points for Apps to begin with. So what the heck, Google?

Don’t get me wrong. HIPAA compliance with Google is now fully possible, but only if you’re willing to bow down to Google’s backwards requirements of what you can and can’t use on their suite.

I had full HIPAA compliance with Office 365 on the first day we went live, and I didn’t have to sacrifice SharePoint, Lync, SkyDrive Pro, or any of the other value-added benefits that come with the ecosystem. Seeing that Google Apps has been on the market for over 3 years more than Office 365, I find it quite unacceptable for Google to come to the game with one hand tied behind its back, and late at that.

I’m calling Google out because I know they can do much better than what they are advertising now as HIPAA compliance with Apps. And until that happens, I’m refusing to recommend Apps for any clients even remotely associated with the healthcare industry so they don’t have to go through the pains I described.

Office 365: Still Not Perfect, But A Value Proposition Better than Apps

There are a lot of things I love about Google Apps. Its release schedule for new features is blazing fast, much quicker than Office 365’s. Google has a knack for releasing innovative features, even if they don’t fill the gaps I’m yearning to see addressed. And its all-you-can-eat price of $50 USD per user per year for Apps is a hard number to beat, even for Office 365.

But I’ve come to learn that wading through the marketing speak and immersing yourself in a product as massive as an email suite is the only way to truly uncover what each platform has to offer. No amount of consulting for clients could give me the insight that actually using these two suites day to day has afforded me.

I don’t regret for a minute the four years we spent on Google Apps. It’s a good, solid product, second only to Office 365 in my eyes. Hosted Exchange, Lotus Notes, GroupWise, and all the other second-tier options are far behind these two suites in pricing, bang for the buck, and security/compliance standards. But Microsoft’s value proposition is one I can relate to better.

Comparing the apps one by one, you will likely find areas where Google’s respective apps do a better job at this or that. But an email platform investment is a two-footed dive into an all-encompassing experience that goes beyond the inbox, today more so than ever before. And that’s where I find Microsoft winning the ecosystem battle: in providing an immersive experience that isn’t riddled with rough edges from engineering experimentation.

At the end of the day, I have a tech consulting business to run. While I enjoy fiddling with the ins and outs of features for my customers’ needs, when I come back after a ten-hour day onsite, the last thing I want to be doing is bending over backwards to work the way a suite expects me to. And that is increasingly what I was feeling with Google Apps. Google’s vision of cloud computing is markedly different from most others’, and if you can’t abide by its rules, you will pay the price in lost time and functionality.

My customers have learned this very fact with the Google Apps Sync for Outlook tool. I’ve experienced this with our frustrations with Drive/Docs. And most recently, Google’s HIPAA compliance stance leaves me scratching my head. So for the time being, we’ve bid Google farewell for our own internal needs.

Will we return someday? I hope so. But for now, Office 365 is doing a darn good job and I’m more than pleased, even if Microsoft has its own kinks to work out with Lync and Outlook Web App. I’ve brought my company onboard for the ecosystem, not purely for an email inbox. If you can step back and objectively compare email platforms in the same manner, you may come to a very different conclusion as to what vendor you should be sleeping with tonight.


Microsoft's Universal Windows App Store Is Huge For Developers—And Consumers

Windows is now truly one operating system, whether you’re on a smartphone, tablet or PC.

Windows Phone 8.1, Windows RT 8.1 and Windows 8.1—that is, the phone, tablet (sort of) and PC flavors of Windows—are no longer distinct operating systems that largely look alike but vary wildly under the hood. Microsoft has spent the last couple of years updating its disparate Windows versions so that they work together, with the goal of letting developers write one app and deploy it—after some tweaking to the user interface—to Windows PCs, tablets and smartphones.

True, Microsoft’s operating system naming conventions are still awful. But that shouldn’t obscure the major step forward this code-base unification represents to developers, nor the benefits that will flow to users as a result.

All three flavors of Windows now run on a common software core, or “kernel,” with a common runtime (i.e., the set of tools necessary to run programs). The major remaining differences between them have mostly to do with how they handle user-interface issues across a variety of devices, input methods (think touchscreens vs. mouse and keyboard), hardware (not just CPU and memory, but graphics processors, accelerometers and other sensors) and screen sizes.

Microsoft knows that those differences still present obstacles for developers, and hopes to address many of them with an update to its integrated developer environment, Visual Studio 2013, which it announced at Build 2014 this week.

Kevin Gallo, Microsoft’s director of the Windows Development Platform, describes it in a post on the Windows blog:

We’ve designed Windows for the long term, to address developers’ needs today, while respecting prior investments. We do this with one familiar toolset based on Visual Studio 2013, with support for C#, C++, JavaScript, XAML, DirectX, and HTML. The tools and technology stacks already used by hundreds of thousands of developers extend app development across Windows devices. Developers who have built apps for Windows 8.1 will find it fairly easy to reuse their work and bring tailored experiences to Windows Phone 8.1. Windows Phone 8 developers can use the same code, and also access new features, when they develop for Windows Phone 8.1.

Write Once, Deploy To All The Windows

The Visual Studio update allows developers to port existing apps across devices and their specific versions of Windows. For instance, if you have a Windows 8.1 app, you can use settings in Visual Studio to target smartphone-specific capabilities in Windows Phone 8.1. Visual Studio is designed to let developers use the same basic app code across different devices and Windows flavors, and allows them to emulate how an app will behave in each case.
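To make the “shared code, tailored UI” idea concrete, here is a minimal sketch of my own (not Microsoft’s sample code) of what a single C++ source file shared between the two project heads might look like. The function name and strings are invented; I’m assuming the standard WINAPI_FAMILY macros that the Windows 8.1 and Windows Phone 8.1 SDKs define for native code.

```cpp
// Sketch: one C++ file shared by the Windows 8.1 and Windows Phone 8.1 heads
// of a universal project. Assumption: the phone target builds with
// WINAPI_FAMILY set to WINAPI_FAMILY_PHONE_APP, as the Phone 8.1 SDK does.
#include <winapifamily.h>
#include <string>

std::wstring WelcomeText()
{
#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_PHONE_APP)
    // Phone-only tweak: shorter copy for a 4.5-inch screen.
    return L"Welcome!";
#else
    // The same file, compiled unchanged for Windows 8.1 tablets and PCs.
    return L"Welcome to our universal Windows app.";
#endif
}
```

Everything outside that small conditional block compiles identically for both targets, which is the whole point of the shared-project model.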

From Microsoft’s perspective, the two most important takeaways for developers are these:

  1. You can build universal apps and share all the code while just making tweaks to the user interface
  2. Visual Studio offers a variety of diagnostics tools to optimize apps for use on different devices: smartphones running Windows Phone, laptops running Windows 8.1, etc.

Essentially, Microsoft wants to make it as easy as possible for developers to build Windows apps. Given Microsoft’s minuscule share of the mobile market to date, you can hardly blame it.

In practice, this means Windows Phone developers—and you know who you are—essentially have three options. If you’ve built your apps on Windows Phone Silverlight 8.0, you don’t have to do anything; they’ll continue to work as is on Windows Phone 8.1.

Alternatively, you can update your apps to Windows Phone Silverlight 8.1 to access the new features in Windows Phone 8.1, such as the Cortana personal assistant and customizable home screens. Or you can migrate your apps to the universal Windows app platform with the new tools in Visual Studio. And of course, if you prefer, you can just start from scratch and build a “universal” Windows app to Microsoft’s specifications, which would theoretically optimize it for the new unified Windows code base.

One of the biggest bits of news is that Microsoft is encouraging developers to use whatever tools they want. Whether a developer chooses C# or Visual Basic (VB)—or C/C++—to write native apps, it’s all good. Microsoft is also actively pushing developers to build cross-platform apps with JavaScript and HTML5/CSS, and has promised an update to Internet Explorer 11 with hardware-accelerated graphics support that takes advantage of a device’s GPU while leaving the CPU untouched.

Buy Once For All Of Your Windows

For consumers, Microsoft aims to make the process of buying an app easier. If you buy an app for your Windows 8.1 laptop, you can automatically download it to your Windows Phone, or vice versa. Microsoft insists that you won’t need to buy separate apps for separate versions of the operating system because, essentially, Windows is now all one big operating system. The same is supposed to hold true for in-app purchases within these apps—they should migrate from laptop to tablet to smartphone as well.
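On the developer side of that promise, here is a rough C++/CX sketch of my own (not lifted from Microsoft’s documentation) of how shared code might check an in-app purchase through the Store’s license information, which follows the user’s Microsoft account rather than a single device. The "premium_features" product ID is made up for illustration.

```cpp
// Sketch: shared C++/CX code that checks whether the signed-in user owns a
// hypothetical in-app purchase. Because the license lives with the Store
// account, the same check should answer consistently on a Windows 8.1 PC
// and a Windows Phone 8.1 handset.
using namespace Windows::ApplicationModel::Store;

bool UserOwnsPremiumFeatures()
{
    // Note: CurrentApp only works for apps published to the Store; during
    // development, CurrentAppSimulator stands in for it.
    auto licenses = CurrentApp::LicenseInformation->ProductLicenses;
    return licenses->HasKey(L"premium_features") &&
           licenses->Lookup(L"premium_features")->IsActive;
}
```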

Apple doesn’t do this. If you buy an app on Mac OS X for your iMac or MacBook, you still need to buy or download the iOS version separately for your iPhone or iPad. Google doesn’t do this, either. If you buy an app or extension for Chrome OS, you still need to buy the Android version of that app on Google Play.

Some individual apps for Android and iOS, of course, do let customers download versions for different devices—for instance, via a subscription service or universal login. But that’s up to the app developer. It’s not required by Apple or Google.