Long-Term Employment Is Dead

A few decades ago there was an implicit contract between employees and the organizations they worked for.  As an employee, you would be loyal and committed to the organization you worked for – giving it your labor and your attention.  In exchange, that organization would take care of you for life.  In fact, working at an organization for 20, 30, or over 40 years was really not that uncommon.  Then, once you left that organization, you would get a pension.  Some of these long-term employees still exist, but they are a rare, dying breed on the verge of retirement (or have reached a point where they too now need to switch jobs).  Contrast that with people like my 22-year-old brother, who doesn’t even know what a pension is – and neither do most other people his age.

Long-term or lifetime employment as we know it today is completely, utterly, and unequivocally 120% dead, and it ain’t coming back…ever.

The latest numbers from the Bureau of Labor Statistics, released in January 2012, show that the average tenure was 4.6 years.  For workers over 65 the average tenure was 10.3 years, and for workers between 25 and 34 the average tenure was 3.2 years.  If you combine this data with the fact that large organizations are being displaced by newcomers more rapidly than ever, then you have a bit of an “interesting” situation.

But why is this happening?  There are a few reasons.

Globalization

Our access to opportunities is no longer limited to our physical location like it was decades ago.  Today, you can be based in Beijing yet still have the opportunity to work for the hottest start-up in the Bay Area (which for many eventually means re-location).  More opportunities and access to organizations all over the world means more choice, more freedom, and less commitment to one organization.
Big company doesn’t mean job security

I wrote about this a few weeks ago, but with the massive sweeping layoffs that many larger organizations go through, the incentive to work there shrinks dramatically.  As a result, seeing companies as “stepping stones” to better opportunities is pretty much the norm.  The new workforce is growing up seeing these layoffs happen first hand, so they are very aware of this and are preparing themselves accordingly.

Accelerating rate of change

Not only is the world of work changing, but the rate at which it is changing is accelerating.  This creates more market turbulence and instability.  This ties heavily into the theme above about large organizations and job security; however, market turbulence affects all organizations and all industries.  In more dynamic and turbulent conditions there is no such thing as long-term stability.  People get tossed around from place to place, companies go under, new companies emerge, and this cycle repeats more and more often.

New loyalties

Instead of being loyal to a particular company, many are now loyal to the specific people they work with.  When these people leave, they usually bring their “friends” with them.

Changing expectations

Today the levels of employee disengagement appear to be staggeringly high.  New employees typically start out working for an organization with a very positive outlook; however, once they start to feel that they are not appreciated, or stop enjoying their jobs, they begin to look elsewhere.  Consider the many opportunities that are now present through crowd-funding sites like Indiegogo, disruptive businesses like Uber or Lyft, the ease of starting your own business, or even becoming a full-time freelancer on sites like oDesk or Elance.  The mind-set of lifetime or long-term employment doesn’t even exist for many employees today.  They simply expect to be at an organization for a few years and then move on to the next thing.

So, now that we know that long-term employment is dead, what should organizations be doing to adapt?  This is a topic that I’m going to explore in my next post.  In the meantime I’d love to hear your thoughts, experiences, and feedback around this.  Are you noticing this trend at your organization and if so, what are you doing about it?

http://www.forbes.com/sites/jacobmorgan/2014/03/20/5-reasons-why-long-term-employment-is-dead-and-never-coming-back/

Database Build and Release with Jenkins

Continuous integration: Our current workflow

We currently use Jenkins (Hudson) as our CI build server.  All database development is done using Red Gate SQL Source Control and Subversion. SQL Source Control handles both the schema and the data changes.

Our workflow is as follows:

  • Changes are committed to Subversion using SQL Source Control.
  • Jenkins polls Subversion for changes and triggers a build.
  • Jenkins creates a brand new database, rebuilds everything from scratch, and deploys all the schema objects to it via SQL Compare.
  • If there are any problems caused by missing dependent tables, columns, procedures, or functions, the build will fail and the developer who committed the offending change will be emailed.
  • We also use the Jenkins “Promotions” plug-in to deploy database builds to our Integration, QA and Staging environments. In doing this:
    • Red Gate SQL Compare is used to deploy the schema
    • Red Gate SQL Data Compare is used to synchronize the contents of lookup tables
    • We have custom scripts to stop and start replication in environments where replication is in place. The script dynamically loops through all entities in the database and attempts to drop the subscription of any replicated object.
  • When we are preparing deployment to our Staging and Production environments, we do not blindly deploy the changes.  Instead, we use Red Gate SQL Compare to generate the alter script, which our DBA team then inspects manually to ensure we are making appropriate changes (e.g. making sure we are not needlessly creating a large index).
  • We use the Jenkins versioning module to create a build number for every build.  This build number gets baked into a user-defined function, and if you want to know what version is installed, you can just call SELECT dbo.DATABASE_VERSION() and you will see the formatted version number.  (A minimal sketch of such a function follows this list.)
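
For illustration, here is a rough sketch of what such a version function could look like.  The function name matches the one above, but the body, return type, and version format are my assumptions, not the team’s actual script:

    -- Hypothetical sketch: the build process would regenerate this function
    -- on every build, baking in the current Jenkins build number.
    -- (CREATE OR ALTER requires SQL Server 2016 SP1 or later; on older
    -- versions you would DROP and re-CREATE instead.)
    CREATE OR ALTER FUNCTION dbo.DATABASE_VERSION()
    RETURNS VARCHAR(32)
    AS
    BEGIN
        RETURN '1.742';  -- e.g. 1.$(BUILD_NUMBER) substituted at build time
    END
    GO

    -- Any environment can then be queried for its deployed version:
    SELECT dbo.DATABASE_VERSION();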

The problem that we fixed

Inconsistent source

In the past, everything was managed by developers using hand-crafted SQL scripts.  Developers were inconsistent in how they authored the scripts.  Some scripts could be run multiple times (e.g. performing a CREATE if the object did not already exist, or an ALTER if it did); some only allowed for forward-moving updates.  Some developers would put everything into a single script that was hard to maintain and diff between updates; others would break the scripts apart so that each object was defined in its own file, making the deployment task more difficult.  Sometimes the files were checked into our version control system; sometimes they lived only on the developer’s workstation.  (A sketch of the rerunnable style appears below.)
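
As an illustration of that rerunnable style, a script of this kind guards every change with an existence check so it is safe to execute repeatedly.  All object names here are invented for the example:

    -- Rerunnable ("idempotent") change script: safe to run more than once.
    IF OBJECT_ID('dbo.Widget', 'U') IS NULL
    BEGIN
        -- First run: create the table
        CREATE TABLE dbo.Widget (
            WidgetId INT NOT NULL PRIMARY KEY,
            Name     NVARCHAR(100) NOT NULL
        );
    END

    IF COL_LENGTH('dbo.Widget', 'Price') IS NULL
    BEGIN
        -- Later change: add the column only if an earlier build lacks it
        ALTER TABLE dbo.Widget ADD Price DECIMAL(10, 2) NULL;
    END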

Versioning problems

Our previous attempts at versioning our databases were all clunky and did not scale.

We had no way of determining what database version we had running in the various environments.  Every release involved a lot of finger-crossing because we could never be 100% sure that what we were testing was representative of what the production environment would look like when we deployed.  Deploying database changes was a nightmare and relied on developers executing scripts that they usually stored locally on their hard drives.

Problems of a manual process

The team was getting too big.  A release would contain dozens of files, written in an inconsistent manner, that had to be opened up and run from SSMS during a release.  If there were changes to scripts, the QA team would have to holler at the developer teams whenever they wanted the scripts deployed.  Now it’s just a button click in Jenkins to get the changes they want.

Lost database updates

We also did not have confidence that what we were testing in our development environments would resemble what we were releasing to production.  Smoke testing database updates often revealed missing objects that had been deployed manually to our development database servers but never captured in a deployment script.  This was probably our biggest headache, and the one that forced our hand in finding a solution.

Search for a solution

We tried using Visual Studio Database Projects.  At one point I got it working, but it was really fragile and a nightmare to manage.  It did not have the flexibility that we needed.  For example, we had no way to determine what changes were being made behind the scenes.  The deployment process was done blindly, and it seemed to break very easily and without explanation.  Searching the internet was usually a fruitless effort, since almost no one seems to be using Visual Studio Database Projects.

Our company’s engineering architect gave me the task of finding a way to version control our database.  He was terrified because if we ever needed to roll back a release, we’d have no idea what state the database was previously in.  I’ve known about Red Gate SQL Compare since around 2006, and discovered SQL Source Control in May 2011.  By June 2011, we were using Red Gate to deploy our database changes to production, primarily using SQL Source Control, but also using SQL Compare and SQL Data Compare.  We use SQL Source Control to make Subversion the source of record for all database entities.  We also couple it tightly with our Continuous Integration server to validate builds and promote (deploy) changes to our many environments.

Introducing the new process

It’s always a challenge to introduce a new process to a development team.  Some developers were more enthusiastic than others; a few who were used to the status quo had a harder time understanding the “why” behind the change.  After an hour-long demonstration of how Red Gate SQL Source Control works, the team seemed satisfied that checking in their changes would save time rather than add a tax to their workload.

We have seen a lot of benefits from using SQL Source Control.

  • We now have a source of authority on the state of a database build — We can be confident that what we are developing and testing is what is going to be deployed to production.  That by itself is priceless.
  • Developers don’t need to waste time hand-crafting database alter scripts
  • Deployment is now automated — Developers don’t need to be bugged by QA every time QA is ready to accept a new version of the application.
  • We have identified holes in our process.  In many cases, engineers were making changes directly to production, which is a big no-no.  Red Gate helps identify these unauthorized changes.  (A rough sketch of this kind of drift check follows this list.)
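
The tooling automates this, but as a rough illustration of the kind of check involved – and this is my sketch, not Red Gate’s actual mechanism – a query like the following lists objects recently modified on a server, e.g. hotfixes made directly in production outside the deployment process:

    -- List user objects changed in the last week: candidates for
    -- out-of-process ("unauthorized") modifications worth investigating.
    SELECT name, type_desc, create_date, modify_date
    FROM sys.objects
    WHERE is_ms_shipped = 0
      AND modify_date > DATEADD(DAY, -7, SYSDATETIME())
    ORDER BY modify_date DESC;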

Database Source Control has given us the following benefits

  • We now have a definitive build of our database in source control
  • Database deployment is now automated — We can deploy any database version to any development environment with a few mouse clicks, by both developers *and* QA engineers
  • Developers have more time to develop, since they no longer need to manage deployment scripts and they no longer need to deploy them
  • We were able to identify differences between instances of our databases in different environments.  In some cases the changes were minor (e.g. rarely used indexes), in other cases the changes were quite large (e.g. missing columns and different stored procedure parameters).

We now have fewer issues when we deploy changes to production.  We have the ability to roll back changes.  We can now perform a simple SELECT statement to determine what version the database is running.

https://www.simple-talk.com/sql/sql-tools/database-build-and-release-with-jenkins/

Job Hopping Is the 'New Normal' for Millennials

The average worker today stays at each of his or her jobs for 4.4 years, according to the most recent available data from the Bureau of Labor Statistics, but the expected tenure of the workforce’s youngest employees is about half that.

Ninety-one percent of Millennials (born between 1977 and 1997) expect to stay in a job for less than three years, according to the Future Workplace “Multiple Generations @ Work” survey of 1,189 employees and 150 managers. That means they would have 15 to 20 jobs over the course of their working lives!

So what would all this job-hopping do for young workers’ careers? For applicants, job instability on a resume could come at the cost of the dream job. For years, experts have warned that recruiters screen out chronic job-hoppers, instead seeking prospective employees who seem to offer longevity.

Talent acquisition managers and heads of Human Resources make a valid case for their wariness of resumes filled with one- and two-year stints. They question such applicants’ motivation, skill level, on-the-job engagement and ability to get along with colleagues.

These hiring managers worry they’ll become the next victims of these applicants’ hit-and-run jobholding. For companies, losing an employee after a year means wasting precious time and resources on training and development, only to lose the employee before that investment pays off. Plus, many recruiters may assume the employee didn’t have time to learn much at a one-year job.

The Upside of Job Hopping

But for newly minted college graduates, job-hopping can speed career advancement. According to a paper out of the St. Olaf’s Sociology Department entitled “Hiring, Promotion, and Progress: Millennials’ Expectations in the Workplace,” changing jobs and getting a promotion in the process allows Gen Y employees to avoid the “dues paying” that can trap workers in a painfully slow ascent up the corporate ladder.

Job hopping can also lead to greater job fulfillment, which is more important to Gen Y workers than it was to any previous generation: A 2012 survey by Net Impact found that 88 percent of workers considered “positive culture” important or essential to their dream job, and 86 percent said the same for work they found “interesting.” Job-hopping helps workers reach both of these goals, because it means trying out a variety of roles and workplaces while learning new skills along the way.

And economic instability has erased, especially for younger workers, the stigma that once accompanied leaving a job early. That’s because strategic hopping has been all but necessary for as long as they can remember. Workers today know they could be laid off at any time – after all, they saw it happen to their parents – so they plan defensively and essentially consider themselves “free agents.”

If that freedom seems an undue privilege, think again. The downside to the freedom they enjoy is financial insecurity worse than that of any other generation in the past half-century. That’s a steep price to pay.

So while Baby Boomers started working with an eye on gaining stability, raising a family, and “settling down,” today’s young workers take none of that for granted. Instead, as shown by Net Impact’s survey, they are more concerned than their predecessors with finding happiness and fulfillment in their work lives.

Indeed, since humans have been proven to be terrible at predicting what will make us happy (as shown by Harvard happiness guru Daniel Gilbert), it’s crucial that we find it through trial-and-error.

So what does a Chief Human Resource Officer do in the face of the perceived advantages of job-hopping amidst the potential cost to the organization? Here are three tips:

1.     Offer Workplace Flexibility

According to research by Future Workplace, flexible hours and generous telework policies are even more important to younger workers than is salary. To keep your employees around for more than a year, give them the chance to adjust their schedules when the situation calls for it.

Understand the future of work and the demands prospective employees place on employers today. In the Future Workplace study “Multiple Generations @ Work,” workplace flexibility trumped both compensation and career progression in importance. Yet managers interviewed did not rate this as one of the “perceived” top five levers of attractiveness. So ask yourself, do your managers understand the importance of workplace flexibility to engage new hires? Are your employees leaving for reasons other than job promotions?

2.     Listen To Your Employees

More than previous generations, Gen Y workers crave the chance to contribute creatively to the company and have their ideas heard, according to survey results from Future Workplace. This helps them grow professionally in each position, which will entice them to stick around longer, since personal development is a main reason workers job hop in the first place.

3.     Communicate The Company’s Mission & Values

Increasingly, employees want to work at a company whose values match their own. The same Net Impact survey mentioned above found that 58 percent of respondents said they would take a 15% pay cut in order to work for an organization “with values like my own.”

In order to maximize the number of your employees who achieve that goal, and therefore stick around for the long haul, make sure to communicate your company’s values during the recruiting process. If applicants know what they’re signing up for when they pursue positions at your company, the ones who would leave due to value differences will weed themselves out.

A Marker of Ambition

But above all else, keep an open mind about job-hopping applicants. Many forward-looking hiring managers have done so, and have embraced a positive outlook on job-hopping. Barrie Hopson, an author and career consultant, has noticed this in his own research. “Before, if you wanted something other than the single-track you were considered unreliable, a dilettante,” said Hopson in a recent interview. “But now employers are beginning to understand the benefits of employees who wish to develop a broad skill set.”

So, hiring managers, before dismissing a scattershot resume, consider the context; it may demonstrate ambition, motivation and the desire to learn new skills more than it shows flakiness. More employers are realizing that this is the new normal, and coming around to appreciating its advantages.

Readers: If you are an applicant, do you worry about having too many positions on your resume? Human Resource Officers: have you ever hired a job-hopper? Did his/her previous work experience predict outcomes at your company?

Reports of Scale-Out’s Demise Are Greatly Exaggerated

A recent blog post highlighted a Microsoft technical report which asserts that most Hadoop workloads are 100 GB or smaller.

That is funny.. I used to work on a ten-terabyte database on a Pentium 3 with the 32-bit limit. Lol.

I still think that ‘more memory’ isn’t always the best answer.  SQL Server Analysis Services, combined with somewhat greater latency.. is clearly the most affordable solution..  And it has plenty of performance for most people.

I still think that Microsoft’s move into the Tabular Model.. is a great waste of energy. We don’t need more memory usage.. We need better wizards and a better, more extensible, open architecture.. And what the world REALLY needs is more people who speak MDX.

http://blog.scaleoutsoftware.com/reports-of-scale-outs-demise-are-greatly-exaggerated/

Google's 'right to be forgotten' requests begin after EU court rules that personal data must be deleted from search results | Mail Online

So I guess I need to be a European citizen in order to qualify to do this? Does anyone know how exactly I become a dual citizen? Dead serious.

I would love to petition to have Usenet history cleaned up.

http://www.dailymail.co.uk/sciencetech/article-2629050/Google-receiving-right-forgotten-requests-EU-court-rules-personal-data-deleted-search-results-request.html

NoSql .. hilarious comment from reddit

One consideration should be whether or not it’s a good idea to have folks who lack expertise in database systems choose the database. One might consider that a risky approach.

Kinda like having database folks choose your web framework, we now have legacy systems in which the database was chosen by the web developers… and we ended up with MongoDB. Since the web guys weren’t database experts, they predictably under-estimated the value of:

• ability to run ad hoc queries on your database – if you can’t find corrupt data it must not exist
• ability to enforce data quality – so, yesterday’s data standards are different from today’s…you don’t want to look at old data do you?
• ability to avoid keeping 1000s of redundant copies of the same piece of data – what do you mean your last name changed?
• ability to avoid having 1000s of schemas – soooo, how do we now test your new query against every undocumented schema that exists in this collection?

NoSQL sounded better when it was more theoretical. Now that we have more experience with it, people are realizing that they have often given up more than they gained.
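
To make the second point concrete: in a relational database, data quality rules are declarative and enforced at write time. A one-line sketch (the table and rule are invented for illustration):

    -- Declarative data quality: the database rejects bad rows up front,
    -- instead of every application re-validating (or forgetting to).
    ALTER TABLE dbo.Customer
        ADD CONSTRAINT CK_Customer_Email CHECK (Email LIKE '%_@_%._%');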

http://use-the-index-luke.com/blog/2013-04/whats-left-of-nosql

Database CI and automated deployment with Jenkins: a Step by Step Tutorial

Up until last week I hadn’t used Jenkins. However, I got it working in less than an hour and documented the process. So here’s a step-by-step tutorial on how to do it yourself.

The point is that CI shortens your feedback loop, so that changes that might break the build are caught as soon as they are committed to source control. It also makes it very easy to automate unit or integration tests, but that is not what I am discussing now. While CI started with application code, you should apply the same principles to databases by automating your standard deployment tool to deploy each source control commit to a CI database.

In this walk-through I will explain how to get the acknowledged benefits of CI for your databases.

Before you start, here’s what you need:

In my case I do everything locally on my own machine but you can set things up as you like. In either case, these are the tools and components you’ll need to have in place:

  • A SQL Server that you can connect to.
  • SQL Source Control (Red Gate tool) to connect your database to a source control repository. I’m using TortoiseSVN, which is open source. (Database source control is a prerequisite for database CI. If you don’t already source control your DB, you should sort that out first. If you work in SSMS, SQL Source Control will make versioning your database trivial.)
  • A Deployment Manager server and agent. (You can get it here and the Starter Edition is completely free.)
  • The SQL Automation Pack for the Red Gate command line tools. (You can use the free 28-day trial to follow this tutorial.)
  • And, finally, the latest build of Jenkins CI (open source) which you can get from the Jenkins website.

This post assumes that you already have SQL Source Control and Deployment Manager set up and that you broadly understand how they work. If you have these bits and pieces installed and in place you’re ready to follow in my footsteps!

Step 1: Create & configure a new job in Jenkins CI

  1. In Jenkins click New Job (top left) and give it the name “1. Database CI and Publish DB Package to DM”, and select the radio button to Build a free-style software project.
  2. Add the root of your source control repository where it says “Repository URL”.
  3. Finally, set up a build trigger. The easiest way is to poll your SVN repo once a minute (use “* * * * *”). (You may want a push trigger to improve performance, but polling is a great way to get started.)

Step 2: Copy scripts from the SQL Automation Pack

  1. Fire up the SQL Automation Pack and click the Open Folder button under the Build Scripts section.
  2. Copy the contents of your Build Scripts folder into the SVN repository containing your code and commit everything. After this step my folder now looks like this:

    1-screen-shot-1

Step 3: Configure variables in “sqlCI.targets”

Open the “sqlCI.targets” file in a text editor so that you can configure the variables that get Jenkins to update a CI database based on the code you just committed to source control and, if successful, generate a package for Deployment Manager. (Note: you shouldn’t need to change anything after the “<!-- You shouldn’t need to change anything after here. -->” marker.)

Set the following variables in this file:

  1. Database folder path: This is the relative path from whatever you set up as your repository in Jenkins earlier. In my case, the full path that I use for SQL Source Control is: “file:///C:/Users/alex.yates/Desktop/Repositories/DemoRepositories/WidgetShopWithJenkins/trunk/DB/Create Scripts” and I have already told Jenkins that my source control location is: “file:///C:/Users/alex.yates/Desktop/Repositories/DemoRepositories/WidgetShopWithJenkins/trunk”. So for this variable I entered just the extra bit: “DB\Create Scripts”, which will probably work for you too.
  2. Database server: The SQL Server instance you’ll be connecting to.
  3. Target database name: The database you want to deploy to on the above server.
  4. Either Windows Authentication or SQL Username and SQL Password: I tend to use the second option as it’s easier to set up if you’re just testing.
  5. NuGet package name: This is the name Deployment Manager will use whenever it refers to the package for your database.
  6. Build number: The correct syntax for the build number variable in Jenkins is BUILD_NUMBER. So I used the following convention for my package versions: 1.$(BUILD_NUMBER). This way the version number of the package will reference your build number and increment each time the build is run.
  7. Output folder path: The directory that you would like to output packages to. (NB: This should be different from the directory that you are building from, to avoid packaging up previous build output.)
  8. Deployment Manager URL: The URL of your Deployment Manager server.
  9. Deployment Manager API key: This can be found by clicking your log-in name in the top right corner of Deployment Manager.
  10. Automation licence serial number: This is the licence key for your SQL Automation Pack, if you have one. If not, you can use the 28-day free trial to get started.

You’re done with this bit! Save the file and commit your changes to source control.

Step 4: Tell Jenkins to run your build

  1. Download the Jenkins MSBuild Plugin. You can find it by going to Jenkins > Manage Plug-ins and scrolling down the list of available plug-ins as shown below:

    3-screen-shot-3

  2. You also need to tell Jenkins where your MSBuild.exe files are located by going to Jenkins/Manage Jenkins/Configure System and adding your MSBuild installations as shown below:

    4-screen-shot-4

  3. Now select the “Build a Visual Studio Project or solution using MSBuild” build step and point it at your sqlCI.proj file (which references the sqlCI.targets file).
  4. Click Save.

And that’s that.

Now every time you commit a database change with SQL Source Control, Jenkins will (within a minute) run the sqlCI.proj file, which will run the SQL Compare command line to update your CI database. This will fail if your database is in an inconsistent state (see the example below), but if it is successful Jenkins will generate a package for you and publish it to Deployment Manager. This means that all your database packages are generated automatically, they’ve been through at least some automated tests to ensure that the database is deployable, and all your packages can be traced back to source control.
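
For a feel of what “inconsistent state” means here, consider a commit like the following (all names invented for the example): a view still references a column that another commit renamed. It may work on the committer’s machine, where the old column still exists, but it fails when deployed to the clean CI database, and the build goes red:

    -- One commit renames a column (the table previously had 'Price')...
    CREATE TABLE dbo.Widget (
        WidgetId  INT NOT NULL PRIMARY KEY,
        UnitPrice DECIMAL(10, 2) NOT NULL
    );
    GO

    -- ...while a committed view still references the old name.
    -- Deploying this to the CI database fails with
    -- "Invalid column name 'Price'", which fails the build.
    CREATE VIEW dbo.ExpensiveWidgets
    AS
    SELECT WidgetId, Price
    FROM dbo.Widget
    WHERE Price > 100;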

That’s database CI. Now for the final lap…

Step 5: Setting up automated deployment

The really nice bit is what comes next. By adding a separate job with a build step to “Execute a Windows batch command” using the following syntax, you can create a release in Deployment Manager using your latest packages and deploy it to a LATEST environment.

While similar, the CI job and the Automated Deployment jobs serve different purposes. Remember: CI is for testing by computers; LATEST is for testing by humans and will contain your entire stack (not just the database). Also, if you automate a deployment to LATEST, promoting your entire stack to STAGING or PRODUCTION is staggeringly simple. So here’s what to do:

  1. Create a second job using the Execute a Windows batch command option in Jenkins and call it “2. Create a release in DM and deploy to LATEST”. For Source Code Management select “None”, and use this command line syntax in the build step:
    "C:Program Files (x86)Red GateDeployment ManagerToolsdeploymentmanager.exe" ^
    create-release ^
    --server=<> "--project=<>" ^
    --version=1.%BUILD_NUMBER% ^
    --deployto=<> ^
    --apiKey=<>
  2. Then set up the build trigger as follows:

    5-screen-shot-5

  3. Now set up a post-build action on your first project to tell Jenkins to run this new project:

    6-screen-shot-6

  4. Finally add an email notification as a post-build step to both jobs:

    7-screen-shot-7

And done.

It isn’t that hard and doesn’t take that long to set up in practice. Now you can catch mistakes early using Jenkins and the SQL Automation Pack, and easily deploy your code and database with Deployment Manager.

http://thefutureofdeployment.com/set-database-continuous-integration-automated-deployment-jenkins-lunch/

SSIS 2012 Continuous Integration using Jenkins and Octopus

Finally, I was able to put all the components together. As I implemented a CI and deployment system for the DB, I came to understand why it is hard to find a novice-friendly guide that explains all the basics and also provides step-by-step instructions. There are way too many options to pick and choose between. Furthermore, it’s not that simple to build one. My implementation isn’t even close to the ideal system. However, I’m sharing it in the hope that it helps those who are planning to create a DB CI system to see a working one.

Big Picture

DB CI Process

This diagram shows how I set up the deployment system for SSIS. I’m using the same structure for SSRS and the DB schema too.

Each Component

  • SCM – Visual SVN: There are many SCM (Source Control Management) tools and you may choose a different solution. I used Visual SVN for the following reasons. First, SVN has better integration with the Red Gate tools that I’m using. Second, Visual SVN is free. Having everything checked in is one of the basic starting points for CI (Continuous Integration) and deployment. I used AnkhSVN as an SVN client since it’s integrated with Visual Studio. The key benefit of this is that it allows you to perform most version control operations directly from Visual Studio.
  • CI – Jenkins: Jenkins is an open source continuous integration tool. If you are not accustomed to CI, I highly recommend reading David Atkinson’s article “Continuous integration for databases using Red Gate tools”. It will give you a good overview of how CI works and what’s involved in it.
  • Build – MSBuild (x32): As I mentioned in my previous post (Building SSIS 2012 using MSBuild), MSBuild doesn’t build SSIS by default. You need to install SQL Data Tools from the SQL2012.ISO and copy Microsoft.SqlServer.IntegrationServices.Build.dll from the CodePlex project (Microsoft SQL Server Community Samples: Integration Services).
  • Testing – none: I searched for an SSIS testing solution but wasn’t able to find a good one.
  • Packaging – NuGet: NuGet is the package manager for the Microsoft development platform, including .NET. You can think of a NuGet package (a .nupkg file) as a zip file containing the files to deploy plus a manifest file. The manifest describes the contents of the package and what needs to be done to add or remove the library. You can get more details from Damian Edwards’ presentation “NDC 2011: NuGet in a caramel coated nutshell”.
  • Artifact Repository – NuGet: I used a local NuGet server to keep packages. I’m currently reviewing Artifactory and Nexus.
  • Deployment – Octopus Deploy: Octopus is an automated release management system for .NET developers. I chose it because, first, our web developers were already using it for their deployments; second, it’s easy to run commands on a remote machine thanks to its agent; and third, it keeps releases on the target machines to make rollback easier.

CI Server

Below is the list of installed applications and files on the CI server.

  • Jenkins
  • Jenkins plug-ins
    • build-name-setter
    • Email-ext plugin
    • Extensible Choice Parameter plugin
    • Hudson PowerShell plugin
    • Jenkins Dynamic Parameter Plug-in
    • Jenkins Mailer Plugin
    • Jenkins Subversion Plug-in
    • MSBuild Plugin
    • Scriptler
    • Role-based Authorization Strategy
  • Build
    • MSBuild
    • Microsoft.SqlServer.IntegrationServices.Build.dll
    • SQL Data Tools from the SQL2012.ISO
    • SSIS.MSBuild.proj
  • Nuget Server
  • Octopus Deploy

I’m not going to cover how to install Jenkins and the other applications in this post. I may cover that in a later post, though. The main focus of this post is how to create the Jenkins and Octopus projects that deploy SSIS.

Jenkins Project

Overview

The diagram below shows how the project works.

Jenkins SSIS project

Global Setting

Scriptler

Jenkins –> Scriptler –> Add a new Script

Scriptler can be used if you want to create values dynamically or handle some complex logic.

DynamicChoice_OctopusUserKey.groovy

I need to pass a user key value to Octopus when calling its API; the API runs under the privileges of the user tied to that key. This Groovy script maps the current Jenkins user to the proper Octopus user key.

Jekins.Global.DynamicChoiceOctopusUserKey

Script

// Default choice shown when no mapping applies
def result = ["-------"]

// Get the current Jenkins authentication; bail out if it is unavailable
def auth = jenkins.model.Jenkins.instance.getAuthentication()
if (auth == null) return result

// Map the Jenkins user id to the matching Octopus user key
def userId = auth.getName()
if (userId == "admin_speaksql") return "JNN2005CGJVGTVJZR3CH1GTKZ9"
if (userId == "admin_01") return "KUE2489CGJVGTVJYE3CH1GKHZ8"

// No mapping found: fall back to the default choice
return result

Global Choice Parameter

Jenkins –> Manage Jenkins –> Configure System

deploy_target

Environment list

Jekins.Global.GlobalChoiceParameterSSIS

Extended Email Notification

Notification email format setting

Jekins.Global.Extended E-mail Notification

Project Setting

Create a new Project (Job)

You can create a project by selecting “Build a free-style software project”, or choose “Copy existing job” if you are going to create a similar job.

Jenkins –> New Job

Jekins.Project.NewJob

Configure SSIS Project

Jenkins –> Project name (DW.SSIS.Template) –> Configure

Build with Parameters

Jekins.Project.Parameter00

Project Name

I declared a Project Name parameter since the Jenkins job name is different from the SSIS project name.

Jekins.Project.Parameter01_ProjectName

Version Number

I’m currently using a static version number setting.

Jekins.Project.Parameter02_VersionNumber

Deploy Target

Deploy environment selection

Jekins.Project.Parameter03_DeployTarget

Release Note

The release note will be passed to Octopus and included in the deployment success notification email.

Jekins.Project.Parameter04_ReleaseNote

SSIS Protection Password

I use EncryptSensitiveWithPassword as the protection level of the SSIS project, so I need to provide the protection password.

Jekins.Project.Parameter05_SSISProtectionPassword

Octopus UserKey

This will pick up the Octopus UserKey using the “DynamicChoice_OctopusUserKey.groovy” script that we set up globally from the Scriptler menu.

Jekins.Project.Parameter06_OctopusUserKey

Source Code Management

Source code repository URL

Jekins.Project.Parameter07_SourceCodeManagement

Build Environment

Define build name format.

Jekins.Project.Parameter071_BuildName

Build

Copy the MSBuild script for SSIS (SSIS.MSBuild.proj)

Since the same build script is used for every SSIS project build, I keep the file in a shared folder and copy it into the working directory of the project.

Jekins.Project.Parameter08_Build01.CopySSISMSBuild.proj

You can get the SSIS.MSBuild.proj file from this link: https://gist.github.com/kulmam92/6433329

Prepare to create a NuGet package using MSBuild

You need two things to create a NuGet package using MSBuild: first, a spec file that describes the package name, dependencies, and so on; second, a build script for MSBuild. The ImportTargetsForDB.ps1 file takes care of both.

Jekins.Project.Parameter08_Build02.CopyNuspecFileImportNugetbuildscript

You can get the related scripts from this link: https://gist.github.com/kulmam92/6433645

Build using MSBuild

This will build the project and create a NuGet package from the build output.

Jekins.Project.Parameter08_Build03.MSBuild2

Command Line Arguments

/t:SSISBuild,DeployNugetForDB /p:SSISProj=%SSIS_project_name%,Configuration=Development,ProtectionLevel=EncryptSensitiveWithPassword,ProjectPassword=%SSIS_protection_password%,DeployPackageName=%JOB_NAME%

Call the Octopus Deploy API to deploy the NuGet package

Jekins.Project.Parameter08_Build05.CallOctopusAPI

c:\octopus\octo create-release --server=http://dbdeploy.XXXX.XXX:8880/api --project="%JOB_NAME%" --deployto="%deploy_target%" --waitfordeployment --apiKey=%UserKey% --releasenotes="

  • [Updates] : %release_note_update%
  • [Impact] : %release_note_impact%
  • [Jira Tasks] : %release_note_jira%

"

 

Post Build Action

Send notification Email

The success notification is sent out by Octopus Deploy; this post-build email is sent when the job fails.

Jekins.Project.Parameter09_NotificationEmail

Octopus Deploy Project

Global setting

Environment

Create the necessary environments and add servers. You can add tags to a server, and those tags are used to determine the deployment target. Since I set up a single centralized ETL DB for all environments, the same server appears in multiple environments.

Octopus.Environment

Create an SSIS Project

Variables

Below is the list of the variables that I declared on each project.

Octopus.Project.Variables

Steps

These are the actual actions that Octopus performs. The SSIS deploy project consists of three steps.

Octopus.Project.Steps

File deploy

This step grabs the NuGet package from the NuGet repository and deploys it to the target. The deploy role is defined using the tag that you chose for the server.

Octopus.SSRS.Project.Step01.FileDeploy

Octopus.SSRS.Project.Step01.FileDeploy2

PS Script to load SSIS package to SSIS catalog DB

The PowerShell script in this step deploys the SSIS project to the SSIS catalog DB. The SSIS project must use the project deployment model, since the SSIS catalog DB only accepts that model.

Octopus.Project.Step02.PSScript

powershell.exe -command e:\project_ssis\Install-ISProject.ps1 -IspacFullName "$LocalPackageDirectoryPath\$LocalProjectName.ispac" -ServerInstance "ETLDB1" -CatalogFolderName "$LocalEnvironmentName" -ISProjectName "$LocalProjectName"

You can get the PowerShell scripts from this link: https://gist.github.com/kulmam92/5939944
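
For reference, the SSIS catalog also exposes a T-SQL procedure that does the same job as the script above. A minimal sketch – the file path, folder name, and project name are invented, and the PowerShell script is what this setup actually uses:

    -- Load the compiled .ispac into a varbinary and hand it to the
    -- SSIS catalog's deployment procedure (SQL Server 2012 and later).
    DECLARE @ispac VARBINARY(MAX);

    SELECT @ispac = BulkColumn
    FROM OPENROWSET(BULK N'E:\drop\DW.SSIS.Template.ispac', SINGLE_BLOB) AS f;

    EXEC SSISDB.catalog.deploy_project
        @folder_name    = N'DEV',               -- target catalog folder
        @project_name   = N'DW.SSIS.Template',  -- SSIS project name
        @project_stream = @ispac;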

Send Email Notification

Octopus will send out a notification if all steps finish successfully.

Octopus.Project.Step03.emailNotification

Subject

[CI] #{OctopusProjectName} – #{Octopus.Release.Number} deployed to [#{Octopus.Environment.Name}]

Body

<h2>[Notes]</h2>
<p>#{Octopus.Release.Notes}</p>
<h2>[ReleaseLink]</h2>
<p>Check console output at https://dbdeploy.AAAA.CCC/#{Octopus.Web.ReleaseLink} to view the results</p>

Deployment

You are now ready to deploy a project using Jenkins and Octopus. You can start a deployment from the following location.

Jenkins –> Project name(DW.SSIS.Template) –> Build with Parameters

Below is the screenshot of the page.

Jekins.Project.BuildwithParameters

Clicking the build history will take you to the build status page.

Jenkins.BuildwithParameters.Status

If you want to check the detailed log, look at “Console Output”.

Jenkins.BuildwithParameters.ConsoleOutput

Summary

I covered how I implemented an SSIS project deployment system using Jenkins and Octopus Deploy. This is not yet a full continuous integration system, and it still has plenty of room for improvement. However, I hope it gives you a tangible idea of what a working SSIS deployment system looks like.

http://speaksql.wordpress.com/2013/09/04/ssis-2012-continuous-integration-using-jenkins-and-octopus-a-journey-to-db-deployment-automation/

I am in love with Linux. TurnkeyLinux.org

There is a reason that you are seeing this message today. Aaron is madly in love with WordPress. I want every single one of my friends and enemies lol.. to stop what you are doing.. and go set up a blog.

I did database work for the last fourteen years.  I loved working in that industry.

But today.. I see more value created in ten minutes with WordPress.. than in any of the million-dollar projects that I have worked on.

What caused me today.. to take a break from my normal programming.. and talk to you guys about WordPress.. is that I have been finding literally too many good articles NOT to share them with you all.

Being able to schedule a post.. for tomorrow or Next Tuesday.. is a fantastic feature..

I have always known that logic stored in a database.. is the most beautiful thing in the world.

I just didn’t forecast that it would come with this piece of database called mySQL.

I still can’t get this database to perform well.. when it has 100k records.

I guess that means that long term.. I should store the posts and media on a large drive.. and everything else in either RAMdisk or SSD.

I hopefully will learn soon.. how to split a mysql database onto two separate drives..  I have been doing that on most SQL Server databases for the past fifteen years.

Maybe I will get a better understanding of storage tiering in Linux sometime..

I love mySql like he was my stepbrother. It has a lot of neat stuff. But I would never choose mySql over Microsoft. I really wish that Microsoft pushed WordPress to fully support MS SQL Server..  but until then.. I’m cool.

Just don’t sit here and explain to me how caching is going to make this all better.  It is amazing that we have a whole generation of developers.. that think that databases are too slow for us to rely on.

YOUR database.. mySql, well it just plain sucks compared to Microsoft Sql Server.

That doesn’t mean that we need to spend twenty-five grand per socket.. or run away to a slow platform that only handles OK when we have a hundred nodes.

I still prefer Microsoft Sql Server Express for almost every project.  It has everything that I need to slaughter mySql.

But until you guys see the light.. I am going to focus on Hadoop and mySql and Docker and CI/CD .. and yes.  WordPress.

I just love WordPress.  I have to get into mySql and Hadoop and everything else.. because it complements WordPress better.

Microsoft.. you guys need to get Microsoft Sql Server treated as a first-rate citizen when it comes to WordPress.

Microsoft should just purchase Automattic / WordPress.. and transition them to MS SQL Server.

Only in my dreams would I get to use this awesome PHP application.. with tons of plugins and themes.  It would be ideal if WordPress wasn’t saddled with a crappy database.

Someday.  I guess I will get better at mySql tuning.   For now.. I think that Microsoft Sql Server can easily be made three or four times faster than MySql.

Go right past the Standard and Enterprise licenses..  and use the Express Edition.

I think that we can get better ROI by just skipping the licensing at twenty-five grand per socket.. I have always loved Sql Server Express instead of the paid licenses.

I have a friend in Bellevue who uses views to flatten the sharded data into a reasonable structure for querying.  This is a very effective way to get around the 10GB limit in SQL Express.  (A sketch of the trick follows.)
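
As a hedged sketch of that trick – every database and table name here is invented – the Express 10GB cap applies per database, so the data is split across several databases and a view stitches the slices back together:

    -- Each shard database stays under the Express per-database size cap;
    -- the view presents them as one logical table for querying.
    CREATE VIEW dbo.AllOrders
    AS
    SELECT * FROM Shard1.dbo.Orders
    UNION ALL
    SELECT * FROM Shard2.dbo.Orders
    UNION ALL
    SELECT * FROM Shard3.dbo.Orders;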

So we can go to about a hundred GB on a fast desktop or server.. and that meets the needs of 50% of the customers I have had in the past fifteen years. Half of my customers these past ten years could have saved considerably by embracing Sql Server Express as their platform of choice… because for most projects, 100GB of space fits almost everyone’s needs.

Aaron