Friday, August 29, 2008

Does Velocity change when we look at software from a Lean perspective?

  Most Agile methodologies I have tried or read about include the concept of Velocity.  Velocity is essentially a historical measurement of the rate at which a software team produces value.  I am currently reading Agile Management for Software Engineering by David Anderson.  I believe the term he uses to describe the philosophy of using velocity to predict future output is Empirical Process Control.  I haven't finished the book yet, but as I thought about empirical process control and velocity in the context of Lean and the Theory of Constraints, it struck me that I have been looking at velocity all wrong.

  The teams I have been a part of have generally looked at velocity in one of two ways: either it was the measure of how many work items went from Resolved to Closed (meaning that QA successfully completed testing them) or of how many went from Active to Resolved (meaning that Dev finished coding and unit testing and the feature was ready to be run through QA).  Reexamining that strategy in the light of Lean and the Theory of Constraints, we actually should have been measuring velocity for both of those functions (QA and Dev).  The Theory of Constraints is all about identifying the bottleneck and managing your input into the system so that you 1. feed the constraint as effectively as possible (don't let it run dry) and 2. manage inventory so you don't overload the pipeline.

  Before diving into how and why you would measure velocity at different points, a little aside on the dangers of excess inventory is in order.  Say, for example, Developer Tom finishes Story A and marks it as Resolved so that it moves into Tester Tim's queue.  Tim is currently the constraint in the system, so work is piling up in front of him and he isn't able to QA Tom's work for a week.  Tom of course doesn't sit around for that week; he moves on to a new story.  A week later Tim discovers that something isn't working right with the feature Tom completed and reactivates it.  Several bad things happen as a result of the delay: the work Tom did on Story A is no longer fresh in his mind, so he has to spend time spinning up on it again.  The work he started on the new story gets interrupted, and he will have to take time to reacquaint himself with it when he finishes the rework on Story A.  Additionally, we measured velocity by how much the Devs marked as Resolved, so the previous week we reported X story points completed when, due to the rework on Story A, it was really X minus the points for Story A.  We have a mess on our hands.
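To make the overcount concrete, here is a minimal sketch; the story names, point values, and statuses are hypothetical, and the status model is simplified to the Active/Resolved/Reactivated/Closed flow described above:

```python
# Minimal sketch of the velocity overcount described above.
# Story names, point values, and statuses are hypothetical.

stories = [
    {"name": "Story A", "points": 5, "status": "Reactivated"},  # QA found a problem
    {"name": "Story B", "points": 3, "status": "Closed"},       # made it through QA
]

# Velocity as we reported it: everything Dev had marked Resolved (or beyond).
reported = sum(s["points"] for s in stories if s["status"] != "Active")

# Velocity as it actually stood: only stories QA closed.
actual = sum(s["points"] for s in stories if s["status"] == "Closed")

print(reported, actual)  # reported overstates actual by Story A's 5 points
```

Counting at the Resolved transition credits Story A's points a week before they are truly earned, which is exactly the reporting mess in the example.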

  Now, you can't eliminate rework from the system no matter how hard you try.  QA will identify issues with the code Dev delivers, Devs will find issues with the story details delivered to them by the Analysts or Customers, and so on.  But by reducing the excess inventory in the system and managing to the constraint, you can reduce the cost of the context switches that rework incurs.  In our situation Tim is the bottleneck and we shouldn't keep piling up inventory on his doorstep.  We need to slow the cadence of the development line to match his production.  With the excess time in the other queues, you look at how to leverage it in the meantime: perhaps developers use it for training, or if there is enough of it, you leverage them on a different project.

  In order to manage all this we need to measure the velocity at each queue in the process (a queue being Dev, Test, Requirements Generation, etc.).  When we do that we can look at the software process, identify the constraint, and then work the constraint to improve it, if improving it makes business sense.  If, for example, development is the constraint, at some point you hit the Law of Diminishing Returns, where adding more devs or improving their productivity costs more than the resultant throughput improvement is worth.  Measuring throughput at the different queues is critical if you want to be able to accurately assess the health of your development efforts.
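Identifying the constraint from per-queue measurements can be sketched in a few lines; the queue names and weekly numbers here are made up for illustration, and the constraint is simply the queue completing the fewest story points:

```python
# Hypothetical per-queue throughput for one week (story points completed).
# Queue names and numbers are illustrative only, not from any real tracker.
weekly_points_completed = {
    "Requirements": 30,
    "Dev": 22,
    "Test": 12,  # work piles up in front of Test
}

# The constraint is the queue with the lowest throughput.
constraint = min(weekly_points_completed, key=weekly_points_completed.get)
print(constraint)  # the queue the rest of the line should be paced to
```

Real systems would smooth this over several weeks and account for rework flowing backward, but even this crude comparison shows where the line's cadence should be set.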

  I think of Agile Management as Active Management, or perhaps even better, Proactive Management.  An Agile project manager is actively engaged in analyzing the system and determining where improvement is needed.  In so doing he is much more effective than the project manager who runs around checking with everyone whether they are still on track with what the Gantt chart says.

  Metrics are certainly an important part of the Agile Management process.  The metrics Agile/Lean considers important are different from the ones historically used by traditional methodologies, but they are still critical to guiding projects successfully.

 

My System Build Software Specs

Base Build

  • OS (Vista preferred, but XP SP2 otherwise)
  • IE8
  • Firefox 3
  • Google Reader Note Bookmarklet
  • Live Mesh
  • Office 2007 SP1
  • Zoomit
  • VPN
  • Virtual PC
  • iTunes (do I have to??)
  • Windows Live Suite
  • Twhirl (everybody says use TweetDeck, but I persist, frankly, because I don't need anything more than Twhirl)
  • Verizon Access Manager

Visual Studio

Oracle

Thursday, August 21, 2008

Oracle's stance on .NET?

In a word: puzzling.  Over the years they have historically been late delivering support in their ADO.NET provider for new Oracle features.  That said, ODP.NET was generally the better option than the Microsoft provider for accessing Oracle's more extended features.  They were there giving .NET developers something.  Oracle Developer Tools for .NET (ODT) came later as a Visual Studio plug-in to improve the development experience, and it was a great move by Oracle at the time.  The integrated debugging (from .NET code into PL/SQL) was nice, although honestly the tooling had enough usability issues and missing functionality that I had to go back to PL/SQL Developer to be productive, even though I didn't want to switch between tools.

  With the rise of LINQ, the Entity Framework, and Visual Studio Team System Database Edition (VSTSDB), there is a lot of activity happening in the .NET database world.  The Database Edition made news in June when it was announced that it would expose a provider model of sorts so that other database technologies could plug in and use the tooling.  Deployment, builds, code analysis, and unit testing are areas of database development that are extremely immature compared to what is available in the Java and .NET worlds, and Data Dude, as VSTSDB was called, has given database developers tools to start closing the gap in those areas.  Oracle support for VSTSDB would of course obsolete their investment in ODT to some degree, and unfortunately I would be surprised to see them do that.  I would personally prefer to see VSTSDB extended to support Oracle rather than have Oracle build a different development experience within VS.

  In the VSTSDB vs. ODT argument, Oracle is at least doing something with ODT to support .NET developers.  The lack of information and commitment from Oracle in regard to LINQ and the Entity Framework is the most puzzling piece.  LINQ and the Entity Framework are at the forefront of many discussions these days, and many database vendors are supporting them (including IBM, who, along with their move to support VSTSDB, seem to be positioning DB2 as a .NET development option much more than they have in the past, or at least much more than they have seemed to).  If Oracle's position is that they will let third parties provide this capability (and several are doing just that), then fine, but at least communicate it.  There is a hesitancy by some to go the third-party route (that is certainly the case at my company) and a preference for whatever Oracle provides, if they were to provide something.  If Oracle says they aren't going to do it, people can plan accordingly; if they say they will but the timeline is lengthy, dev teams can still plan accordingly.  In the absence of information, developers wonder about Oracle's commitment to .NET development, and in a database world where the large majority of apps could run just fine on any number of database platforms, losing the hearts and minds of developers and architects doesn't seem to be a wise thing to do.

Tuesday, August 12, 2008

Agile - Micro-Management in a different package

I just finished reading Agile Project Management with Scrum by Ken Schwaber.  As I was reading it (and enjoying it - I find a lot to like in Scrum) I was struck at how easily Agile methodologies like Scrum or XP could fall into the trap of Micro-Management. 

Wiktionary defines micromanagement as

The direct management of a project etc to an excessive degree, with too much attention to detail and insufficient delegation

http://en.wiktionary.org/wiki/micromanagement

Freedictionary.com defines it as

To direct or control in a detailed, often meddlesome manner.

 http://www.thefreedictionary.com/micromanage

Anybody been in a stand-up that felt like that?  Now that Agile has "crossed the chasm" and is the popular way to improve software development, it is likely that we will see micro-management deployed in the disguise of Agile more often.

David Laribee in a recent blog post characterized Agile differently than you usually hear it: he spoke of Agile as a way you do business, disconnected from the many software practices that often come to mind when Agile is referenced (TDD, stories, XP, continuous integration, stand-ups, pair programming, etc.).  All too often the software development community looks at the engineering practices that need to be implemented to be "Agile" and fails to appreciate the management practices that have to take hold.  Management practices are more than stories and iterations.  They are more than just involving customers in sprint planning meetings.  They affect how you approach decision making, prioritization, and organizational strategy.  That is one of the reasons I am reading Agile Management for Software Engineering: to get a better feel for the management mindset that needs to change to support Agile and what that different mindset looks like.  Because if Agile's long-term answer to questions of how to manage Agilely is to point people to the XP books, we've got problems.

Agile is no silver bullet.  It takes a lot of commitment for an organization to be Agile, and characterizing your organization as "Agile" before it truly is one is detrimental.  In some cases misapplication leads to micro-management, and in other cases it leads to the absence of project management.


Intel says "Skip Vista"

  Ed Bott covered this weeks ago when the news was made public that Intel was planning on skipping Vista, and he provides good insight into the history around it and why it isn't terribly significant.  As a former Intel employee, the move doesn't surprise me.  It has been a little under a year since I left the company, and they were already setting up that decision.  A lot of the engineering effort to roll it out had been slowed way down, if not iced completely.  There are a lot of things I loved about Intel, but this Vista decision highlights one that always frustrated me.

  Intel seems to promote a "do as I say, not as I do" policy.  They tried mightily to get Microsoft to certify certain platforms as Vista Capable to promote hardware sales, even when the hardware specs made the Vista experience sub-optimal (there is a plethora of articles on this).  They also obviously promote buying newer computers for the latest processing power at a time when the average email-reading, Internet-surfing consumer has an average CPU usage below 10% (a personal guess that I believe to be in the ballpark; if anyone knows a source for data like this, I would be interested in it).  They want Vista in the consumer market because of the extra CPU requirements that features like the Aero interface bring, yet internally they don't drink their own Kool-Aid.

  Intel is a company, and fundamentally it is about making money; I am sure they looked at the business bottom line and made this decision.  But it is nice to see companies align their public message with their internal practice, and in this case I think there is a mismatch between the message Intel sells and the one they live.  There is basically one option for a laptop through Intel's IT department (I don't count the ultra-thin as an option for most, but technically there was the ultra-thin and then the laptop for everyone else).  No Core 2 Duo options, 2 GB of RAM, a slow HDD, a small screen.  You can read in many places on the web what an appropriate developer machine is (like on Coding Horror), but those specs don't come close.  In my new job I was immediately handed a Core 2 Duo machine with 3.5 GB of RAM, a good-sized laptop screen, plus a large LCD.

  I always got the feeling that even though Intel sells IT as an investment that will bring your company good ROI, internally they see it simply as a cost center.  My manager at Intel realized the insufficiency of the machines we were given and allowed budget for us to go out and buy machines we could really use.  So for all of Intel's attempts to be cost conscious by providing a very limited set of IT-supported hardware, all they really did was promote more expense by forcing people to pay for an IT machine plus another machine that would actually do what they needed.

 


What happens between Sprints

  I have been working with Agile methodologies for a couple of years now.  I have had a fair amount of practice on real projects and have read a lot on the subject.  Lately Scrum has been at the top of my list; I like some of the added direction for product/project management that I felt was missing in eXtreme Programming.  One thing I haven't yet figured out is the cadence or flow between iterations (or Sprints, to use Scrum terminology).  You have a Sprint Planning Meeting, then the Sprint, then a Sprint Review, and then a Sprint Retrospective.  After that, is it a simple lather, rinse, repeat process?  Is there some dead time?  What happens?  I am curious about others' strategies in this space.

  In the past we have kept the flow going between iterations (one iteration right after another), and after the release there was down time for a couple of weeks before the iteration drumbeat started again.  Iterations tended to be high-energy, high-focus times, and while that made us very effective, maintaining it could lead to burnout.  The alternative, I suppose, is to have lower-energy iterations, but to me that would start to drain some of the effectiveness of Agile.  With Scrum it is a little different; perhaps after every Sprint there is a week of down time before the next Sprint starts.

  With high-energy iterations we try to minimize context switching by maximizing focus.  I like Scrum's thinking in this space, with its three-week-to-month-long Sprint, versus XP's iteration concepts.  You could do XP iterations of the same length, but I like the Scrum philosophy about how customers and dev teams interact during that time better.  At least in my interpretation, Scrum frames the discussion with the customer about the cost of changing their mind better.  Sometimes I feel like the XP material wavers on that: there is a cost to changing your mind that customers need to know about, whether that cost is due to rework, dev-team context switching, or something else.

  Now, to close, let me say that by down time I don't mean slacking off.  By minimizing context switching and maximizing focus, sometimes organizational things and even some project things get pushed to the side.  The down time is a good time to clean up those loose ends, in my opinion.  By loose ends I don't mean tying up last-minute bugs or finishing testing, though!