Tuesday, November 4, 2008

Requirements Definition: The Danger of Failing Before You Have Really Started

  Requirements, Stories, Use Cases, or whatever a team wants to call them form a key part of a development team's execution plan.  They should tell the team what the customer wants.  These artifacts, in whatever form they take, tend to have varying degrees of detail, and getting the right amount of detail at the right time is critical to producing successful software.  My thoughts on requirements definition center on a process with three key elements.

  1. Development
  2. Prioritization
  3. Definition

  Development comes first because you have to develop or brainstorm something to start with.  Initially this is generally tied to a vision of what the software is supposed to do.  As a product matures, this phase happens through user trials, surveys, feedback, and telemetry from the app that describes its usage and provides insight into where the app needs to evolve.  It also happens as analysts evaluate the market the product fits in and, based on the market's evolution (or assumptions about where it will evolve), determine what features need to be added for the software to remain competitive in the marketplace.  With internal software this phase is often underappreciated and underutilized, which is unfortunate because mistakes here, whether in internal or commercial products, can cause teams to miss the target market tremendously.  A one-degree error early in a flight plan causes a much greater deviation than the same error made relatively close to the target.  The Agilist in me openly admits that it is impossible to know everything up front.  The development effort isn't about deep detail, but about the broad strokes of strategy that guide the more detailed planning that occurs later.

  Prioritization comes second: the ideas developed get prioritized, which pares down the list of items that need to be defined in detail.  I like Scrum's backlog analogy, but I think the Product and Sprint backlogs might not be enough.  To use a baseball analogy, you have an At-Bat backlog, an On-Deck backlog, and an In-the-Hole backlog.  At Bat would be your Sprint Backlog, representing what is in play now.  On Deck represents perhaps a release backlog or something of that sort – items that are going to be in play and will need to be defined fairly soon.  In fact, during a Sprint it wouldn't be uncommon for the PM, Business Analyst, etc. to be very active in defining the items near the top of the On-Deck list.  The On-Deck list becomes the focal point of the prioritization process.  The Sprint is committed work, and while some teams might want a relative priority there, I don't have a firm opinion on it.  The On-Deck list is key – what will the team work on next is the question the prioritization process needs to answer.  Once we have that prioritization we can move forward with the third step in the process – Definition.

  Definition in this context means providing the detail necessary to move a concept forward in the planning process and ultimately into implementation.  Steps 2 and 3 end up being a rinse-and-repeat type of deal: items are prioritized onto a list and then need to be defined to some degree.  For example, when an item is prioritized onto the On-Deck list, enough definition needs to happen for estimates to be provided of how long the items on that list will take to complete (thinking of the On-Deck list as representing the functionality in a release).  As items move up the On-Deck list, more definition is added so that the customer's needs can be clearly identified and the developer has as much detail as possible in hand when the item pops onto the At-Bat backlog.  It is here that we get to the crux of the matter that prompted this post.  I have seen two common behaviors that cause teams to fail at requirements definition.  The first is that requirements are defined too early in too much detail and not revisited effectively; as time goes on, customer needs shift or change, and the requirement as it was written months ago no longer accurately reflects them.  The second behavior is too little detail.  A story is defined with a title (which I think the whole story card/post-it thing encourages) and little else, and exists like that all the way to the developer.  He has one thing in mind for what the title means, the customer has another, and program management has yet another.  Agile methodologies advocate that definition happen through consistent customer interaction, and when that happens it can work.  All too often in practice it doesn't.

The takeaway is don't undervalue the requirements process; it is perhaps the most difficult thing to get right in the whole software process.  Too much detail too soon, or too little detail too late – what is too soon, what is too late, and how much is too much or too little are not easy questions – but the process above has worked well for me in addressing the challenges of getting requirements right.

Friday, October 10, 2008

Blu-ray is in trouble

This news article reflects my sentiments.  Blu-ray may have won the format war, but it has already lost the overall battle.  Streaming video into your home is already a reality.  You can do it with Netflix (their partnership with Starz makes their watch-now catalog much more interesting), Movielink (partnered with Blockbuster), Amazon, and Apple.  Netflix launched their set-top box this year, and Apple has had the Apple TV out for quite a while now.  You also have Media Center Extenders that take XP Media Center and Vista Media Center and let you watch movies from Amazon, Netflix, and Movielink – including renting and downloading – right from your TV.

People are ripping their movies from DVD onto hard drives at an increasing rate as well (I can't count how many Disney DVDs we have had to replace because of scratching).  These HDD-based movies are available through some of the avenues discussed above (notably Media Center) and can be viewed without ever putting a disc in anything.  I see this mode increasing, especially when you add in the home movies, photos, and music collections that people are accumulating.  As an aside, the increasing amount of digital storage the average user has puts increased emphasis on having a solid backup strategy.  Even with the likes of Time Machine, Windows Home Server, Mozy and other online services, and even the improved backup capabilities in Vista, there is a lot of room for improvement here in the consumer market.  In the end Blu-ray will undoubtedly stay around for a few more years until the streaming options include HD quality, but its demise is already fixed.  Let the streaming wars begin!

Wednesday, October 1, 2008

How many choices are too many choices?

Americans are constantly faced with a myriad of choices.  When we go out to eat we have to pick between Italian, American, Chinese, Greek, Thai, Mexican, Indian, etc.  Once you have picked a type, you have to choose between Applebee's, Chili's, Outback, etc.  The same thing happens on trips to the grocery store, when picking a dentist, or when deciding what movie to see.  Many times we sit and spin on a choice that likely doesn't truly matter that much.

In the technology world this problem continually manifests itself.  Most techies have probably seen projects where the project deadline arrived before all the architectural choices were made because too much time was spent weighing the pros and cons of the variety of options available.  That isn't to say decisions shouldn't be weighed and measured, but that this activity should be time-boxed, and at the end of the time the decision made based on the data available.  One common piece of evidence I cite for this is the variety of technologies used by the big sites on the Web.  Facebook uses PHP; MySpace and Microsoft use ASP.NET; Google uses Java (I believe), as do many, many others.  The same argument could be made about what server OS to use, what database to use, and so on.  The fact of the matter is that with good people most technologies can be made to meet the need.  The argument many people will try to make is the productivity improvement that technology X will bring.  The problem with that argument is that productivity is very difficult to measure (many smart people have tried, and I have yet to see anyone trumpet a truly successful way to measure it), so the argument is easy to make but very, very difficult to prove correct (and often not worth the cost of doing so).

The danger comes when the new technology of the day causes continual churn in an organization.  The seduction of always looking for best of breed (assuming for a moment that there were some way to truly determine best of breed) sets you up for a technology merry-go-round.  Invest in your technology selections, build expertise, and go deliver value.  Choose to get off the merry-go-round and make a commitment.  Change will of course come over time, but when it does it should be obvious and done for obvious reasons.  In most cases change should be made because it will be a game changer, either in dollars saved or dollars earned, or because it provides obvious (emphasis on obvious) productivity gains.

Tuesday, September 16, 2008

Programmatically setting the version of the Enterprise Library Configuration Tool for Visual Studio

I previously wrote about the issue we ran into using Enterprise Library and Unity that forced us to roll our own version of the binaries.  The procedure to get the config tool to reference our custom binaries involves changing the EnterpriseLibraryConfigurationSet solution property.  Since we are developing a Starter Kit to be used for new projects starting up here at the Church, I wanted to add setting this property to the appropriate value to our Starter Kit automation.  Try as I might, I couldn't get it to work until I accessed the solution property as EntepriseLibraryConfigurationSetPropertyExtender.EnterpriseLibraryConfigurationSet, which is how it was listed when I enumerated the property collection.

So the one-line piece of code to do the job is:

Dte.Solution.Properties.Item("EntepriseLibraryConfigurationSetPropertyExtender.EnterpriseLibraryConfigurationSet").Value = "StackV1EntLib";
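
In case it helps anyone else track down the extender-qualified name on their machine, here is roughly how the enumeration looked – a hedged sketch against the DTE automation model, not the exact code from our Starter Kit:

// Sketch: dump every solution property so the extender-qualified name can be spotted.
// Assumes "Dte" is the same EnvDTE80.DTE2 instance used in the one-liner above.
foreach (EnvDTE.Property prop in Dte.Solution.Properties)
{
    try
    {
        System.Diagnostics.Debug.WriteLine(prop.Name + " = " + prop.Value);
    }
    catch (System.Runtime.InteropServices.COMException)
    {
        // Some properties throw when read; skip them and keep enumerating.
    }
}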




Of course that needs to be accompanied by an entry in the registry for the StackV1EntLib string to point to, which is listed below:


[HKEY_CURRENT_USER\Software\Microsoft\Practices\EnterpriseLibraryV4\ConfigurationEditor\StackV1EntLib]
"ConfigurationUIAdapterClass"="Microsoft.Practices.EnterpriseLibrary.Configuration.Design.UI.SingleHierarchyConfigurationUIHostAdapter"
"ConfigurationUIAssemblyPath"="C:\\Program Files\\MSStack\\V1\\StackEntLib\\Microsoft.Practices.EnterpriseLibrary.Configuration.Design.UI.dll"
"ConfigurationUIPluginDirectory"="C:\\Program Files\\MSStack\\V1\\StackEntLib\\"



 




Programmatically setting multiple startup projects on a Visual Studio solution

  A month or so ago I was waist-deep in Visual Studio automation code trying to figure out how to create a solution programmatically with multiple startup projects.  I searched and searched on the Internet, but could never find the answer.  I knew that the code I was writing was very, very close, but it wasn't working.  At the time I had to step away and work on other things that were more important, but today I was in and around that code and tried again.  I found the answer on good old Google in less than 10 minutes - http://www.dotnetmonster.com/Uwe/Forum.aspx/vs-ext/1609/Editing-DTE2-Solution-SolutionBuild-StartupProjects .  It is as simple as setting the StartupProjects property of the SolutionBuild object to an array of objects populated with the unique names of projects in the solution.  I had been trying to set it to a string array of the same thing!  Talk about close.  Either way, job accomplished.  Our VS automation is now just a little more polished as a result.
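
For reference, the working call ends up looking something like this – a sketch against the DTE automation model, with hypothetical project names standing in for our real ones:

// Sketch: set multiple startup projects on the open solution.
// Assumes "dte" is an EnvDTE80.DTE2 instance; the unique names below are placeholders.
EnvDTE.SolutionBuild build = dte.Solution.SolutionBuild;
build.StartupProjects = new object[]
{
    @"Server\Server.csproj",   // use Project.UniqueName values, not display names
    @"Client\Client.csproj"
};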


Friday, September 5, 2008

That darn backlog there is so much stuff in it!

  I used to be a believer that when a good idea came around you should throw it on the backlog.  Those ideas (or bugs) live on the backlog indefinitely, waiting for their chance to "dance".  As I re-evaluate my thinking from a Lean perspective I am reconsidering that approach.  In Lean, maintaining large amounts of inventory (queued inventory) is frowned upon.  Without going into all the gory detail of Lean (if you are an Agilist, want to be an Agilist, or develop software, go read about Lean and think about it in terms of your software development process - it will be good for you), large amounts of inventory lead to "inventory rot".  In the software world this amounts to requirements that were scoped two years ago by a Business Analyst who is no longer around, or a bug reported by a tester two releases ago.  The relevance of the information in those backlog items has dropped considerably during the time they were idle.  As time goes on and it becomes obvious that prioritizing them won't happen - close them.  Dump them.  If they are important they will show up again.

  In addition to trimming the backlog appropriately, it may make sense to have tiered backlogs.  There will be a Sprint backlog (or Iteration Plan) that represents the work in flight.  It is useful to have an "on deck" backlog that represents the work coming up in the next six months or so.  The "long range" backlog represents those ideas and concepts that are more "out there" and are likely fairly vague, large, and hard to estimate.  Each backlog requires a different type of care and feeding and involves a different set of people in that process.  The Sprint backlog is the focus of the team, talked about in stand-up as people report progress - the tactical focus.  The On-Deck backlog should be actively worked by the Business Analyst, Customers, Technical Lead/Architect, and Project Manager, perhaps in a weekly, bi-weekly, or monthly meeting.  The Long Range backlog is likely discussed monthly or quarterly as needed to identify items ready to move on deck and be fleshed out, identify new directions that need to be captured, and retire obsolete ideas that are no longer applicable (don't forget this one!).

  With bugs I subscribe to the Broken Window theory.  The more you have the less likely you are to care.  You have to be careful with that of course - take it too far and you end up with a product with no bugs, but no features either because you have spent all your time fixing bugs that weren't important.  Once a bug has been identified as costing more to fix than it is worth you start to have a case for closing it and dropping it off the bug backlog.

  The goal is to make sure your focus is where it needs to be.  Your Sprint Backlog efforts should be focused on execution - moving those items through the development process as effectively as you can.  On-Deck Backlog efforts should be focused on locking down the requirements for the items so that they are ready to be executed on (a blog post on how critical it is to get requirements definition right is in the works).  Long Range backlog efforts should be focused on making sure you have outlined the key strategic elements for the future and on analyzing them in the context of the market you serve to see which value propositions make the most sense to pursue.

Using Resources with WPF and Winforms

I had a developer ask me how to use Resources the other day.  Honestly, I had never used them in any production system, so I didn't know.  So I decided to find out.  Below is the code showing how to do it in WPF, and after that how you would do it in Winforms.

WPF

I borrowed a lot of this example from http://mostlytech.blogspot.com/2007/09/enumerating-xaml-baml-files-in-assembly.html.  The article I link to shows how to iterate through resource files in your solution that have their Build Action set to Resource.  I then use that to put an image on a button that alternates every time it is clicked.  The thread at http://forums.msdn.microsoft.com/en-US/wpf/thread/1bb025e8-a20a-43c4-a760-8666c63ff624/ explains how to work with Resources that you define on the Resources tab in the Project Properties.  You can also set an image directly in XAML, as shown in the markup after the code below.

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Resources;
using System.Windows;
using System.Windows.Media.Imaging;

namespace WpfApplication1
{
    /// <summary>
    /// Interaction logic for Window1.xaml
    /// </summary>
    public partial class Window1 : Window
    {
        private int ButtonClicks = 0;
        List<object> EmbeddedResources = new List<object>();

        public Window1()
        {
            InitializeComponent();

            // Items with Build Action = Resource end up in the "<assembly>.g.resources" stream.
            Assembly asm = Assembly.GetExecutingAssembly();
            Stream stream = asm.GetManifestResourceStream(asm.GetName().Name + ".g.resources");

            using (ResourceReader reader = new ResourceReader(stream))
            {
                foreach (DictionaryEntry entry in reader)
                {
                    if (entry.Key.ToString().Contains(".jpg"))
                        EmbeddedResources.Add(entry.Key);
                }
            }
        }

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            // Alternate between the two discovered images on each click.
            ButtonClicks++;
            ButtonImage.Source = new BitmapImage(new Uri(EmbeddedResources[ButtonClicks % 2].ToString(), UriKind.Relative));
        }
    }
}




<Window x:Class="WpfApplication1.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
        <Button Click="Button_Click">
            <Image Name="ButtonImage" Source="Images/IMG_7680.jpg" />
        </Button>
    </Grid>
</Window>
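
As a side note, the relative Source above resolves against the assembly's resources; the same image can also be loaded in code with an explicit pack URI.  A small sketch, assuming the same Images/IMG_7680.jpg resource:

// Sketch: load the same resource image via an absolute pack URI instead of a relative one.
ButtonImage.Source = new BitmapImage(new Uri("pack://application:,,,/Images/IMG_7680.jpg", UriKind.Absolute));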


Winforms



I didn't spend as much time looking at the Winforms side of things, but here is a snippet of code that can be used.  Note that I set the Build Action for the Image file to EmbeddedResource because that is what all the examples said to do.  Bitmap has an overload that allows it to resolve Resource References.  The name of the resource becomes <Namespace>.<Path>.<Filename>.



pictureBox1.Image = new Bitmap(typeof(Form1), "Images.IMG_7680.jpg");
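
Alternatively, if the image is added on the Resources tab of the project properties, Visual Studio generates a strongly typed wrapper class, which avoids the magic string entirely.  A sketch, assuming the resource was named IMG_7680 in the designer:

// Sketch: use the designer-generated Properties.Resources class instead of a raw resource name.
// Assumes an image resource named "IMG_7680" was added on the project's Resources tab.
pictureBox1.Image = Properties.Resources.IMG_7680;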





Friday, August 29, 2008

Does Velocity change when we look at software from a Lean perspective?

  Most Agile methodologies I have tried, read about, etc... include the concept of Velocity.  Velocity essentially is a historical measurement of the rate at which a software team produces value.  I am currently reading Agile Management for Software Engineering by David Anderson.  I believe the term he uses to describe the philosophy of using velocity to predict future output is Empirical Process Control.  I haven't finished the book yet, but as I was thinking about empirical process control and velocity in the context of Lean and the Theory of Constraints I had the thought that I have been looking at velocity all wrong.

  The teams I have been a part of have generally looked at velocity in one of two ways: either it was the measure of how many work items went from Resolved to Closed (meaning QA successfully completed testing them) or how many went from Active to Resolved (meaning Dev finished coding and unit testing and the feature was ready to be run through QA).  Reexamining that strategy in light of Lean and the Theory of Constraints, we actually should have been measuring velocity for both of those functions (QA and Dev).  The Theory of Constraints is all about identifying the bottleneck and managing your input into the system to make sure you (1) feed the constraint as effectively as possible (don't let it run dry) and (2) manage inventory so you don't overload the pipeline with excess inventory.

  Before diving into how and why you would measure velocity at different points, perhaps a little aside on the dangers of excess inventory is in order.  Say for example Developer Tom finishes Story A and marks it as Resolved so that it moves into Tester Tim's queue for QA.  Tim is currently the constraint in the system, so work is piling up in front of him, and he isn't able to QA Tom's work for a week.  Tom of course doesn't sit around for that week; he moves on to a new story.  A week later Tim discovers that something isn't working right with the feature Tom completed and reactivates it.  Now several bad things happen as a result of the delay: the work Tom did on Story A is no longer fresh in his mind, so he has to spend time spinning up on it again.  The work he started on the new story gets interrupted, and he will have to take some time to reacquaint himself with it when he finishes the rework on Story A.  Additionally, we measured velocity by how much the Devs marked as Resolved, so the previous week we reported X story points completed when, due to the rework on Story A, it was really only X minus the story points for A.  We have a mess on our hands.

  Now you can't eliminate rework in the system no matter how hard you try.  QA will identify issues with the code Dev delivers, and Devs will find issues with the story details delivered to them by the Analysts or Customers, etc.  But by reducing the excess inventory in the system and managing to the constraint, you can reduce the cost of the context switch that rework incurs.  In our situation Tim is the bottleneck, and we shouldn't keep piling up inventory on his doorstep.  We need to slow the cadence of the development line to match his production.  With the excess time that frees up in other queues, you look at how to leverage it in the meantime - perhaps developers use it for training, or if there is enough of it, you leverage them on a different project.

   In order to manage all this we need to measure the velocity at each queue in the process (a queue being Dev, Test, Requirements Generation and Development, etc.).  When we do that we can look at the software process, identify the constraint, and then work the constraint to improve it, if improving it makes business sense.  If, for example, development is the constraint, at some point you hit the law of diminishing returns, where adding more devs or improving their productivity costs more than the resulting throughput improvement.  Measuring throughput at the different queue levels is critical if you want to be able to accurately assess the health of your development efforts.
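
  To make that a bit more concrete, the measurement itself doesn't have to be fancy.  Below is a rough sketch of per-queue throughput; the WorkItemTransition type and its fields are invented for illustration and would map onto whatever your work-item tracking tool actually records.

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: throughput (story points per week) measured separately for each queue.
// WorkItemTransition and its fields are hypothetical stand-ins for your tracking data.
class WorkItemTransition
{
    public string Queue;          // e.g. "Requirements", "Dev", "Test"
    public DateTime CompletedOn;  // when the item left this queue
    public int StoryPoints;
}

static class QueueVelocity
{
    public static Dictionary<string, double> PointsPerWeek(
        IEnumerable<WorkItemTransition> transitions, DateTime start, DateTime end)
    {
        double weeks = Math.Max(1.0, (end - start).TotalDays / 7.0);
        return transitions
            .Where(t => t.CompletedOn >= start && t.CompletedOn < end)
            .GroupBy(t => t.Queue)
            .ToDictionary(g => g.Key, g => g.Sum(t => t.StoryPoints) / weeks);
    }
}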

  I think of Agile Management as Active Management, or perhaps even better, Proactive Management.  An Agile project manager is actively engaged in analyzing the system and determining where improvement is needed.  In so doing he is much more effective than the project manager who runs around checking whether everyone is still on track with what the Gantt chart says.

  Metrics are certainly an important part of the Agile Management process.  The metrics considered important by Agile/Lean are certainly different from the ones historically used by traditional methodologies, but they are still critical to guiding projects successfully.

 

My System Build Software Specs

Base Build

  • OS (Vista preferred, but XP SP2 otherwise)
  • IE8
  • Firefox 3
  • Google Reader Note Bookmarklet
  • Live Mesh
  • Office 2007 SP1
  • Zoomit
  • VPN
  • Virtual PC
  • iTunes (do I have to??)
  • Windows Live Suite
  • Twhirl (everybody says use TweetDeck, but I persist, frankly, because I don't need anything more than Twhirl)
  • Verizon Access Manager

Visual Studio

Oracle

Thursday, August 21, 2008

Oracle's stance on .NET?

In a word: puzzling.  Over the years they have historically been late delivering support in their ADO.NET provider for new Oracle features.  That said, ODP.NET was generally the better option than the Microsoft-based provider for accessing Oracle's more extended features.  They were there giving .NET developers something.  Oracle Developer Tools for .NET (ODT) came later as a plug-in to Visual Studio to improve the development experience, and it was a great move by Oracle at the time.  The integrated debugging (from .NET code into PL/SQL) was nice, although honestly the tooling had enough usability issues and missing functionality that I had to go back to PL/SQL Developer to be productive, even though I didn't want to switch between tools.

  With the rise of LINQ, Entity Framework, and Visual Studio Team System Database Edition (VSTSDB) there is a lot of activity happening in the .NET database world.  The Database Edition made news in June when it was announced that it would expose a provider model of sorts to let other database technologies plug in and use the tooling.  Deployment, builds, code analysis, and unit testing are areas of database development that are extremely immature compared to what we see available in the Java and .NET worlds.  Data Dude, as VSTSDB was called, has given database developers tools to start closing the gap in those areas.  Oracle support for VSTSDB would of course obsolete their ODT investment to some degree, and unfortunately I would be surprised to see them do it.  I would personally prefer to see VSTSDB extended to support Oracle rather than have Oracle build a different development experience within VS.

  With the VSTSDB vs. ODT argument, Oracle is at least doing something with ODT to support .NET developers.  The lack of information and commitment from Oracle in regards to LINQ and the Entity Framework is the most puzzling piece.  LINQ and Entity Framework are at the forefront of many discussions these days.  Many database vendors are supporting them (including IBM, who, along with their move to support VSTSDB, seem to be positioning DB2 as a .NET development option much more than they have in the past - or at least much more than they have seemed to).  If Oracle's position is that they will let third parties provide this capability (of which there are several doing just that), then fine, but at least communicate.  There is a hesitancy by some to go the third-party route (that is certainly the case at my company) and a preference for whatever Oracle provides, if they were to provide something.  Oracle saying they aren't going to do it would allow people to plan accordingly; saying they will but the timeline is lengthy would still allow dev teams to plan accordingly.  In the absence of information, developers wonder about Oracle's commitment to .NET development, and in a world where the large majority of apps could run just fine on any number of database platforms, losing the hearts and minds of developers and architects doesn't seem to be a wise thing to do.

Tuesday, August 12, 2008

Agile - Micro-Management in a different package

I just finished reading Agile Project Management with Scrum by Ken Schwaber.  As I was reading it (and enjoying it - I find a lot to like in Scrum) I was struck by how easily Agile methodologies like Scrum or XP could fall into the trap of micro-management.

Wiktionary defines micromanagement as

The direct management of a project etc to an excessive degree, with too much attention to detail and insufficient delegation

http://en.wiktionary.org/wiki/micromanagement

Freedictionary.com defines it as

To direct or control in a detailed, often meddlesome manner.

 http://www.thefreedictionary.com/micromanage

Anybody been in a stand-up that felt like that?  Now that Agile has "crossed the chasm" and is the popular method for improving software development, it is likely that we will see micro-management deployed in the disguise of Agile more often.

David Laribee in a recent blog post characterized Agile differently than you usually hear it: he spoke of Agile as a way you do business, disconnected from the many software practices that often come to mind when Agile is referenced (TDD, Stories, XP, Continuous Integration, Stand-up, Pair Programming, etc.).  All too often the software development community looks at the engineering practices that need to be implemented to be "Agile" and fails to appreciate the management practices that have to take hold.  Management practices are more than stories and iterations.  They are more than just involving customers in sprint planning meetings.  Being Agile affects how you approach decision making, prioritization, and organizational strategy.  That is one of the reasons I am reading Agile Management for Software Engineering - to get a better feel for the management mindset that needs to change to support Agile and what that different mindset looks like.  Because if Agile's long-term answer to questions of how to manage Agilely is to point people to the XP books, we've got problems.

Agile is no silver bullet.  It takes a lot of commitment for an organization to be Agile, and characterizing your organization as "Agile" before it truly is - is detrimental.  In some cases misapplication can lead to micro-management, and in other cases it leads to the absence of project management.


Intel says "Skip Vista"

  Ed Bott covered this weeks ago when the news was made public that Intel was planning on skipping Vista, and he provides good insight into the history around that and why it isn't super significant.  Being a former Intel employee, the move doesn't surprise me.  It has been a little under a year since I left the company, and they were already setting up that decision back then.  A lot of the engineering effort to roll it out had slowed way down, if not been iced completely.  There are a lot of things I loved about Intel, but this Vista decision highlights one that always frustrated me.

  Intel seems to promote a "do as I say, not as I do" policy.  They tried mightily to get Microsoft to certify certain platforms as Vista Capable to promote hardware sales, even when the hardware specs made the Vista experience sub-optimal (there is a plethora of articles on this).  They also obviously promote buying newer computers to get the latest processing power at a time when the average email-reading, Internet-surfing consumer has an average CPU usage of below 10% (a personal guess that I believe to be in the ballpark - if anyone knows a source for data like this I would be interested in it).  They want Vista in the consumer market because of the extra CPU requirements that features like the Aero interface bring, yet internally they don't drink their own Kool-Aid.

  Intel is a company, and fundamentally it is about making money; I am sure they looked at the business bottom line and made this decision.  But it is nice to see companies that align their public message with their internal practice, and in this case I think there is a mismatch between the message Intel sells and the one they practice.  There is basically one option for a laptop through Intel's IT department (I don't count the ultra-thin as an option for most, but technically there was the ultra-thin and then the laptop for everyone else).  No Core 2 Duo options, 2 GB of RAM, slow HDD, small screen.  You can read in many places on the web what an appropriate developer machine is (like on Coding Horror), but those specs don't come close.  In my new job I was immediately handed a Core 2 Duo machine with 3.5 GB of RAM, a good-sized laptop screen, plus a large LCD.

  I always got the feeling that even though Intel sells IT as an investment that will bring your company good ROI, internally they see it simply as a cost center.  My manager at Intel realized the insufficiency of the machines we were given and allowed budget for us to go out and buy machines that we could really use.  So for all of Intel's attempts to be cost-conscious by providing a very limited set of IT-supported hardware, all they really did was promote more expense by forcing people to pay for an IT machine plus another machine that would really do what was needed.

 


What happens between Sprints

  I have been working with Agile methodologies for a couple of years now.  I have had a fair amount of practice on real projects and have read a lot on the subject.  Lately Scrum has been at the top of my list.  I like some of the added direction for product/project management that I felt was missing in eXtreme Programming.  One thing that I haven't yet figured out is the cadence or flow between iterations (or Sprints, to use Scrum terminology).  You have a Sprint Planning Meeting, then the Sprint, then a Sprint Review, and then a Sprint Retrospective.  After that, is it a simple lather, rinse, repeat process - is there some dead time - what happens?  I am curious about others' strategies in this space.

  In the past we have kept the flow going between iterations (one iteration right after another), and after the release there was down time for a couple of weeks before the iteration drumbeat started again.  Iterations tended to be high-energy, high-focus times, and while that made us very effective, maintaining it could lead to burnout.  The alternative I suppose is to have lower-energy iterations, but to me that would start to drain some of the effectiveness of Agile.  With Scrum it is a little different - perhaps after every Sprint there is a week of down time before the next Sprint starts.

  With high-energy iterations we try to minimize context switching by maximizing focus.  I like Scrum's thinking in this space, with the three-week to month-long Sprint, versus XP's iteration concepts - you could do XP iterations of the same length, but I like the Scrum philosophy better about how customers and dev teams interact during that time.  At least in my interpretation, Scrum frames the discussion with the customer better regarding the costs of changing their mind.  Sometimes I feel like the XP material wavers on that - there is a cost to changing your mind that customers need to know about, whether that cost is due to rework, dev team context switching, etc.

  Now to close, let me say that by down time I don't mean slacking off.  By minimizing context switching and maximizing focus, sometimes organizational things and even some project things get pushed to the side.  The down time is a good time to clean up those loose ends, in my opinion.  By loose ends I don't mean tying up last-minute bugs or finishing testing, though!

 

Wednesday, July 23, 2008

Many technologies can scale

  I was going to post this on Twitter - but shockingly, Twitter is down - I know, haven't heard that one before.  So in light of that I am dropping my thought on my blog - so 2007.  Dare has some good thoughts around scaling and points out, rightfully so, that you can find examples of many different technologies that companies have scaled successfully, even while people vehemently proclaim there is no way that technology can scale.  Scale in many cases is simply a function of how much work you want to put into it - guaranteed the companies Dare cites have all done some pretty tricky flips to eke out extra performance from their foundational technologies.  In the Team Foundation Server space, which I follow quite heavily, Brian Harry has documented on many occasions Microsoft's efforts to scale their TFS environment to meet the demands they place on it.

As a closing note people that take extreme (or absolute) positions on technology are almost always WRONG!

 


Monday, July 14, 2008

The myth of cost when comparing Oracle Explain Plans

The cost that shows up in Oracle Explain Plans has long been a source of confusion for developers. I can't count how many times I have heard experienced Oracle developers comparing the cost of Explain Plans between different queries.
Now let me define "different queries", because I think that is often part of the confusion. Say you write a query to go get Lot History from Intel's factory databases. As you go to write it, you realize there are two ways you could write it - perhaps the different ways involve different tables or different where clauses. If you then get the Explain Plan for those queries, each will show you the access path the database is taking to retrieve the data you want. Even though you want the same data (in that sense the queries could be considered the same because they yield the same result set), the queries are different, and as such you cannot use the cost information in the two Explain Plans to compare which query is more efficient or faster. Tom Kyte describes in detail why this is in a response on AskTom found here.

Technorati tags: Oracle, SQL

Writing Filters for TFS Alerts - an exercise in frustration

My team recently began looking at possibly migrating to a new TFS configuration where we would take several dev groups with related components/systems, which previously had separate dev management systems, and bring them into one team project.  One of the first things we wanted to make sure was possible was the ability to get alerts only for check-ins in our section of the codebase and for all changes to work items connected to our efforts.

I was somewhat familiar with bissubscribe.exe and knew that there was filtering capability, so I began looking around.  I was amazed at how little documentation there is out there for working with VSEFL (Visual Studio Event Filtering Language).  I found some here, and Clark Sell has some stuff here.  Clark references a link that says you will find everything you want to know and more about Eventing at it; well, I would have to disagree, because there is no good documentation for VSEFL and how to use it effectively.

Accentient has a list of Team System widgets and lists two for use in working with Events and Notifications (Alerts).  Naren's tool hurt me to start with, but ended up helping me in the end.

Here is how it all worked out.

Check-In Subscription

This was the easy part - Buck Hodges had the answer in one of his blog posts and so I just took that and modified it with my path information and away we went - it all worked.

WorkItemChangedEvent

I thought this one was going to be likewise easy with Naren's tool.  Unfortunately it doesn't produce filter expressions that work with bissubscribe.exe.  The way the command line wants the single and double quotes is different from what Naren's tool generates, so I was getting invalid subscriptions created.  They were invalid because they didn't have single quotes around the System.IterationPath string in part of the filter, and without the quotes the filter just wouldn't work.  It took quite a bit of playing to figure out how it should work, but in the end I ended up with a working bissubscribe.exe call with a filter for only those work items that fall under a certain Iteration Path, which is how we are choosing to segment our subprojects within our Team Project.  Below is the final call with some of my specific info stripped out (like my email address!):

BisSubscribe.exe /eventType WorkItemChangedEvent /deliveryType EmailHtml /server someserver:8080 /address someemail@email.com /filter "PortfolioProject = 'FSM DSS' AND (\"CoreFields/StringFields/Field[ReferenceName='System.IterationPath']/NewValue\" MATCH '\\FSM DSS\\TSS.*' OR \"CoreFields/StringFields/Field[ReferenceName='System.IterationPath']/OldValue\" MATCH '\\FSM DSS\\TSS.*')"

Notice the \ before the double quotes at the start of the CoreFields... and then the \ before the close double quotes at the end - those were crucial - but I don't know why - I suspect it is some escape at the command line that I should know - but it worked and perhaps someday I will look it up to try and understand it.  If you know please leave a comment explaining it to me!

I hope this helps someone somewhere so that they have an easier time figuring this out than I did.

Technorati tags: tfs, vsts, visual studio team system, tfs alerts, vsefl

What everyone should know about using Bitmap Indexes with Oracle

I am only through part one of the three-part article that Jonathan Lewis wrote about bitmap indexes, but it was so good that I had to post it now.  He takes on the common understandings and misunderstandings about bitmap indexes.

Technorati Tags: Oracle

Interesting Oracle trick for range queries

http://www.linuxdevcenter.com/pub/a/linux/2004/01/06/rangekeyed_1.html

Why does every iTunes point release require a 70 MB reinstall?

Since the iPhone release, which I think brought iTunes 7.3, over the last couple of months we have had 7.3.1, 7.3.2, 7.4, and now 7.5.  Each required a reinstall (including a reinstall of QuickTime???), and the size continues to grow - last night when prompted to install 7.5 it downloaded and installed 69 MB or something like that; I remember 7 being in the high 40s.  Anybody ever heard of a patch?  I am a software developer, so I understand a little about software distribution.  I just don't get why point releases aren't handled as patches - going from 6 to 7 or 7 to 8 I completely understand a reinstall, but not for point releases.

Technorati Tags: iTunes,Apple

Solving the Oracle Home nightmare - Oracle Locator Express

For a year or two now I have been using a nice systray utility from dbmotive that has served me very well in managing the variety of Oracle Homes that I always seem to have on my machine.  It is called Oracle Locator Express - it looks like it used to be called Oracle Home Selector or something like that or perhaps is a replacement for it.

It runs in your systray and looks something like this

[Screenshot: the Oracle Locator Express systray menu listing the installed Oracle homes]

 

As you can see, I have Oracle homes for Oracle 11g, 10g, XE, and Instant Client (not sure what the Instant Client entry is, though).  For my day-to-day development work I use the 10g client to connect to the various Oracle databases that I use.  Occasionally, when I have something I want to try out or test (like the LINQ to Oracle prototype that is out there), I will switch over to the XE version.  This makes it very convenient and hassle-free compared to what I have had to do in the past.  You just right-click on the icon in the systray and up pops the menu above.  You select the Oracle home you want to be active and away you go.  You do have to restart any applications that you want to use the new Oracle home.  For example, I would need to restart Visual Studio so that it picks up the new path to the Oracle home; the same goes for any SQL apps you might use, like TOAD or PL/SQL Developer.

Entlib 4.0, Unity, Logging Application Block, and a CLR bug makes for a bad day

  One of my first tasks at my new job has been to look at integrating the Exception Handling block and Logging block into our .NET Stack.  We are also exploring using Unity as the Dependency Injection container.

  Things were moving along as I started playing with Unity and the Exception Handling block, but as soon as I tried to simply add logging of the handled exception, it all blew up in my face.  Fortunately others had discovered the problem as well, and it was traced back to a bug in the CLR which had previously been reported and marked as fixed in .NET 4.0.  Now I understand that bugs happen, and especially when they are bugs in the underlying platform there is only so much to be done.  The Entlib team did provide a code fix that could be applied to the source code, and with a custom compilation of Entlib you could be off and running again.  Well, sort of - having a custom version of Entlib introduces other problems when you are talking about using the VS config tool to manage Entlib config.  When deploying this to 30 developers so they can manage Entlib config on their projects, it gets to be problematic, as the instructions for getting the built-in config tool working involve changing solution properties and copying binaries around and are not trivial.

  If I were doing something out of the ordinary I would be more willing to pay the price, but I am trying to do the most basic Unity-EntLib integration here.  I am disappointed that issues with such a common scenario weren't caught before release.  The p&p teams have obviously invested time to make Unity and EntLib play nicely together (a la the Unity extensions that are available out of the box to enable Entlib to work with Unity); I would have imagined that acceptance testing of any sort would have caught this.  Perhaps the explanation is as simple as the issue reproducing differently (perhaps JITting differently) on different machines.  Here is hoping that when the fix to the binaries that is hopefully in the works comes out, the team will explain how this happened.  Based on the principles that p&p espouses, I know they value quality highly, which makes this even more unusual.

 Note: I use this blog to post both Personal and Technical articles.  For a technical only feed use the following URL (http://bryanandnoel.spaces.live.com/category/technology/feed.rss).  For a family only feed use the following URL (http://bryanandnoel.spaces.live.com/category/family/feed.rss)
