Showing posts with label software builds. Show all posts

Friday, March 12, 2010

UnitTest++ == teh awesome

Been using UnitTest++ for the last couple months on my work project. So far, enjoying it. Very simple to:
  1. set up
  2. add new tests
  3. explain to the other developers
  4. ?
  5. profit

Our unit tests are part of our project's Visual Studio solution file. Each unit test project is set up to run its tests as a post-build step. So far, it's caught a number of 'simple' fixes that broke something.
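For reference, the post-build wiring is just the one line below in each test project's post-build event (using Visual Studio's $(TargetPath) macro; this is a sketch from memory, not our actual project settings):

```shell
"$(TargetPath)"
```

Because RunAllTests() returns the number of failed tests as the process exit code, any failure makes the post-build step -- and therefore the build -- fail.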

I've also set up a Hudson server to automatically build our project... But, I didn't want it to fail the build if the tests failed. And, I wanted to collect the XML reports when it's built from the continuous integration server.

So, rather than always blindly calling UnitTest::RunAllTests() in the unit tests' main() function, I made a utility library that looks at an environment variable to determine how it should run the tests.

In the unit tests' main():



int main(int, char const *[])
{
    return myUnitTest_runAllTests("my_test");
}

The utility function:




#include <cstdlib>
#include <fstream>
#include <iostream>

#include "boost/filesystem.hpp"

#include "UnitTest++/UnitTest++.h"
#include "UnitTest++/XmlTestReporter.h"


namespace bfs = boost::filesystem;

struct True
{
    bool operator()(const UnitTest::Test* const) const
    {
        return true;
    }
};



DLLExport int myUnitTest_runAllTests(const char* const testName)
{
    char* xmlDir = 0;
    size_t len = 0;
    errno_t err = _dupenv_s(&xmlDir, &len, "UNITTEST_XML_DIR");

    if (err || len == 0)
    {
        // env var not set; just run the tests with the standard mechanism
        return UnitTest::RunAllTests();
    }
    else
    {
        bfs::path p = bfs::path(xmlDir);

        // free the memory allocated by _dupenv_s
        free(xmlDir);

        // if necessary, create the output directory
        if (!bfs::exists(p) || !bfs::is_directory(p))
        {
            if (!bfs::create_directories(p))
            {
                std::cerr << "Problem creating directory " << p << std::endl;
                return -1;
            }
        }

        std::string fname(testName);
        fname += ".xml";

        // use the / operator to append the filename onto the path
        bfs::path fpath = p / fname;

        std::ofstream f(fpath.file_string().c_str());
        UnitTest::XmlTestReporter reporter(f);

        UnitTest::TestRunner runner(reporter);

        // when outputting to XML, don't propagate test failures as the
        // return code; tests can fail without the automated build thinking
        // the build itself failed
        runner.RunTestsIf(UnitTest::Test::GetTestList(), NULL, True(), 0);

        return 0;
    }
}


And then you just point Hudson's xUnit plugin at the generated reports. It came together surprisingly easily.
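On the Hudson side, the job's build step just needs the environment variable set before the compiler runs (the directory and solution name below are made up for illustration):

```shell
set UNITTEST_XML_DIR=%WORKSPACE%\test-reports
msbuild our_solution.sln /p:Configuration=Release
```

The post-build steps then write their XML into test-reports, and the xUnit plugin is pointed at test-reports/*.xml.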

Saturday, January 10, 2009

best weird debugging experience ever

I am captivated by each nugget of horrible code.

typedef struct RMAP
{
    bill* pbill;
    line* pline;
    rule* prule;
} rmap;

I'm not sure what I find the most interesting... The fact that somebody thought it was fine to copy that struct definition throughout 15+ .c files.

Or, the fact that somebody slightly modified a couple of those instances.

typedef struct RMAP
{
    bill* pbill;
    line* pline;
    rule* prule;
    Bool overrideFlg;
} rmap;

typedef struct RMAP
{
    bill* pbill;
    line* pline;
    rule* prule;
    double accum;
} rmap;

Or... the best subtle weirdness, in one instance someone reordered the pointers.

typedef struct RMAP
{
    rule* prule;
    bill* pbill;
    line* pline;
} rmap;

So... when you debug the following code in any of the _other_ .c files...

void myCrazyFunction(rmap* a_rmap)
{
    rule* r = a_rmap->prule;
    double reduction = r->reduction;
    /* ... blahblahblahblahblah ... */
}

At runtime, everything works -- the definition of rmap used by the compiler is the definition within that .c file, so the third pointer-sized slot within the a_rmap struct is assigned to 'r'.

But, when I try to debug the code in Visual Studio and hover over a_rmap->prule, I see a bunch of garbage. I hover over 'r', and I see it's got the values I expect.

I'm guessing the debugger finds the first instance of the rmap type in the .pdb file, and the one oddly ordered struct just so happens to be in the first .c file alphabetically (and also first in build order).

But, that realization didn't come until after about 2 hours of wild goose chases through the rest of the call stack, eliminating the other 'more likely' possibilities.

Good times.

Wednesday, October 08, 2008

hudson == teh awesome

Hudson is great. Lightyears beyond CruiseControl. Everything configurable through the (awesome) UI, an ecosystem of plugins, and incredibly simple installation.

We don't have that complex a build, but our SCM team looks at me like I'm a crazy person when I point out yet another issue with the build. After the umpteenth time I've found yet another build problem, I decided to give Hudson a try. We need to checkout some modules from the trunk, and others from a branch. It was bad enough when their chosen solution to that problem meant we couldn't automatically trigger the builds with a check-in -- builds had to be forced from the UI, then a pre-build step did a CVS checkout of the modules from the appropriate branches. The last straw was realizing that the pre-build script also contained some logic that overwrote any changes we made to our build.xml file with some out-of-synch version of build.xml that one of them had created at some point in the past. The sort of thing you'd like a heads up about.

With Hudson, builds have been a snap to set up -- monitor these CVS modules, run this script from the repository if any changes are checked in, make build artifacts matching this regex pattern downloadable from the UI, keep only X builds on the CI server.

Hudson doesn't let you check out from a mixture of HEAD/branch either -- but it's simple enough to set up multiple build commands, the first of which does the branch checkout. We won't get a complete changelog or full CI automation, but it's good enough.
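That first build command ends up being nothing more than an explicit checkout of the branched modules (module and branch names below are hypothetical):

```shell
# Hudson's own CVS configuration handles the trunk modules; this step
# grabs the ones that live on the branch.
cvs checkout -r our_release_branch branched_module_a branched_module_b
```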

Saturday, May 24, 2008

cruisecontrol == teh suck

Every time I try to dive into configuring CruiseControl, I hate it a little bit more.

Monday, May 05, 2008

project A makes baby jesus cry

And so... for some reason I'd gotten it into my head to volunteer for Project A at work. Project A is a rules engine for processing ... things... Another fellow newbie on the project (Hi Jeff!) and I are slowly coming up to speed on the crazy.

How crazy? Let me count the ways...
  • Release every month!
  • Without a project manager, development manager, or release manager. Who needs 'em? Crybabies.
  • We can't control the # or scope of change requests coming in each month. State regulations drive all the changes to the system. Some nebulous decision process allows us to occasionally drop some changes from the queue... but we don't know which until the deadline has been blown.
  • Crumbling 12+ year old code base. Not given love or documentation over the years. Occasionally tossed a bucket of fish heads. You can tell which dark corners survived from the halcyon days -- they use the hacked-in exception handling in what appears to be the correct way. While I appreciate the cleverness of adding exception handling to C (preprocessor macros around some setjmp/longjmp magic), I'd appreciate it more if anyone had bothered to write a brief document describing said 'right' way. An example in the .h file? Pshaw! That's for suckers! In the 3 weeks I've spent diving through the code to correct memory leaks, I've found at least 4-5 different exception handling idioms. Many possibly broken sections of code swallow exceptions with no comment to explain whether the author did so accidentally or on purpose. If there aren't comments, it's usually right... for certain values of right. If there are comments, the author clearly either thinks exceptions work like Java's or has no clue.
  • Functions are named as ambiguously as possible with regard to whether they return a reference to a list that shouldn't be modified, or a copy of the list that's safe to modify and must be cleaned up by the caller.
  • It doesn't matter that developers create the builds pushed to customers, right?
  • Oh, and developers have been doing the source control labeling too.
  • Oh, and they're doing both of those tasks half-assed
  • The developers have been too lazy or too scared of the HP-UX makefile to make it work without manually copying all source files from their CVS checkout directory to the application directory.
  • You know what'd be a great idea? Basing the fancy new .Net-based Product B's rules engine on Project A. Wait... let's add a Java JNI wrapper around its re-packaged Project A. Awww, yeah... now that's good and f*^@#$ed up.
  • Cherry on top: some genius decides to branch to support Project B. "Branches are neat! Wait... branches are hard... oh well, we'll just branch this one directory that has the DAO stuff, that code definitely has to be different between Project A and B. That's what branching is for, right?" The branched directory also contains nearly identical files that will be slowly, and not-so-slowly, diverging to meet each project's mostly identical monthly release requirements. What? Labels on the branch/trunk to indicate merged code? Nah, that'd be too helpful. I should consider myself lucky that the branch has only existed for a year.
  • Oh yeah, we're releasing Project B mid-Summer. Sweet!
Somehow, I'm the calming influence on Jeff. Probably up to the point where he reads this rant. C'mon, we're in it together! You know, like Musketeers! But less flouncy.

I'm pretty sure I'm living in a sitcom written by The Daily WTF. The alternative is unthinkable.

This is Mark's cue to waltz through with a comment where he doesn't say a thing. Which I appreciate.

Saturday, April 19, 2008

more ponytails than should be allowed by law

Decided to go to the April UJUG meeting a couple nights ago. The presenters varied a little in quality, but ranged from average to great. The topic was interesting, though I would have enjoyed more "what went horribly wrong" war stories.

I thought it was telling that even though Maven was held up as a savior for many of the projects, everyone who spoke up mentioned their love/hate relationship with it. It sounded like the LDS church's team had embraced it the most, and a big part of their success with it appears to be because they've dedicated 2-3 engineers to supporting it full time within their organization.

Turnout was huge, 100+. I think I prefer the smaller, and less formal, UPyUG meetings. And it's not just because I think software developers with long hair look silly. So many ponytails at the UJUG meeting. And at least one beret.

Tuesday, April 04, 2006

good 'n bad 'n plenty

Good stuff:
  • Haagen Dazs Mayan Chocolate Ice Cream
  • Subversion
  • The 15 Mbps internet connection due to be installed tomorrow. Think of all the pr0n, and pirated music, and ... I'm sure there are other things on the internet. Pirated software? Pirated movies? Pirates? Pilates?
  • She Wants Revenge. Creepy stalker music is the best music.
Bad stuff:
  • Being the merge/build monkey at work. Some of it is good... I have a good view of the whole project. But, then crunch time comes around, and I'm the guy at the end of the line that has to integrate all the changes before passing things along to CM. Contractors are late with their delivery. Partly because of last-minute requirements updates; partly because of them thinking testing on a virtual Windows host under VMWare is adequate for a system that'll be deployed across 4 Solaris servers. I can't do anything but wait and watch as things come screeching to a halt. Right when things get to me. And I'm taking tomorrow off. And I'm wasting time writing this.

Thursday, February 16, 2006

As if I needed another post on my blog proving that I'm the world's biggest dweeb.

Although I'm back in software development, after a couple years of being in Software CM, I'm getting sucked into build / source-control stuff again. Which isn't bad. Not every developer likes worrying about builds, or finds version control interesting. But, once you've seen the difference between 'good' builds and 'bad' builds... Oh, lordy, do bad builds suck.

At work, we're doing Java. Which means using Ant, simply because nearly all IDEs support it. Ant isn't necessarily horrible... But, trying to create a build for a complex system can be very, very annoying. I've been able to whip our builds into shape using my new best friend, the import task. In particular, the nugget of gold buried deep in the manual's example for the subant task.

There's very little unique information required for a Java build -- especially if you're a nazi and insist everyone in the group organize their source similarly. Being able to use the import task to share a common template among 10 (and soon 30+) components of our production system is going to save a lot of time and maintenance headaches in the long run.

That being said, Ant is horrible to work with. It's not just that the build script is written in XML... ugh. Creating flexible build scripts with Ant is a pain in the ass, particularly when you have a large set of interrelated components to collaborate on, with dependencies to track among the components (and thirdparty applications).

There are a lot of Ant extensions, but little in the way of cohesion among them, or a good howto or round-up describing them. Ivy looks very cool, but it also seems like it could be a bootstrapping headache... particularly if the network is unreliable.

Ultimately, I want a build tool that provides:
  • Global dependency tracking for all components
  • Ability to build individual components easily, without forcing you to repeat yourself everywhere, and without forcing you to put dependency information in a single file, which would quickly become a merge nightmare
  • Easy extension of basic functionality via scripting language
  • Not force you to learn a _new_ tool-specific scripting language
  • Not force you to use 3-4 scripting languages to do reasonably complex things. e.g. the horrible M4 + Perl + Unix Shell / MS Batch spaghetti mess that soon develops in order to do anything reasonably complex with Make.
Boost Build (based on Jam) is reportedly good. But, IMHO, the best thing since sliced bread is SCons. It is so nice to be able to use a single language, Python, to both define your build and extend the basic functionality of the tool. Having a fully functional scripting language as part of your build script pays for itself quickly.

When you only have to worry about a single application, it may seem like overkill. But, if you've ever had to build a large system of applications, shared libraries, manipulate text files, increment build numbers, etc for 5 different target OSes... Builds aren't simple and using the right tools makes a huge difference.

Speaking of development tools, a decent version control system is also invaluable. My new favorite is Bazaar-NG. The ability to work offline and safely rename/move files and directories is sweet. Another huge bonus is that you don't have to manually track previous merges between branches (unlike CVS/Subversion). And, it doesn't leave version-control turds spread throughout my tree.

Even if bzr isn't ready to provide version control for my whole organization, it is very easy to set up a single-user repository. At the moment, I'm responsible for integrating the work done by our internal developers using PVCS, and an external consulting firm that uses Subversion. It may seem like overkill to throw a third version control system into the mix, but it works. It stays out of the way (no VC turds) and allows me to have 4-5 branches of development -- one for my work, one to merge from Subversion, one to merge from PVCS, and another two or three to experiment in. And, I can merge between the branches w/out the tool forcing me to manually remember what had been merged.
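The layout is simple enough to sketch (branch names illustrative, not my actual tree):

```shell
bzr init-repo integration    # one shared repository holding all the branches
cd integration
bzr init work                # my own development
bzr branch work from-svn     # target for drops merged from Subversion
bzr branch work from-pvcs    # target for drops merged from PVCS
```

New drops get committed on their import branch and then merged into work; bzr remembers what has already been merged, so repeated merges only pick up the delta.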

Books-I'm-Reading-News:
  • Finished the Steven Erikson books I'd ordered last month. From England. Because I have no patience.
Where-I'm-Going-News:
  • I'll be taking some training classes at our NYC office in early March. I plan to eat a lot of food from carts.