More whining about work.
It's becoming increasingly apparent that the previous development staff of Project A were lazy. Definitely not stupid; only people confident in their own brilliance could produce such a poorly documented, nightmarish code base.
Lazy in the sense of, "Hey, that's acting weird... and it seems hard to figure out. I'll blame X and the business people will live with it."
The latest example is the supposedly 'inconsistent' regression test results. When Jeff and I joined the project, we were repeatedly warned in reverent tones of the odd non-deterministic regression test. The first time the 10k+ bills in the regression suite are run, some bills fail. If you run those bills through again, they work.
On the face of it, it seems very odd. The way it was presented to us, it seemed that the failing bills were random and that re-running the identical bill would produce different results. Various things were blamed: memory leaks, buffer overruns, bad casts.
As we dug into it, it became clear that wasn't the case:
- The bill processing engine wasn't restarted between the failure and the success.
- For at least one large set of the failing bills, the same bills fail every time the regression test is run.
- The regression test's preparatory script actually modifies the bills in such a way that they become illegal.
- The engine sees an illegal bill and tweaks it so that it works again.
- But many of the failing bills are only partially tweaked because they have manual override codes. The partial tweaking sets the bill up to fail the first time, but corrects things enough that it will work the second time through the system.
Definitely a bug in the engine: the data that's reset by the prep script shouldn't make it misbehave that badly.
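To make the mechanism concrete, here's a minimal sketch in Python of the failure mode as I understand it. The Bill fields, the prep script's reset, and the engine's repair step are all hypothetical stand-ins for the real system; only the shape of the behavior matches.

```python
from dataclasses import dataclass

@dataclass
class Bill:
    rate_code: str          # hypothetical field the prep script clobbers
    manual_override: bool   # overridden bills only get a partial repair

def prep_script(bill: Bill) -> None:
    """The suite's prep step: resets a field, leaving the bill illegal."""
    bill.rate_code = ""

def engine_process(bill: Bill) -> bool:
    """Processes a bill; quietly repairs illegal bills as a side effect."""
    if bill.rate_code == "":
        bill.rate_code = "DEFAULT"  # the tweak persists on the bill itself
        if bill.manual_override:
            # Partial tweak: the override blocks a full repair this run,
            # but the persisted tweak means the next run sees a legal bill.
            return False
    return True

# Engine isn't restarted between runs, so state carries over.
bills = [Bill("STD", False), Bill("STD", True)]
for b in bills:
    prep_script(b)
print([engine_process(b) for b in bills])  # [True, False] <- "random" failure
print([engine_process(b) for b in bills])  # [True, True]  <- passes on re-run
```

Run it twice over the same bills and you get exactly the folklore behavior: deterministic failures that vanish on the second pass, no memory corruption required.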