Exercise Solutions (Unrestricted)
The diary is one example; the manually-written bill would be another.
[The interesting thing about the diary is that it very often surfaces as a candidate entity in exercises, and is quite frequently kept as an entity in the analysis model. However, when it comes to design time -- in the book it happens during the CRC workshop -- the idea of a diary causes as many problems as it solves. It is when (perhaps) the invented notion of a diary-per-table surfaces that the diary starts to show its worth. But why was the diary not a direct clue, only "half a clue"? Because it is existing-solution-up-for-replacement, rather than subject matter. JD]
The invoice would seem clearly to be a part of the subject matter. It will still be there when the new system is in place, and it will be an important thing, whose existence and state the new system is very likely to model. Human authorization will still be there too. I would classify both of these as subject matter. The rubber stamp? There or not, potentially subject matter or existing system up for replacement, I'd deem it irrelevant in either case, and thus not include it.
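The classification above could be sketched as a first-cut model. This is purely illustrative: the class and attribute names are invented for the sketch, not taken from the book's model, and the rubber stamp is deliberately absent because we classified it as irrelevant.

```python
# A first-cut sketch of the subject-matter entities identified above.
# Names (Invoice, Authorization) are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Invoice:
    """Subject matter: invoices exist before and after the new system,
    and the system will model their existence and state."""
    number: str
    supplier: str
    amount: float
    authorization: Optional["Authorization"] = None

    @property
    def authorized(self) -> bool:
        return self.authorization is not None

@dataclass
class Authorization:
    """Human authorization persists as subject matter. How it was
    recorded (rubber stamp, ink colour, penciled scribble) is of no
    consequence, so no such attribute appears here."""
    authorized_by: str
    on: date

inv = Invoice("INV-042", "Acme Supplies", 150.0)
assert not inv.authorized
inv.authorization = Authorization("J. Bloggs", date(2024, 1, 15))
assert inv.authorized
```

Note that the act of authorization is modelled, while the stamp that records it is not; that is the dividing line the answer draws between subject matter and irrelevant detail.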
Assumption 1: All those years ago, one could not have expected one's suppliers to stop sending purchase invoices, even if this were some kind of corporate group where the supplier and the customer were part of the same group.
Assumption 2: A standalone PC would not facilitate electronic authorization of purchase invoices.
Assumption 3: Investigation has shown that the rubber stamp itself is of no consequence. It could use a different ink color. It could be completely redesigned. It could be replaced by penciled scribbles. And the system-to-be wouldn't "give a damn" about any of these changes.
It's possible, now, that authorization and invoice provision no longer need to be done physically. We can treat both as existing system up for replacement.
Assumption 1: There is a possibility that some suppliers can be persuaded to submit invoices electronically.
Assumption 2: It is not infeasible that authorization can be done electronically in a way that is both convenient and trustworthy.
It's eminently doable. Forms are easily understood, even if they're terribly badly designed. People can be interviewed and asked to introspect on what they do and why they do it. If the requirements indicate that the system-to-be is intended to do the same as the existing, clerical system, but perhaps faster and more reliably, i.e. computerization is what is required, then a systems analysis would make a lot of sense. We have a lot more experience of systems analysis than we do of object-oriented systems development.
On the other hand, the existing system may have flaws or may be doing things in a less than optimal way. Or the system-to-be may be required to accomplish things that the existing system doesn't address at all. If that were the case, and the only activity prior to design (invention) were existing systems analysis, we would surely have missed some valuable, extra, definitive input to the design. Also, such a system tempts one to consider a description using dataflow and dataflow diagrams. However, experience (and, in hindsight, reason) shows us that a dataflow picture (functions activated by the arrival of dumb data) isn't very amenable to having objects designed from it. Finally there is the problem of "blinkers". Once an existing system has been extensively studied, it can be difficult to conceive of new ways and means. Of course, the job of "systems analyst" was usually taken to include the skills to suggest improvements and to suggest new and possibly more appropriate approaches.
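The claim that a dataflow picture resists being turned into objects can be illustrated with a deliberately tiny sketch (all names invented): in the dataflow style, free functions are activated by the arrival of dumb data; in the object style, the same behaviour lives with the data it concerns.

```python
# Dataflow style: functions activated by the arrival of dumb data.
# The data (a plain dict) has no behaviour of its own.
def validate(record: dict) -> dict:
    record["valid"] = record.get("amount", 0) > 0
    return record

def post(record: dict) -> dict:
    if record["valid"]:
        record["posted"] = True
    return record

# The same work, object style: data and behaviour together.
class Transaction:
    def __init__(self, amount: float):
        self.amount = amount
        self.posted = False

    def post(self) -> None:
        if self.amount > 0:
            self.posted = True

# Both arrive at the same result, but the dataflow version gives no
# hint of where object boundaries should fall: the dict could be
# threaded through any number of unrelated functions.
flow = post(validate({"amount": 10}))
obj = Transaction(10)
obj.post()
assert flow["posted"] and obj.posted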
Given the ominous description of the existing system, one must expect just about no advantages from studying the existing computer system. The disadvantage is that it would be extremely difficult to discover useful input to the development of the replacement system, and thus the return on a large, and painful, investment would be very poor. This is the reverse engineering problem. There are perils in any reverse engineering -- "Let's take apart one of our competitors' washing machines and copy what they did." With software engineering, the "soft" amplifies the perils and, along with our being unable to use our senses on software, hugely amplifies the difficulties.
Assumption 1: The BASIC is all we have. There is no other documentation, specification or explanation.
Assumption 2: The BASIC syntax is difficult to understand in the large because it has no singular underlying conceptual paradigm and because there is little in the way of modularization apart from some dubious function mechanisms (the BASIC I remember didn't even have recursive function calls).
Assumption 3: The BASIC code is difficult to understand because the comments (if any) are only of typical (i.e. poor) quality.
Assumption 4: The BASIC code is difficult to understand because Visual BASICs tended to encourage developers to develop a nice GUI (view), and then to just tack the essential application logic (model) onto the GUI "objects" (possibly even then being led to claim that the system was "object-orientated").
[Ouch -- I seem to be in a bitter mood this morning. Surely it won't be that bad! JD]
I suspect that the dataflow diagrams would look like the easiest place to start. My experience tells me, however, that I should model as much as I can with entities and relationships before I start to supply any crucial processing perspectives. The later one begins to orient around objects, the worse those objects seem to be.
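The "entities and relationships first" ordering can be sketched as follows. This is a hedged illustration only: the entity names (Supplier, Invoice) are assumed for the purchase-invoice domain of the earlier exercises, and the single method added at the end stands in for the "crucial processing perspectives" that come later.

```python
# Step 1: capture the static model -- entities and the relationships
# between them -- before any processing is considered.
from dataclasses import dataclass, field

@dataclass
class Invoice:
    number: str
    amount: float

@dataclass
class Supplier:
    name: str
    # Relationship: one Supplier sends many Invoices.
    invoices: list = field(default_factory=list)

    # Step 2 (later): processing perspectives are added as behaviour
    # on the entities they concern, rather than driving the model.
    def total_outstanding(self) -> float:
        return sum(inv.amount for inv in self.invoices)

s = Supplier("Acme Supplies")
s.invoices.append(Invoice("INV-001", 100.0))
s.invoices.append(Invoice("INV-002", 50.0))
assert s.total_outstanding() == 150.0
```

Starting from the dataflow diagrams instead would have fixed a functional decomposition first, and the entities would then have had to be carved out of it, usually badly; the point of the answer is to reverse that order.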
“Your team is going to carry out the analysis for the replacement of the investment analysis software we inherited from Ripe For Plucking Inc. Do you think five weeks will be enough?”
“How easy is it going to be finding a couple of investment and portfolio experts for us to talk to?”
“Ah… You can’t talk to them. They’re over the other side of the world and it’s all rather sensitive anyway; they mustn’t know just yet that the replacement system is being developed here. You’ll just have to look at the listings of the current system.”
“But it was written in WIZFL 3 and what few comments there are, are out of date. And as far as I know, the original design documentation was “lost” when the disks were shipped to us.”
“Well, what about that reverse engineering tool I bought you last year? Surely you can just use that.”
Write your reply.
"You can't reverse the sausage machine and get the pig back." Saint Gregory Thaumaturgus, AD 271.
While a reverse engineering tool can illuminate a little of the structure of a software system, albeit in a somewhat trivial way, it cannot tell one anything about why that structure was chosen; and, most importantly for us, it tells one very little about the nature of the problem that the system was solving. At each stage of transformation towards eventual code and compilation, structure and information are lost.
Surveying reverse engineering success stories via Google, we found that the vast majority of the first 200 were from vendor sites. Developer postings regarding reverse engineering successes were conspicuously thin on the ground. There were several horror stories, however. I've attached one such from Acme Broking Systems' chief architect. Perhaps you recall that project; they were one of our competitors. It was that project where the user group successfully mounted a class action against Acme, and where the project manager had their stock options revoked.
In software trials run at the Govgravytrain Institute on 382K of executable program, where groups of approximately 115 subjects from the top quartile of graduate computer science students of several nearby universities were shown memory bit patterns, none of them could figure out what the software would do; when shown assembler printouts with address mnemonics of average quality, only 2 students correctly deduced what the software would do, taking on average 19 hours each to come to their conclusions; when shown BASIC code of average quality, 3 students made correct deductions after 15 hours of study; and when shown Smalltalk code of average quality, 5 students made correct deductions after an average of 10 hours of study. The conclusion was that there was no magic point at which the semantics could be deduced from the code.
[None of these arguments succeeded. The reverse engineering tools were used. The analysts tried their best over the next three months, but to no avail. Things were starting to look shaky, so the boss managed to get h(er/im)self promoted to another project that was just starting up. The project was cancelled a few weeks later. And so another project was added to the pile of those that didn't make it out of the door. JD] [And this is just an illustration of a possible answer, of course. Very little of this actually took place. JD]