Originally posted by rx72c
You cannot detect errors in implementing as the system is being set up and hasn't even started operating, so no errors can be made.
"The
implementation stage developers the new system to the participants.
It involves using the solution to solve the problem.
During the Implementation stage, you will also undertake System Testing. This is completely different from Testing, Evaluating and Maintaining. This is the distinction you have to make:
Testing, evaluating and maintaining assumes that the new system has already been fully implemented. And I quote from the textbook: "Participants expect the system to be working correctly. The successful operation of a system involves the information technology working correctly and the participants using it effectively".
Testing a solution ensures that it works. So now, you cannot say that the final stage is the most difficult, but it can be the most expensive (do you see what I'm saying here, though?).
The performance of the system, that is, its efficiency, is tested in the final stage. Following testing comes the evaluation, whereby a determination is made as to whether or not the system is working as required. Therefore, this is after the system has been established (working), with "minor problems being fixed" (quote from Heinemann).
Getting back to the above, which I have quoted from you: you can detect errors in implementation, because that is covered completely in 'System Testing'. It is true that the system is being set up, but if what you are saying is correct (that is, the system has not started operating, so no errors can be made), then how are the participants going to be trained?
I'll finish this off with System Testing, which is a process that occurs during Implementation.
System Testing
Testing a system is a very important part of the implementation of a system. Without rigorous testing, the system cannot be guaranteed to work as expected (ie errors are harder to detect, re-testing is more expensive, and so on). Tests must be designed to examine the system's operation under all possible events. It is necessary to test both the information technology and the information processes.
* Hardware is tested using diagnostic software and through general operation. Backup systems should be tested by selecting files to be restored.
* Software is tested using data that has been structured to test all decisions made within the system. This test data must cover all possible combinations of data that may be encountered. It should be based on the original design specifications.
* Information processes are continually tested during the Implementation of the system over a period of time. Minor changes to procedures are immediately implemented.
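To make the second bullet point concrete, here is a minimal sketch in Python of what "test data covering all decisions" can look like. The `shipping_cost` rule and its dollar values are entirely hypothetical, invented for illustration; the point is that the test cases hit every branch of the decision logic, including the boundary value.

```python
def shipping_cost(order_total, is_member):
    """Hypothetical business rule: free shipping for members
    or for orders of $100 or more, otherwise a flat $10 fee."""
    if is_member:
        return 0
    if order_total >= 100:
        return 0
    return 10

# Test data structured to cover every combination of the two
# decisions (membership x order size), including the boundary
# case of an order of exactly $100.
test_cases = [
    # (order_total, is_member, expected_cost)
    (50,  True,  0),   # member, small order
    (150, True,  0),   # member, large order
    (99,  False, 10),  # non-member, just under the boundary
    (100, False, 0),   # non-member, exactly on the boundary
    (150, False, 0),   # non-member, large order
]

for total, member, expected in test_cases:
    actual = shipping_cost(total, member)
    assert actual == expected, f"failed for ({total}, {member})"
```

Designing the cases from the rule itself (rather than from the code) is what ties this back to the original design specifications.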
but most likely it's Testing, Evaluating and Maintaining, because if the system has errors the whole system development cycle has to be started all over, making it hard and expensive.
It makes it expensive, but it doesn't make it any harder to detect errors. The question asked at which stage it is hardest to detect errors and at which stage it is the most expensive. You are correct when you say that the most expensive is during testing, as a new System Development Life-Cycle is required, but if it is 'easier' to detect errors in the final stages (namely Testing), then how could that also be the hardest stage in which to detect errors (ie bugs)?
I go back to my proposal (above): divide the question into two, or get rid of one of the terms, ie:
1. When is it the most difficult to detect errors?
2. When is it the most expensive to correct problems?
Either way, some people could argue for A and the rest could argue for B; as long as you have an answer supported by logical statements, you will get the marks.
I have seen past trial papers where examiners cannot agree on Multiple Choice questions so they have answers like:
"A or C"
"Remark Multiple choice"
and so on.
Back later, gone to watch the news