Monday, July 30, 2007

Systems Engineering

Systems Engineering (SE) is an interdisciplinary approach and means for enabling the realization and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts. Systems Engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems Engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user needs.




Closely related fields:
Many related fields may be considered tightly coupled to systems engineering. These areas have contributed to the development of systems engineering as a distinct entity.



  • Cognitive systems engineering: This is Systems Engineering with the human integrated as an explicit part of the system. It draws from the direct application of centuries of experience and research in both Cognitive Psychology and Systems Engineering. Cognitive Systems Engineering focuses on how humans interact with their environment and attempts to design systems that explicitly respect how humans think. It works at the intersection of problems imposed by the world; needs of agents (human, hardware, and software); and interaction among the various systems and technologies that affect (and/or are affected by) the situation. Sometimes referred to as Human Engineering or Human Factors Engineering, this subject also deals with ergonomics in systems design.


  • Control engineering: The design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of Systems Engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process.


  • Industrial engineering: This is a branch of engineering that concerns the development, improvement, implementation and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, materials and processes. Industrial engineering draws upon the principles and methods of engineering analysis, synthesis and design, together with the mathematical, physical and social sciences, to specify, predict and evaluate the results to be obtained from such systems.


  • Interface design: Interface design and its specification are concerned with ensuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes ensuring that system interfaces can accept new features: mechanical, electrical, and logical interfaces may need reserved wires, plug-space, command codes and bits in communication protocols. This is known as extensibility. Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) is another aspect of interface design, and is a critical aspect of modern Systems Engineering. Systems engineering principles are applied in the design of network protocols for local-area networks and wide-area networks.


  • Operations research: Operations research supports systems engineering. The tools of operations research are used in systems analysis, decision making, and trade studies. Several schools teach SE courses within the operations research or industrial engineering department, highlighting the role systems engineering plays in complex projects. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.


  • Reliability engineering: This is the discipline of ensuring a system will meet the customer's expectations for reliability throughout its life; i.e. it will not fail more frequently than expected. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability and logistics engineering. Reliability engineering is always a critical component of safety engineering, as in failure modes and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering. Reliability engineering relies heavily on statistics, probability theory and reliability theory for its tools and processes.


  • Performance engineering: This is the discipline of ensuring a system will meet the customer's expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed, or as the number of such operations that can be executed per unit of time. Performance may degrade when operations queue up to be executed because the capacity of the system is limited. For example, the performance of a packet-switched network would be characterised by the end-to-end packet transit delay or the number of packets switched within an hour. The design of performant systems makes use of analytical or simulation modeling, whereas the delivery of a performant implementation involves thorough performance testing. Performance engineering relies heavily on statistics, queuing theory and probability theory for its tools and processes. (A small queuing sketch follows at the end of this list.)


  • Safety engineering: The techniques of safety engineering may be applied by non-specialist engineers (e.g., EEs or SEs) in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs, and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems.


  • Security engineering: This can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes.


  • Software engineering: From its beginnings, software engineering has shaped modern Systems Engineering practice to a great degree. The techniques used in handling large, complex software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods and processes of SE (e.g., see SysML, CMMI, Object-oriented analysis and design, Requirements engineering, Formal methods and Language theory).


  • Supportability engineering: Any system, when operational and delivering the requirements defined in the design, needs some degree of support to maintain its operational functions. Supportability engineering is an analytical process that determines the optimal mix and distribution of support resources. By using the reliability aspects of the system and by isolating failure modes, causes and effects, the system's maintainability can be designed in. A properly designed maintenance plan determines the support resource capacities, such as trained support staff, documentation, spare parts, test equipment, repair facilities and contracted support, necessary to reduce the mean system downtime.
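As a hedged illustration of the queuing-theory tools used in performance engineering (this sketch is not from the original post, and the arrival and service rates are invented), the classic M/M/1 formulas estimate utilisation and average delay for a single queue such as a packet switch port:

```
def mm1_metrics(arrival_rate, service_rate):
    # Classic M/M/1 queue results: utilisation, average number in system, average time in system
    if arrival_rate >= service_rate:
        raise ValueError("Queue is unstable when arrivals meet or exceed service capacity")
    rho = arrival_rate / service_rate          # utilisation
    avg_in_system = rho / (1.0 - rho)          # L = rho / (1 - rho)
    avg_time = avg_in_system / arrival_rate    # Little's law: W = L / lambda
    return rho, avg_in_system, avg_time

rho, in_system, delay = mm1_metrics(arrival_rate=800.0, service_rate=1000.0)  # packets per second (assumed)
print(f"Utilisation: {rho:.0%}, average packets in system: {in_system:.1f}, mean delay: {delay * 1000:.1f} ms")
```

Under these assumed rates the link runs at 80% utilisation and an average packet spends about 5 ms in the system; pushing the arrival rate closer to the service rate makes the delay grow sharply, which is exactly the queuing behaviour described above.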


synchronization

The process of maintaining one operation in step with another. The commonest example is the electric clock, whose motor rotates at some integral multiple or submultiple of the speed of the alternator in the power station. In television, synchronization is essential so that the electron beams of receiver picture tubes are at exactly the same spot on the screen at each instant as the beam in the television camera tube at the transmitter.

1. The arrangement of military actions in time, space, and purpose to produce maximum relative combat power at a decisive place and time.
2. In the intelligence context, application of intelligence sources and methods in concert with the operation plan.

Synchronization is a problem in timekeeping which requires the coordination of events to operate a system. The familiar conductor of an orchestra serves to keep the orchestra in time. Systems operating with all their parts in synchrony are said to be synchronous.

synchronization

In computer science, synchronization refers to one of two distinct, but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, so as to reach an agreement or commit to a certain sequence of action. Data synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintain data integrity. Process synchronization primitives are commonly used to implement data synchronization.

Process Synchronization
Process synchronization refers to the coordination of simultaneous threads or processes to complete a task in the correct runtime order and to avoid unexpected race conditions.
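As a minimal, hedged illustration of a process-synchronization primitive (this example is not part of the original definition; the counter and thread counts are invented), a lock keeps two Python threads from racing on a shared variable:

```
import threading

counter = 0
lock = threading.Lock()   # synchronization primitive guarding the shared counter

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 200000; without the lock, lost updates could make it smaller
```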

Data Synchronization
A distinctly different (but related) concept is that of data synchronization. This refers to the need to keep multiple copies of a set of data coherent with one another.

Thursday, July 26, 2007

Degrees of rigor

Degrees of Rigor

The degree of rigor is a function of many project characteristics. As an example, small, non-mission critical projects can generally be addressed with somewhat less rigor than large, complex mission critical applications. It should be noted, however, that all projects must be conducted in a manner that results in timely, high quality deliverables.

Four different degrees of rigor are defined for the APM:

Casual. All APM framework activities are applied, but only a minimum task set is required. In general, umbrella tasks will be minimized and documentation requirements will be reduced. All basic principles of software engineering are still applicable.

Structured. The APM framework will be applied for this project. Framework activities and related tasks appropriate to the project type will be applied and umbrella activities necessary to ensure high quality will be applied. SQA, SCM, documentation and measurement tasks will be conducted in a streamlined manner.

Strict. The APM will be applied for this project with a degree of discipline that will ensure high quality. All umbrella activities will be applied and robust documentation will be produced.

Quick Reaction. The APM will be applied for this project, but because of an emergency situation, only those tasks essential to maintaining good quality will be applied. "Back-filling" (e.g., developing a complete set of documentation, conducting additional reviews) will be accomplished after the application/product is delivered to the customer.

Monday, July 23, 2007

SOFTWARE METRICS-TEAM4

Software Metrics

Metrics are management tools which are used to estimate the cost and resource requirements of a project.

In order to conduct a successful software project we must understand the scope of work to be done, the risks incurred, the resources required, the tasks to be accomplished, the milestones to be tracked, the cost, and the schedule to be followed. Project management provides this understanding.

Before a project can be planned, objectives and scope should be established, alternative solutions should be considered, and technical and management constraints should be identified. This information is required to estimate costs, project tasks, and a project schedule.

Metrics help us understand the technical process that is used to develop a product. The process is measured to improve it and the product is measured to increase quality.

Measuring software projects is still controversial. It is not yet clear which are the appropriate metrics for a software project or whether people, processes, or products can be compared using metrics.

Estimates for project cost and time requirements must be derived during the planning stage of a project. Experience is often the only guide used to derive these estimates, but it may be insufficient if the project breaks new ground. A number of estimation techniques exist for software development. These techniques consist of establishing the project scope, using software metrics based on past experience to generate estimates, and dividing the project into smaller pieces that are estimated individually.

TEAM II- LEVELS OF SOFTWARE TESTING


  • Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. (A minimal unit-test sketch appears after this list.)
  • Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
  • System testing tests an integrated system to verify that it meets its requirements; it can sometimes be sub-divided into:
    1. Functional testing
    2. Non-functional testing
  • System integration testing verifies that the system is integrated with any external or third-party systems defined in the system requirements.
  • Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed after system testing and before the implementation phase.

1. Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
2. Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
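As a minimal, hedged sketch of the unit-testing level described in the list above (the discount_price function and its tests are invented purely for illustration), a unit test exercises one small component in isolation against its detailed design:

```
import unittest

def discount_price(price, percent):
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class DiscountPriceTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Integration and system testing would then combine such tested units and exercise them against the architectural design and the system requirements respectively.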

Team II-Image of Software Testing Life Cycle


Team IV-Criticisms of Software Metrics

It is very difficult to satisfactorily define or measure "how much" software there is in a program, especially when making such a prediction prior to the detail design. The practical utility of software metrics has thus been limited to narrow domains where the measurement process can be stabilized.
Management methodologies such as the Capability Maturity Model or ISO 9000 have therefore focused more on process metrics which assist in monitoring and controlling the processes that produce the software.

Examples of process metrics affecting software:
  • Number of times the program failed to rebuild overnight
  • Number of defects introduced per developer hour
  • Number of changes to requirements
  • Hours of programmer time available and spent per week
  • Number of patch releases required after first product ship

Risk management and business continuity- Team 6

Risk management is simply the practice of systematically selecting cost-effective approaches for minimising the effect of threat realization on the organization. All risks can never be fully avoided or mitigated simply because of financial and practical limitations. Therefore all organizations have to accept some level of residual risk.

The figure shows an unavoidable risk that occurs in our day-to-day life.

Whereas risk management tends to be pre-emptive, business continuity planning (BCP) was invented to deal with the consequences of realised residual risks. The necessity to have BCP in place arises because even very unlikely events will occur if given enough time. Risk management and BCP are often mistakenly seen as rivals or overlapping practices. In fact these processes are so tightly tied together that such separation seems artificial. For example, the risk management process creates important inputs for the BCP (assets, impact assessments, cost estimates etc). Risk management also proposes applicable controls for the observed risks. Therefore, risk management covers several areas that are vital for the BCP process. However, the BCP process goes beyond risk management's pre-emptive approach and works from the assumption that a disaster will occur at some point.


Benefits of using SCM. TEAM 5

Basically, you want to know the entire lifecycle of every line of code. You want to track who wrote it, what version of the larger product it belongs to, and you must have the ability to recreate any build. As a software professional, you should strive to move software development from an art to a science. You want to embed tools and processes into your SDLC process that will result in a more predictable software development and release cycle. Doing so will ensure a higher quality software product and reduce overall operating costs.
An SCM allows you to automate repetitive development tasks and manage the concurrent development process of multiple developers on the same project. SCMs enable you to develop software in a distributed environment regardless of the geographical location of your developers. Using an SCM helps you to create a more bug-free product, manage changes, manage bug fixes, and continue to build the next software release. Developer and manager productivity will increase when you use an SCM.

SCM Team5

Software configuration management

Software configuration management is a crucial activity for any software development effort. The software configuration management activity, however, must not delay or impede the rapid software development schedule necessary to meet the harsh time-to-market needs of the E-World.
Consequently, effective and time-efficient software configuration management must be practiced, with all efforts having a justification in functional value. The software configuration management theoretical model that is most commonly referred to in the literature does not easily correspond to the functions that must be accomplished through software configuration management activities. A better model that is functional in derivation, that will be clearly understood, and that will be easily scalable is the subject of this paper.
A functional model of software configuration management is organized into the areas of 1) version control, 2) document control, 3) change management, 4) build management, and 5) release control. This typology corresponds directly to the functional tasks that must be performed for a project and also agrees with the typology of the major software configuration management tool vendors.

A better model for software configuration management that is clearly understood and is scalable is the subject of this paper. Software configuration management can be functionally broken out into the areas of 1) version control, 2) document control, 3) change management, 4) build management, and 5) release control.

TEAM II - NEED FOR SOFTWARE TESTING

Testing is usually performed for the following purposes:

  • To improve quality.
    Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by programmers to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed is the purpose of debugging in the programming phase.
  • For Verification & Validation (V&V)
    Another important purpose of testing is verification and validation (V&V). Testing can serve as metrics and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We can also compare the quality of different products under the same specification, based on results from the same test.
    We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail.
  • Some of the most frequently cited quality considerations.
    Functionality (exterior quality)
    Engineering (interior quality)
    Adaptability (future quality)
    Correctness
    Efficiency
    Flexibility
    Reliability
    Testability
    Reusability
    Usability
    Documentation
    Maintainability
    Integrity
    Structure
    Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application.
  • For reliability estimation
    Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program), testing can serve as a statistical sampling method to gain failure data for reliability estimation. (A small sampling sketch follows this list.)
    Software testing is not mature. It still remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially where human lives are at stake. Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.
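As a hedged sketch of reliability estimation from an operational profile (the input classes, their frequencies, and the stand-in system are all invented for illustration), inputs can be sampled according to the profile and the observed failure fraction used to estimate reliability:

```
import random

# Hypothetical operational profile: relative frequency of each input class
operational_profile = {"small order": 0.7, "bulk order": 0.2, "refund": 0.1}

def system_under_test(input_class):
    # Stand-in for the real system: pretend refund handling occasionally fails
    return not (input_class == "refund" and random.random() < 0.05)

def estimate_reliability(samples=10_000):
    classes = list(operational_profile)
    weights = list(operational_profile.values())
    failures = 0
    for _ in range(samples):
        input_class = random.choices(classes, weights=weights)[0]  # sample per the profile
        if not system_under_test(input_class):
            failures += 1
    return 1.0 - failures / samples

print(f"Estimated reliability: {estimate_reliability():.4f}")
```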

Project planning-team3

Project Planning Software for Pocket PC and Handheld PC
Pocket Plan is a Microsoft Project compatible project planning application for the Pocket PC and Handheld PC, providing the same project planning capabilities as the desktop version of Plan, but running on a Pocket/Handheld PC!
Pocket Plan is a fully usable project planning tool in its own right, providing all the essential features you would expect to find in a PC based planning tool.
Using Pocket Plan you can update your project plans on the move!
Use Pocket Plan stand-alone or use it alongside desktop project planning tools such as Plan for Windows or Microsoft Project. Two-way synchronization is supported via ActiveSync, allowing plans to be edited and recalculated on either device with subsequent re-synchronisation.

Team II-Definition of Software Testing


  • The process of devising a set of inputs to a given piece of software that will cause the software to exercise some portion of its code. The developer of the software can then check that the results produced by the software are in accord with his or her expectations.
  • Software testing is a process used to identify the correctness, completeness and quality of developed computer software. Actually, testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process). It can only find defects, not prove that there are none.


Steps in the risk management process - Team 6

Create a risk mitigation plan
The risk management plan should propose applicable and effective security controls for managing the risks. For example, an observed high risk of computer viruses could be mitigated by acquiring and implementing antivirus software. A good risk management plan should contain a schedule for control implementation and the persons responsible for those actions.
Review and evaluation of the plan

1. To evaluate whether the previously selected security controls are still applicable and effective, and
2. To evaluate the possible risk level changes in the business environment. Information risks, for example, change rapidly with the business environment.
Risk management activities as applied to project management

In project management, risk management includes the following activities:
Planning how risk management will be conducted in the particular project. The plan should include risk management tasks, responsibilities, activities and budget.
Assigning a risk officer - a team member other than the project manager who is responsible for foreseeing potential project problems. A typical characteristic of a risk officer is healthy skepticism.
Maintaining a live project risk database. Each risk should have the following attributes: opening date, title, short description, probability and importance. Optionally a risk may have an assigned person responsible for its resolution and a date by which the risk must be resolved.
Creating an anonymous risk reporting channel. Each team member should have the possibility to report risks that he or she foresees in the project.
Preparing mitigation plans for risks that are chosen to be mitigated. The purpose of the mitigation plan is to describe how this particular risk will be handled - what will be done, when, by whom, and how, to avoid it or minimize its consequences if it becomes a liability.
Summarizing planned and faced risks, the effectiveness of mitigation activities, and the effort spent on risk management.

What is SCM?????????

A current definition would say that SCM is the control of the evolution of complex systems. More pragmatically, it is the discipline that enables us to keep evolving software products under control, and thus contributes to satisfying quality and delay constraints.
SCM emerged as a discipline soon after the so-called "software crisis" was identified, i.e. when it was understood that programming does not cover everything in Software Engineering (SE), and that other issues were hampering SE development, such as architecture, building, evolution and so on.
SCM emerged, during the late 70s and early 80s, as an attempt to address some of these issues; this is why there is no clear boundary to SCM topic coverage. In the early 80s SCM focused on programming-in-the-large (versioning, rebuilding, composition), in the 90s on programming-in-the-many (process support, concurrent engineering), and in the late 90s on programming-in-the-wide (web remote engineering).
Currently, a typical SCM system tries to provide services in the following areas:


  • Managing a repository of components
  • Helping engineers in their usual activities
  • Process control and support

Software configuration management (Architecture)


Software Development Metrics- Team IV


PROJECT PLANNING-TEAM III

Step 1 Project Goals


A project is successful when the needs of the stakeholders have been met. A stakeholder is anybody directly or indirectly impacted by the project.
As a first step it is important to identify the stakeholders in your project. It is not always easy to identify the stakeholders of a project, particularly those impacted indirectly. Examples of stakeholders are:
The project sponsor
The customer who receives the deliverables
The users of the project outputs
The project manager and project team
Once you understand who the stakeholders are, the next step is to establish their needs. The best way to do this is by conducting stakeholder interviews. Take time during the interviews to draw out the true needs that create real benefits. Often stakeholders will talk about needs that aren't relevant and don't deliver benefits. These can be recorded and set as a low priority.
The next step once you have conducted all the interviews and have a comprehensive list of needs is to prioritise them. From the prioritised list create a set of goals that can be easily measured. A technique for doing this is to review them against the SMART principle. This way it will be easy to know when a goal has been achieved.
Once you have established a clear set of goals they should be recorded in the project plan. It can be useful to also include the needs and expectations of your stakeholders.
This completes the most difficult part of the planning process. It's time to move on and look at the project deliverables.


Step 2 Project Deliverables


Using the goals you have defined in step 1, create a list of things the project needs to deliver in order to meet those goals. Specify when and how each item must be delivered.
Add the deliverables to the project plan with an estimated delivery date. More accurate delivery dates will be established during the scheduling phase, which is next.


Step 3 Project Schedule


Create a list of tasks that need to be carried out for each deliverable identified in step 2. For each task identify the following:
The amount of effort (hours or days) required to complete the task
The resource who will carry out the task
Once you have established the amount of effort for each task, you can work out the effort required for each deliverable and an accurate delivery date. Update your deliverables section with the more accurate delivery dates.
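As a hedged sketch of this effort roll-up (the task names, hours, start date, and eight-hour working day are invented assumptions), the effort for one deliverable can be totalled and projected onto a delivery date:

```
from datetime import date, timedelta

# Hypothetical tasks for one deliverable: (task name, effort in hours)
tasks = [("design review", 16), ("build report module", 40), ("write user guide", 24)]

hours_per_day = 8                                  # assumed working day
total_hours = sum(effort for _, effort in tasks)
working_days = -(-total_hours // hours_per_day)    # round up to whole days

start = date(2007, 8, 1)
delivery = start
remaining = working_days
while remaining > 0:
    delivery += timedelta(days=1)
    if delivery.weekday() < 5:                     # count only weekdays
        remaining -= 1

print(f"Total effort: {total_hours} hours (~{working_days} working days); estimated delivery: {delivery}")
```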
At this point in the planning you could choose to use a software package such as Microsoft Project to create your project schedule. Alternatively use one of the many free templates available. Input all of the deliverables, tasks, durations and the resources who will complete each task.
A common problem discovered at this point is when a project has an imposed delivery deadline from the sponsor that is not realistic based on your estimates. If you discover that this is the case you must contact the sponsor immediately. The options you have in this situation are:
Renegotiate the deadline (project delay)
Employ additional resources (increased cost)
Reduce the scope of the project (less delivered)
Use the project schedule to justify pursuing one of these options.


Step 4 Supporting Plans


This section deals with plans you should create as part of the planning process. These can be included directly in the plan.


Human Resource Plan


Identify by name the individuals and organisations with a leading role in the project. For each describe their roles and responsibilities on the project.
Next, describe the number and type of people needed to carry out the project. For each resource detail start dates, estimated duration and the method you will use for obtaining them.
Create a single sheet containing this information.
Communications Plan
Create a document showing who needs to be kept informed about the project and how they will receive the information. The most common mechanism is a weekly/monthly progress report, describing how the project is performing, milestones achieved and work planned for the next period.


Risk Management Plan


Risk management is an important part of project management. Although often overlooked, it is important to identify as many risks to your project as possible and be prepared if something bad happens.
Here are some examples of common project risks:
Time and cost estimates too optimistic
Customer review and feedback cycle too slow
Unexpected budget cuts
Unclear roles and responsibilities
Stakeholder input is not sought or their needs are not properly understood
Stakeholders changing requirements after the project has started
Stakeholders adding new requirements after the project has started
Poor communication resulting in misunderstandings, quality problems and rework
Lack of resource commitment
Risks can be tracked using a simple risk log. Add each risk you have identified to your risk log and write down what you will do in the event it occurs and what you will do to prevent it from occurring. Review your risk log on a regular basis adding new risks as they occur during the life of the project. Remember, when risks are ignored they don't go away.
Congratulations. Having followed all the steps above you should have a good project plan. Remember to update your plan as the project progresses and measure progress against the plan.

Objectives of Software Metrics - Team IV


Limitations in Risk Management -Team 6

Risk is a concept that denotes a potential negative impact to an asset or some characteristic of value that may arise from some present process or future event. In everyday usage, "risk" is often used synonymously with the probability of a known loss.

Risk management is the human activity which integrates recognition of risk, risk assessment, developing strategies to manage it, and mitigation of risk using managerial resources.

Limitations:
If risks are improperly assessed and prioritized, time can be wasted in dealing with risk of losses that are not likely to occur. Spending too much time assessing and managing unlikely risks can divert resources that could be used more profitably. Unlikely events do occur but if the risk is unlikely enough to occur it may be better to simply retain the risk and deal with the result if the loss does in fact occur.
Prioritizing too highly the risk management processes could keep an organization from ever completing a project or even getting started. This is especially true if other work is suspended until the risk management process is considered complete.
It is also important to keep in mind the distinction between risk and uncertainty.

Uncertainty: The lack of certainty; a state of having limited knowledge where it is impossible to exactly describe the existing state or future outcome, and where more than one outcome is possible.

Risk: A state of uncertainty where some possible outcomes have an undesired effect or significant loss.

Control Risk Management ("CRM") is the only real solution to minimize your risk and to increase your profits.

Project planning, Team 3

Project planning is the process of quantifying the amount of time and the size of the budget for a project. The output of the project planning process is a project plan that a project manager can use to track the project team's progress. Project planning is part of project management, which relates to the use of schedules such as Gantt charts to plan and subsequently report progress within the project environment.
Initially, the project scope is defined and the appropriate methods for completing the project are determined. Following this step, the durations for the various tasks necessary to complete the work are listed and grouped into a work breakdown structure. The logical dependencies between tasks are defined using an activity network diagram that enables identification of the critical path. Float or slack time in the schedule can be calculated using project management software.
Then the necessary resources can be estimated and costs for each activity can be allocated to each resource, giving the total project cost. At this stage, the project plan may be optimized to achieve the appropriate balance between resource usage and project duration to comply with the project objectives. Once established and agreed, the plan becomes what is known as the baseline. Progress will be measured against the baseline throughout the life of the project. Analyzing progress compared to the baseline is known as earned value management.
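To make the critical-path idea concrete, here is a hedged sketch (the task names, durations, and dependencies are invented) that computes the project duration and critical path for a small activity network:

```
# Hypothetical activity network: task -> (duration in days, list of predecessor tasks)
tasks = {
    "requirements": (3, []),
    "design":       (5, ["requirements"]),
    "build":        (10, ["design"]),
    "test plan":    (2, ["requirements"]),
    "testing":      (4, ["build", "test plan"]),
}

earliest_finish = {}

def finish(task):
    # Earliest finish = task duration plus the latest earliest-finish among its predecessors
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

project_end = max(finish(t) for t in tasks)

# Walk back from the last-finishing task along the predecessors that determine its finish time
path, current = [], max(tasks, key=finish)
while current:
    path.append(current)
    preds = tasks[current][1]
    current = max(preds, key=finish) if preds else None

print("Project duration:", project_end, "days")
print("Critical path:", " -> ".join(reversed(path)))
```

Any float (slack) belongs to tasks off this path; here the invented "test plan" task could slip without delaying the project.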

SCM(security) Team 5

The Security Basics
Fundamentally, there are some basic (potential) security requirements that any system needs to consider. These are:
  • confidentiality: are only those who should be able to read information able to do so?

  • integrity: are only those who should be able to write/change information able to do so? This includes not only limiting access rights for writing, but also protecting against repository corruption (unintentional or malicious). Changesets must be made atomically; if 3 files change in a changeset, either all or none should be committed.


  • availability: is the system available to those who need it? (i.e., is it resistant to denial-of-service attacks?)


  • identification/authentication: does the system safely authenticate its users? If it uses tokens (like passwords), are they protected when stored and while being sent over a network, or are they exposed as cleartext?

  • audit: Are actions recorded?


  • non-repudiation: Can the system "prove" that a certain user/key did an action later? In particular, given an arbitrary line of code, can it prove who was the individual that made that change and when? Can it show all those who approved/accepted it, as a path?


  • self-protection: Does the system protect itself, and can its own data (like timestamps, changesets, other data) be trusted?


  • trusted paths: Can the system make sure that its communication with users is protected?


  • resilience to security algorithm failures: If a given security algorithm fails (such as the hash function or encryption), can the algorithm be easily replaced to protect past and future data? (Added 2005-03-02, after the revelation of serious problems in SHA-1).


  • privacy: Is the system designed so it's not possible to retrieve information that users want to protect? For example, spamming is a serious problem; it may be desirable to NOT record real email addresses, at least in some circumstances. If there is a "secret branch" where security patches are located, try to not store its location in the dataset. This is similar to confidentiality, but you might not even trust an administrator... the notion is to NOT store or depend on data you don't want spread.

An SCM has several assets to protect. It needs to protect "current" versions of software, but it must do much more. It needs to make sure that it can recall any previous version of software, correctly, as well as the audit trail of exactly who made which change and when. In particular, an SCM has to keep the history immutable - once a change is made, it needs to stay recorded. You can undo the change, but the undoing needs to be recorded separately. Very old history may need to be removed and archived, but that's different than simply allowing history to be deleted.

PROJECT PLANNING TEAM 3


Risk Management - Team 6

Risk management can be defined as the culture, processes, and structures that are directed towards the effective management of potential opportunities and adverse effects.

This is a broad definition that can quite rightly apply in nearly all fields of management from financial and human resources management through to environmental management. However in the context of contaminated sites, risk management can be taken to mean the process of gathering information to make informed decisions to minimise the risk of adverse effects to people and the environment.

Risk assessment involves estimating the level of risk – estimating the probability of an event occurring and the magnitude of effects if the event does occur.

Essentially risk assessment lies at the heart of risk management, because it assists in providing the information required to respond to a potential risk.

In a resource management setting, environmental risk assessment may be used to help manage, for example:
  • Natural hazards (flooding, landslides)

  • Water supply and waste water disposal systems

  • Contaminated sites

Human health risk assessment is one form of risk assessment, focusing on assessing the risk to people and communities from hazardous substances or discharge of contaminants.

Ecological risk assessment is another form of risk assessment that can be used to assist management of risks to ecological values.

The focus of risk assessment for contaminated sites is usually human health, as a large proportion of the known potentially contaminated sites are located in urban areas. However, where valued natural environments are present, the focus of ecological risk assessment is on assessing the risks to plants, animals and ecosystem integrity from chemicals present at or discharging from a contaminated site.




As applied to corporate finance, risk management is the technique for measuring, monitoring and controlling the financial or operational risk on a firm's balance sheet. The Basel II framework breaks risks into market risk (price risk), credit risk and operational risk and also specifies methods for calculating capital requirements for each of these components.

There are two types of risk management:
  • Enterprise Risk management

  • Risk management activities as applied to project management



Enterprise Risk Management:
Exposure to natural hazards is only one of the many risks faced by the P&C insurance industry. An insurer’s or reinsurer’s balance sheet supports a broad range of additional risks ranging from non-CAT liabilities to credit, market, and operational risk.

In the more general case, every probable risk can have a pre-formulated plan to deal with its possible consequences (to ensure contingency if the risk becomes a liability).

  • the cost associated with the risk if it arises, estimated by multiplying employee costs per unit time by the estimated time lost (cost impact C, where C = CAR * S, S being the estimated time lost and CAR the cost accrual ratio).

  • the probable increase in time associated with a risk (schedule variance due to risk, Rs, where Rs = P * S and P is the probability that the risk occurs).

  • the probable increase in cost associated with a risk (cost variance due to risk, Rc, where Rc = P * C = P * CAR * S).


Risk in a project or process can be due either to Special Cause Variation or Common Cause Variation and requires appropriate treatment. That is to re-iterate the concern about extremal cases not being equivalent in the list immediately above.
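A small, hedged worked example of the risk-exposure formulas listed above; the probability, time lost, and cost accrual ratio below are invented figures used only to show the arithmetic:

```
# Invented figures for a single project risk, following the formulas above
P = 0.3          # probability that the risk occurs
S = 10.0         # estimated time lost if it occurs (days)
CAR = 800.0      # cost accrual ratio: employee cost per day of lost time

C = CAR * S      # cost impact if the risk occurs
Rs = P * S       # schedule variance due to risk (expected days lost)
Rc = P * C       # cost variance due to risk (expected cost)

print(f"Cost impact C = {C:.0f}, schedule exposure Rs = {Rs:.1f} days, cost exposure Rc = {Rc:.0f}")
```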



Risk management activities as applied to project management:

In project management, risk management includes the following activities:

  • Planning how risk management will be conducted in the particular project. The plan should include risk management tasks, responsibilities, activities and budget.

  • Assigning a risk officer - a team member other than the project manager who is responsible for foreseeing potential project problems. A typical characteristic of a risk officer is healthy skepticism.

  • Maintaining a live project risk database. Each risk should have the following attributes: opening date, title, short description, probability and importance. Optionally a risk may have an assigned person responsible for its resolution and a date by which the risk must be resolved.

  • Creating an anonymous risk reporting channel. Each team member should have the possibility to report risks that he or she foresees in the project.

  • Preparing mitigation plans for risks that are chosen to be mitigated. The purpose of the mitigation plan is to describe how this particular risk will be handled - what will be done, when, by whom, and how, to avoid it or minimize its consequences if it becomes a liability.

  • Summarizing planned and faced risks, the effectiveness of mitigation activities, and the effort spent on risk management.

Software Project Metrics

DEFN: A software metric is a measure of some property of a piece of software or its specifications.

Software metrics are numerical data related to software development. Metrics strongly support software project management activities. They relate to the four functions of management as follows:

1. Planning - Metrics serve as a basis of cost estimating, training planning, resource planning, scheduling, and budgeting.

2. Organizing - Size and schedule metrics influence a project's organization.

3. Controlling - Metrics are used to track and report the status of software development activities and their compliance with plans.

4. Improving - Metrics are used as a tool for process improvement and to identify where improvement efforts should be concentrated and measure the effects of process improvement efforts.

A metric quantifies a characteristic of a process or product. Metrics can be directly observable quantities or can be derived from one or more directly observable quantities. Examples of raw metrics include the number of source lines of code, number of documentation pages, number of staff-hours, number of tests, number of requirements, etc.

Common software metrics include:
1. Source lines of code
2. Cyclomatic complexity
3. Function point analysis
4. Bugs per line of code
5. Code coverage
6. Number of lines of customer requirements
7. Number of classes and interfaces
8. Cohesion
9. Coupling
10. Robert Cecil Martin's software package metrics
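As a hedged illustration of how one raw metric might be collected (the file-scanning approach below is only a sketch, not a recommended measurement tool), a few lines of Python can count physical lines and non-blank, non-comment source lines:

```
import sys

def count_source_lines(path):
    """Count physical lines and non-blank, non-comment lines in a Python source file."""
    physical = logical = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            physical += 1
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                logical += 1
    return physical, logical

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        total, code = count_source_lines(filename)
        print(f"{filename}: {total} physical lines, {code} source lines of code")
```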


Steps in Risk Management - Team 6

Identification

The first step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems. Hence, risk identification can start with the source of problems, or with the problem itself.


  • Source analysis : Risk sources may be internal or external to the system that is the target of risk management. Examples of risk sources are: stakeholders of a project, employees of a company or the weather over an airport.

  • Problem analysis : Risks are related to identified threats. For example: the threat of losing money, the threat of abuse of privacy information or the threat of accidents and casualties. The threats may exist with various entities, most important with shareholders, customers and legislative bodies such as the government.

The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are:


  • Objectives-based risk identification

  • Scenario-based risk identification

  • Taxonomy-based risk identification

  • Common-risk checking

  • Risk charting

Assessment
Once risks have been identified, they must then be assessed as to their potential severity of loss and probability of occurrence. In the assessment process it is critical to make the best educated guesses possible in order to properly prioritize the implementation of the risk management plan.

There have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is:

Rate of occurrence multiplied by the impact of the event equals risk; that is, Risk = Rate of occurrence x Impact of the event.
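As a hedged sketch of this formula in use (the risks, probabilities, and impact figures are invented), each risk in a log can be scored as rate of occurrence times impact and the log sorted to prioritize treatment:

```
# Hypothetical risk log: (risk, probability of occurrence per year, impact in currency units)
risk_log = [
    ("server hardware failure",  0.20, 50_000),
    ("key developer leaves",     0.10, 80_000),
    ("requirements change late", 0.50, 30_000),
]

# Risk = rate of occurrence x impact of the event
scored = [(name, probability * impact) for name, probability, impact in risk_log]
for name, exposure in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"{name}: expected exposure {exposure:,.0f}")
```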


Potential risk treatments
Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:

  • Tolerate
  • Treat
  • Terminate
  • Transfer

Another source, the US Department of Defense's Defense Acquisition University, calls this ACAT, for Accept, Control, Avoid, and Transfer. The ACAT acronym is reminiscent of the term ACAT (for Acquisition Category) used in US Defense industry procurements.


  • Risk avoidance

  • Risk reduction

  • Risk retention
  • Risk transfer

Risk avoidance
Includes not performing an activity that could carry risk. An example would be not buying a property or business in order to not take on the liability that comes with it. Another would be not flying in order not to take the risk that the airplane might be hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits.

Risk reduction
Involves methods that reduce the severity of the loss. Examples include sprinklers designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy.
Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration.

Risk retention
Involves accepting the loss when it occurs. True self insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. War is an example since most property and risks are not insured against war, so the loss attributed by war is retained by the insured. Also any amounts of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great it would hinder the goals of the organization too much.

Risk transfer
Means causing another party to accept the risk, typically by contract or by hedging. Insurance is one type of risk transfer that uses contracts. Other times it may involve contract language that transfers a risk to another party without the payment of an insurance premium. Liability among construction or other contractors is very often transferred this way. On the other hand, taking offsetting positions in derivatives is typically how firms use hedging to financially manage risk.
Some ways of managing risk fall into multiple categories. Risk retention pools are technically retaining the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group up front, but instead losses are assessed to all members of the group.
Outsourcing is another example of risk transfer, where companies outsource IT, BPO, KPO etc. In IT, some companies will outsource only development work, and the product is made at offshore locations whereas business requirements are handled at the onshore/client site. This way, companies can concentrate more on business development rather than on managing a large IT development team.



Software Configuration Management

Software Configuration Management (SCM) is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made.

In other words, SCM is a methodology to control and manage a software development project.

The goals of SCM are generally:

  • Configuration Identification- What code are we working with?
  • Configuration Control- Controlling the release of a product and its changes.
  • Status Accounting- Recording and reporting the status of components.
  • Review- Ensuring completeness and consistency among components.
  • Build Management- Managing the process and tools used for builds.
  • Process Management- Ensuring adherence to the organization's development process.
  • Environment Management- Managing the software and hardware that host our system.
  • Teamwork- Facilitate team interactions related to the process.
  • Defect Tracking- Making sure every defect has traceability back to the source.
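As a hedged sketch of the status-accounting and change-control ideas above (the record fields, statuses, and item names are invented rather than drawn from any particular SCM tool), a configuration item's change history could be tracked and reported like this:

```
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    change_id: str
    description: str
    status: str = "open"          # e.g. open -> approved -> implemented -> verified

@dataclass
class ConfigurationItem:
    name: str
    version: str
    changes: List[ChangeRequest] = field(default_factory=list)

    def report(self):
        # Status accounting: record and report the state of each change against the item
        lines = [f"{self.name} (version {self.version})"]
        for cr in self.changes:
            lines.append(f"  {cr.change_id}: {cr.description} [{cr.status}]")
        return "\n".join(lines)

item = ConfigurationItem("billing-service", "1.2.0")
item.changes.append(ChangeRequest("CR-101", "Fix rounding defect in invoices", "implemented"))
item.changes.append(ChangeRequest("CR-102", "Add euro support"))
print(item.report())
```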

TEAM II-Software Testing


Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, and security, but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification.

In general, software engineers distinguish software faults from software failures. In case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the correctness of the semantics of a computer program. A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of computer software executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.
Software testing may be viewed as a sub-field of Software Quality Assurance (SQA) but typically exists independently (and there may be no SQA areas in some companies). In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the code or to deliver faster.

A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects to write long-lived reliable software, since it is not usually commercially viable to test over the proposed length of time unless this is a relatively short period. A few days or a week would normally be acceptable, but any longer period would usually have to be simulated according to carefully prescribed start and end conditions.

A common practice of software testing is that it is performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and continue it as an ongoing process until the project finishes.

This is highly problematic in terms of controlling changes to software: if faults or failures are found part way into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system, and software testers need to be accurate in knowing that they are testing the correct version, and will need to re-test the part of the software wherein the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change part way through, in which instance previous successful tests may no longer meet the requirement and will need to be re-specified and redone (part of regression testing). Clearly the possibilities for projects being delayed and running over budget are significant.

Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in regression testing suites to ensure that future updates to the software don't repeat any of the known mistakes.

It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is reasonable based on the risk of any given defect contributing to or being confused with further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable and therefore any testing after that point cannot be relied on even if there are no further actual software defects.

PROJECT PLANNING-TEAM3

Project planning

Project planning is part of project management, which relates to the use of schedules such as Gantt charts to plan and subsequently report progress within the project environment.

1. Initially, the project scope is defined and the appropriate methods for completing the project are determined. Following this step, the durations for the various tasks necessary to complete the work are listed and grouped into a work breakdown structure.

2. The logical dependencies between tasks are defined using an activity network diagram that enables identification of the critical path. Float or slack time in the schedule can be calculated using project management software (a small sketch of this calculation follows the list below).

3. Then the necessary resources can be estimated and costs for each activity can be allocated to each resource, giving the total project cost. At this stage, the project plan may be optimized to achieve the appropriate balance between resource usage and project duration to comply with the project objectives. Once established and agreed, the plan becomes what is known as the baseline. Progress will be measured against the baseline throughout the life of the project. Analyzing progress compared to the baseline is known as earned value management.
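As an illustration of step 2, the following sketch performs the forward and backward passes of the critical path method on a small, invented activity network. The tasks, durations, and dependencies are made up for the example; project management software carries out the same calculation to report the critical path and the float (slack) of each task.

```python
# Sketch of a critical-path calculation over a small activity network.
# Tasks, durations (in days), and dependencies are invented for illustration.

tasks = {
    # name: (duration, [predecessors])
    "scope":   (3, []),
    "design":  (5, ["scope"]),
    "build":   (10, ["design"]),
    "test":    (4, ["build"]),
    "docs":    (6, ["design"]),
    "deliver": (1, ["test", "docs"]),
}

# Forward pass: earliest start/finish.
early = {}
for name in tasks:  # assumes the dict is written in a valid topological order
    dur, preds = tasks[name]
    es = max((early[p][1] for p in preds), default=0)
    early[name] = (es, es + dur)

project_end = max(ef for _, ef in early.values())

# Backward pass: latest start/finish, then slack (float) per task.
late = {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    successors = [s for s, (_, preds) in tasks.items() if name in preds]
    lf = min((late[s][0] for s in successors), default=project_end)
    late[name] = (lf - dur, lf)

for name in tasks:
    slack = late[name][0] - early[name][0]
    flag = "  <- critical path" if slack == 0 else ""
    print(f"{name:8s} ES={early[name][0]:2d} LS={late[name][0]:2d} slack={slack}{flag}")
```

Tasks with zero slack form the critical path; any delay to them delays the whole project, which is why the baseline and earned value comparisons focus on them.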



Software Quality Assurance Activities

Product evaluation and process monitoring are the SQA activities that assure the software development and control processes described in the project's Management Plan are correctly carried out and that the project's procedures and standards are followed. Products are monitored for conformance to standards and processes are monitored for conformance to procedures. Audits are a key technique used to perform product evaluation and process monitoring. Review of the Management Plan should ensure that appropriate SQA approval points are built into these processes.
Product evaluation is an SQA activity that assures standards are being followed. Ideally, the first products monitored by SQA should be the project's standards and procedures. SQA assures that clear and achievable standards exist and then evaluates compliance of the software product to the established standards. Product evaluation assures that the software product reflects the requirements of the applicable standard(s) as identified in the Management Plan.
Process monitoring is an SQA activity that ensures that appropriate steps to carry out the process are being followed. SQA monitors processes by comparing the actual steps carried out with those in the documented procedures. The Assurance section of the Management Plan specifies the methods to be used by the SQA process monitoring activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product in depth, comparing them to established procedures and standards. Audits are used to review management, technical, and assurance processes to provide an indication of the quality and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being followed, that required documentation is maintained, and that the developer's status reports accurately reflect the status of the activity. The SQA product is an audit report to management consisting of findings and recommendations to bring the development into conformance with standards and/or procedures.
Team I

Software Quality Assurance

Software Quality Assurance (SQA) is a planned and systematic approach to ensuring that software processes and products conform to established standards, processes, and procedures. The goal of SQA is to improve software quality by appropriately monitoring both the software and the development process to ensure full compliance with the established standards and procedures. The first step in establishing an SQA program is to get top management's agreement on its goals. The organization then needs to identify SQA issues, write an SQA plan, establish standards and SQA functions, implement the SQA plan, and evaluate the SQA program. For SQA to be effective, it must have good people and full management support. A high-quality software product must run correctly and consistently, have few defects (if any), handle abnormal situations gracefully, and require little installation effort. Any defects should not affect the normal use of the software, should not do anything destructive to the system, and should rarely be evident to the users. Before deciding what measures to use, it is essential to consider the objectives of the measurement program. If the measures will be used to manage software development, they should be objective, available in a timely manner, and controllable. On the other hand, if the measures are to support decisions on product acceptance, they must reasonably represent user needs.
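To make the point about objective measures concrete, the short sketch below computes two commonly used figures, defect density and test pass rate, from invented numbers; a real SQA program would take these values from the defect tracker and the test reports.

```python
# Illustrative calculation of two common, objective SQA measures.
# All numbers are invented for the example.

defects_found = 42      # defects logged against the release
size_kloc     = 12.5    # thousands of lines of code delivered
tests_run     = 380
tests_passed  = 371

defect_density = defects_found / size_kloc         # defects per KLOC
pass_rate      = tests_passed / tests_run * 100.0  # percentage

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"Test pass rate: {pass_rate:.1f}%")
```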
Team 1

PROJECT PLANNING-TEAM III


Definitions

1. Project Idea

- Someone recognizes that their current system of dealing with information storage, retrieval, and use is inadequate.
- Sometimes the amount of information reaches a "critical mass" at which point updating records becomes ponderous and locating up-to-date records is nearly impossible.
- Individuals or groups often begin to think about implementing a GIS after being exposed to the technology at trade shows or professional meetings and seeing that other organizations have found a "better way".

2. Project Formation and Plan
- Defines the broad plan and sets goals
- Assesses current (pre-GIS) status
- Determines the direction of development
- Identifies potential applications
- Set goal and estimate cost for next step

3. Present System and Functional Requirements Study

- There must be a clear definition of the functions provided by the manual system (or DP system) already in place:
  - inventories maps and reports used
  - inventories maps and reports produced
  - inventories procedures used for the flow of work
  - notes frequency of procedures and operations
- User needs analysis:
  - What do users think of the existing system?
  - What would they like to improve?
  - What new products or procedures would they add?
- The resulting list of functions, along with any new requirements, will define the project scope.
It is pointless to implement a GIS that is not capable of handling all the functions an organization needs. The Functional Requirements Study should describe in detail the products required from the system, the data available for the system, and the functions required to generate the products from the data. It is important that the functional requirements of a project are well understood, so that both management and vendors can assess the suitability of a given product or system to the project.
The Texas Water Commission undertook a Functional Requirements Study for its Geographic Information System which resulted in the generation of a Requirements Definition Report. This document, close to 200 pages long, outlines the current functions of the TWC and defines its GIS relationships with other State agencies.

4. Financial Feasibility Analysis

- Weighs the costs of the current system against the costs of GIS implementation, including pilot studies, hardware and software acquisition, system development, data acquisition, and training
- Set goal and estimate cost for next step

5. Request for Proposal

Request for Proposal (RFP) is a pre-award contracting term; it refers to a document requesting information from hardware and software vendors.
The RFP should clearly outline the functional requirements of a system. It should specify:
- the type of database
- the source of database information
- database functions and procedures
- needed output
It allows vendors to identify which of their systems best suits the needs of a given project. The RFP should allow the vendors to tailor technical solutions to meet a project's functional requirements.
The RFP should not impose technical solutions on a vendor, even if the user has specific technical solutions in mind. The vendors are generally the experts at developing system configurations which satisfy a user's requirements, and the configurations suggested by various vendors will help the user predict the feasibility of successfully implementing a project. Vendors will suggest the size of the CPU, number of input and output devices, and software configurations.
The vendors' responses should include a detailed list of technical solutions, timetables for system implementation, and costs.

6. System Selection and Benchmarking
Every system has pluses and minuses, and marketing literature generally plays up the pluses and plays down the minuses.
Benchmarking is a process that minimizes the risks associated with system selection by testing each system's exact capabilities. A test dataset is run on each system under consideration to determine how well it handles the functional requirements of the project.
The same series of tests should be run on each system and should be designed to test specific capabilities along with general user-friendliness and ease of use (a simple driver for such a test series is sketched after the list of questions below).
Benchmarking is the time to determine the flexibility of each system. For example:
- Can changes be made to the database structure after the initial setup and, if so, how difficult are such changes?
- Can user-defined functions be added to the system?
- Can custom applications be created?
- Is there a programmer's interface for the development of such applications?
- Does the system have adequate security features built in?
- What are the networking options?
- Are response times significantly different during periods of high and low loading?
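A minimal sketch of such a benchmark driver is shown below. The operation names and placeholder functions stand in for real test procedures rather than any actual GIS API; the point is that the identical, timed series of operations is run on every candidate system so the results are directly comparable.

```python
# Minimal sketch of a benchmark driver: the same operations are timed on
# every candidate system so results can be compared directly.

import time

def run_benchmark(system_name, operations):
    """Time each named operation and print the elapsed seconds."""
    results = {}
    for op_name, op_func in operations:
        start = time.perf_counter()
        op_func()  # e.g. load the test dataset, run an overlay, plot a map
        results[op_name] = time.perf_counter() - start
    print(f"--- {system_name} ---")
    for op_name, seconds in results.items():
        print(f"{op_name:20s} {seconds:8.3f} s")
    return results

# Placeholder operations standing in for the real test procedures.
def load_test_dataset():  time.sleep(0.10)
def polygon_overlay():    time.sleep(0.20)
def produce_map_plot():   time.sleep(0.15)

operations = [
    ("load test dataset", load_test_dataset),
    ("polygon overlay",   polygon_overlay),
    ("produce map plot",  produce_map_plot),
]

# The same series would be repeated for each system under consideration.
run_benchmark("Candidate system A", operations)
```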

7. Risk Analysis

Possible risks:
- hardware or software may not live up to expectations
- the cost of implementing the GIS may be higher than the cost of the current system
- Set goal and estimate cost for next step
System Development and Detail Design

After a specific system has been chosen, each of the following is defined during system development:
- database specifications
- graphics specifications
- report specifications
- interfaces
- calculations
- specialized applications

8. Prototype

A prototype is a working model of the planned system. It differs from a pilot project in that it may have the look or feel of the final system, but does not always incorporate all of its functional requirements.
- Set goal and estimate cost for next step

9. Conversion

The process of converting existing information in the form of paper (or mylar or film) maps, tables, drawings, or other records into digital form for use in a computer database.

10. Pilot Project

There are two possible formats: demonstration and prototype.
The pilot project is the last step before full implementation of the system. Reasons for a pilot project:
- demonstrate capabilities
- verify estimates of costs and benefits
- test alternatives
- provide a means of communicating project potential to users and management
- test procedures for training, production, management, and maintenance
- evaluate hardware and software
- Set goal and estimate cost for next step

Full Implementation


Software Quality Assurance

A. Concepts and Definitions:

Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of and adherence to software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition life cycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done in relation to the applicable standards.

B. Standards and Procedures:

Establishing standards and procedures for software development is critical, since these provide the framework from which the software evolves. Standards are the established criteria to which the software products are compared. Procedures are the established criteria to which the development and control processes are compared. Standards and procedures establish the prescribed methods for developing software; the SQA role is to ensure their existence and adequacy. Proper documentation of standards and procedures is necessary since the SQA activities of process monitoring, product evaluation, and auditing rely upon unequivocal definitions to measure project compliance.

C. Types of Standards:

(i) Documentation Standards:
Documentation Standards specify form and content for planning, control, and product documentation and provide consistency throughout a project.

(ii) Design Standards:
Design Standards specify the form and content of the design product. They provide rules and methods for translating the software requirements into the software design and for representing it in the design documentation.

(iii) Code Standards:
Code Standards specify the language in which the code is to be written and define any restrictions on use of language features.
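As a small illustration, the sketch below checks two invented code-standard rules, a maximum line length and a ban on bare "except:" clauses, across a set of source files. The rules themselves are assumptions made for the example; in practice most projects would enforce such restrictions with an existing lint tool.

```python
# Illustrative check of two invented code-standard rules: a maximum line
# length and a ban on a restricted language feature (bare "except:").

import sys

MAX_LINE_LENGTH = 80

def check_file(path):
    """Return a list of (line number, message) violations for one file."""
    violations = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if len(line.rstrip("\n")) > MAX_LINE_LENGTH:
                violations.append((lineno, "line exceeds 80 characters"))
            if line.strip().startswith("except:"):
                violations.append((lineno, "bare 'except:' is not permitted"))
    return violations

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, message in check_file(path):
            print(f"{path}:{lineno}: {message}")
```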

Team -1