
  • DOMAIN 3

  • INTEGRATED MANUFACTURING SYSTEMS

    These are applications traditionally used in the manufacturing sector to automate common operations. They integrate manufacturing processing from the recording of raw materials, work-in-progress and finished-goods transactions, inventory adjustments, purchases, supplier management, sales, accounts payable, accounts receivable, goods received, inspection, invoices, cost accounting and maintenance. An Integrated Manufacturing System (IMS), or Manufacturing Resource Planning (MRP), is a typical module of most ERP packages such as SAP, Oracle, J.D. Edwards and Navision, and it is usually integrated into modern CRM and SCM systems.

  • Some examples of IMS

    Bill of Materials (BOM)
    Bill of Materials Processing (BOMP)
    Manufacturing Resources Planning (MRP)
    Computer-Aided Design (CAD)
    Computer-Integrated Manufacturing (CIM)
    Manufacturing Accounting and Production (MAP)

  • What is Lean Manufacturing?

    It focuses on the ELIMINATION of WASTE (non-value-added activities) through CONTINUOUS IMPROVEMENT. It is not about eliminating people; it is about expanding capacity by reducing costs and shortening the cycle time between order and ship date.

  • Bill of materials

    A bill of materials (BOM) is a list of the raw materials, sub-assemblies, intermediate assemblies, sub-components, components and parts, and the quantities of each, needed to manufacture an end item (final product). It may be used for communication between manufacturing partners, or confined to a single manufacturing plant.
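
    For illustration only (all item names and quantities below are invented), a BOM can be modelled as a mapping from each assembly to its components and per-parent quantities; "exploding" the BOM then rolls up the total quantity of each base part needed for one end item. A minimal sketch:

```python
# Illustrative sketch (hypothetical items): a multilevel BOM and a simple
# "explosion" that rolls up the total quantity of each base part per end item.
from collections import defaultdict

# Each assembly maps to a list of (component, quantity-per-parent).
bom = {
    "bicycle":        [("frame", 1), ("wheel_assembly", 2), ("seat", 1)],
    "wheel_assembly": [("rim", 1), ("spoke", 36), ("tyre", 1)],
}

def explode(item, qty=1, totals=None):
    """Recursively accumulate component quantities for `qty` units of `item`."""
    if totals is None:
        totals = defaultdict(int)
    for component, per_parent in bom.get(item, []):
        if component in bom:                      # sub-assembly: recurse
            explode(component, qty * per_parent, totals)
        else:                                     # leaf part: accumulate
            totals[component] += qty * per_parent
    return totals

print(dict(explode("bicycle")))
# e.g. {'frame': 1, 'rim': 2, 'spoke': 72, 'tyre': 2, 'seat': 1}
```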

  • Manufacturing Resources Planning

    Manufacturing resource planning, also known as MRP II, is a method for the effective planning of a manufacturer's resources. MRP II is composed of several linked functions, such as business planning, sales and operations planning, capacity requirements planning, and all related support systems.

    The earliest form of manufacturing resource planning was known as material requirements planning (MRP). Material requirements planning (MRP) is a computer-based, time-phased system for planning and controlling the production and inventory function of a firm, from the purchase of materials to the shipment of finished goods.
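
    As a rough illustration of the time-phased logic described above (figures are hypothetical; lot sizing and lead-time offsetting are ignored), MRP netting compares each period's gross requirement against projected on-hand stock and scheduled receipts and raises a planned order for any shortfall:

```python
# Minimal sketch (not from the source) of MRP's time-phased netting:
# net requirement = gross requirement - (projected on-hand + scheduled receipts).
def mrp_netting(gross_requirements, on_hand, scheduled_receipts):
    planned_orders = []
    available = on_hand
    for period, gross in enumerate(gross_requirements, start=1):
        available += scheduled_receipts.get(period, 0)
        net = max(0, gross - available)           # shortfall to be covered
        planned_orders.append((period, net))
        available = max(0, available - gross)     # carry remaining stock forward
    return planned_orders

# Hypothetical weekly demand, opening stock and one scheduled receipt in week 2.
print(mrp_netting([40, 30, 50, 20], on_hand=60, scheduled_receipts={2: 25}))
# [(1, 0), (2, 0), (3, 35), (4, 20)]
```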

  • Computer Integrated Manufacturing

    Computer-integrated manufacturing (CIM) is a method of manufacturing in which the entire production process is controlled by computer. CIM joins the traditionally separate process methods through a computer. This integration allows the processes to exchange information with each other and to initiate actions.

  • The heart of computer-integrated manufacturing is CAD/CAM. Computer-aided design (CAD) and computer-aided manufacturing (CAM) systems are essential to reducing cycle times in the organization. CAD/CAM is a high-technology tool that integrates design and manufacturing. CAD techniques make use of group technology to create similar geometries for quick retrieval. Electronic files replace drawing rooms.

  • Continuity planning

    What is a BCP? It is a plan that gives a recovery team the information it needs to: recover from a disaster, continue the business operations and return to normal operations.

    RTO VS RPO

  • ELECTRONIC FUNDS TRANSFER

    The underlying goal of the automated environment is to wring out the costs inherent in business processes. Electronic funds transfer (EFT) generally refers to the transfer of money from one account to another without any physical exchange of money. EFT allows parties to move money between accounts, replacing traditional check writing and cash collection procedures.

  • In the settlement between parties, EFT transactions usually function via an internal bank transfer from one party's account to another or via a clearinghouse network. Usually, transactions originate from a computer at one institution (location) and are transmitted to a computer at another institution (location) with the monetary amount recorded in the respective organization's account.

  • Because of its sensitivity, access security and authorisation are important controls. The EFT switch network is also an audit concern. The IS auditor should review backup arrangements for continuity of operations. Central bank requirements should be reviewed for their application to these processes.

  • CONTROLS IN AN EFT ENVIRONMENT

    Because of the potentially high volume of money being exchanged, these systems may be in an extremely high-risk category, and security in an EFT environment becomes extremely critical. Security includes the methods used by the customer to gain access to the system, the communications network and the host or application processing site. Individual consumer access to the EFT system is generally controlled by a plastic card and a PIN; both items are required to initiate a transaction. The IS auditor should review the physical security of unissued plastic cards and the procedures used to generate PINs. Access to commercial EFT systems generally does not require a plastic card, but the IS auditor should ensure that reasonable identification methods are required. The communications network should be designed to provide maximum security. Data encryption is recommended for all transactions.
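
    A minimal sketch (assumed names, not a real EFT system) of two of the controls described above: requiring both card and PIN, never storing the PIN unencrypted, and restricting access after repeated failures:

```python
# Illustrative only: card + PIN check with a salted PIN hash and a lockout counter.
import hashlib, hmac, os

def hash_pin(pin: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

salt = os.urandom(16)
accounts = {  # card number -> salted PIN hash plus failed-attempt counter
    "4000123412341234": {"pin_hash": hash_pin("4821", salt), "failures": 0},
}
MAX_FAILURES = 3

def authorize(card_no: str, pin: str) -> bool:
    record = accounts.get(card_no)
    if record is None or record["failures"] >= MAX_FAILURES:
        return False                          # unknown card or card locked out
    if hmac.compare_digest(record["pin_hash"], hash_pin(pin, salt)):
        record["failures"] = 0
        return True
    record["failures"] += 1                   # restrict access after repeated failures
    return False

print(authorize("4000123412341234", "4821"))  # True: card and correct PIN supplied
print(authorize("4000123412341234", "0000"))  # False: counts toward lockout
```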

  • An EFT switch involved in the network is also an audit concern. The IS auditor should review the contract with the switch and the third-party audit of the switch operations. If a third-party audit has not been performed, the auditor should consider visiting the switch location. At the application processing level, the IS auditor should review the interface between the EFT system and the application systems that process the accounts from which funds are transferred. Availability of funds or adequacy of credit limits should be verified before funds are transferred. Because of the penalties for failure to make a timely transfer, the IS auditor should review backup arrangements or other methods used to ensure continuity of operations. Since EFT reduces the output of paper and consequently reduces normal audit trails, the IS auditor should determine that alternative audit trails are available.

  • INTEGRATED CUSTOMER FILE

    An integrated customer file (ICF) provides details and history about all the business relationships a customer maintains with an organisation. The ICF aids in customer profiling for the purpose of marketing and the tailoring of customized services.

  • OFFICE AUTOMATION

    This basically refers to a variety of electronic devices and techniques that aid in the conduct of business. Good examples are common office packages such as Word, Excel and PowerPoint. A local area network can equally be considered a form of office automation.

  • AUTOMATED TELLER MACHINE

    An ATM is basically a specialized form of point-of-sale terminal. It is designed as an unmanned terminal used by the customers of a financial institution, and it customarily allows a range of banking credit and debit operations. ATMs are usually located in uncontrolled areas to facilitate easy customer access after hours. Controls must be in place for the issuance and delivery of PINs, exception reporting and the restriction of accounts after a small number of unsuccessful attempts, and PINs should never be stored unencrypted. Wait a minute! What is the first step in establishing controls?

  • Recommended internal control guidelines for ATMs; audit of ATMs

    Page ...........?

    Are you waiting? Read!

  • TEASER

    Automated teller machines (ATMs) are a specialized form of point-of-sale terminal which:
    A. allow for cash withdrawal and financial deposits only.
    B. are usually located in populous areas to deter theft or vandalism.
    C. utilize protected telecommunication lines for data transmissions.
    D. must provide high levels of logical and physical security.

  • EXPLANATION

    Automated teller machines (ATMs) are a specialized form of point-of-sale terminal, and their system must provide high levels of logical and physical security for both the customer and the machinery. ATMs allow for a variety of transactions, including cash withdrawal and financial deposits; they are usually located in unattended areas and utilize unprotected telecommunication lines for data transmissions.

  • COOPERATIVE PROCESSING SYSTEMS

    These are systems divided into segments, with different parts running on different, independent computer devices. The system divides the problem into units that are processed in a number of environments and communicates the results among them to produce a solution to the total problem. The system must be designed to minimize, and maintain the integrity of, communication between the component parts.

    Parallel computing: solving a problem with multiple computers or computers made up of multiple processors. It is an umbrella term for a variety of architectures, including symmetric multiprocessing (SMP), clusters of SMP systems, massively parallel processors (MPPs) and grid computing.

    Grid computing: the concurrent application of the processing and data storage resources of many computers in a network to a single problem. It can also be used for load balancing as well as high availability by employing multiple computers (typically personal computers and workstations) that are remote from one another, multiple data storage devices, and redundant network connections. Grid computing requires the use of parallel processing software that can divide a program among as many as several thousand computers and restructure the results into a single solution to the problem. Primarily for security reasons, grid computing is typically restricted to multiple computers within the same enterprise.
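
    A minimal sketch (illustrative only, on a single machine) of the "divide the problem, process the pieces concurrently, recombine the results" idea behind parallel and grid computing, using Python's multiprocessing module:

```python
# Split one computation across several worker processes and combine the results.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)              # work done independently on each piece

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # divide the problem four ways
    with Pool(processes=4) as pool:
        results = pool.map(partial_sum, chunks)   # each worker handles one chunk
    print(sum(results))                           # restructure into a single answer
```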

  • VOICE RESPONSE ORDERING SYSTEMS

    VROS are systems in which the user interacts with the computer over a telephone connection, in response to verbal instructions given by the computer system. Interactive voice response (IVR) systems are good for large call volumes.

  • PURCHASE ACCOUNTING SYSTEM

    This basically refers to a set of integrated systems usually triggered when purchases are made. In a departmental store, for example, a customer purchase triggers the following processes:
    Sales accounting processes
    Accounts receivable processes (if payment is through credit card)
    Cash or bank processes (if payment is through cash)
    Inventory processes
    Purchase accounting processes to initiate replacement of inventory
    Ultimately, the transaction is recorded in the general ledger.

  • 3 basic functions

    Accounts payable processing: recording transactions in the accounts payable records
    Goods received processing: recording details of goods received but not yet invoiced
    Order processing: recording goods ordered but not yet received
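
    For illustration only (ledger names and figures are invented), the three functions above can be sketched as separate record sets, each updated as a purchase moves from order to receipt to invoice:

```python
# Illustrative sketch of order processing, goods-received processing and
# accounts payable processing as three simple record sets.
orders_placed, goods_received, accounts_payable = [], [], []

def place_order(po_no, item, qty):
    orders_placed.append({"po": po_no, "item": item, "qty": qty})      # ordered, not yet received

def receive_goods(po_no, qty):
    goods_received.append({"po": po_no, "qty": qty})                   # received, not yet invoiced

def record_invoice(po_no, amount):
    accounts_payable.append({"po": po_no, "amount": amount})           # invoiced, awaiting payment

place_order("PO-001", "flour", 100)
receive_goods("PO-001", 100)
record_invoice("PO-001", 2500.00)
print(len(orders_placed), len(goods_received), len(accounts_payable))  # 1 1 1
```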

  • IMAGE PROCESSING

    Image processing refers to computer manipulation of images. It is the replacement of paper documents with electronic documents. An imaging system stores, retrieves and processes graphic data, such as pictures, charts and graphs, either in addition to text data or instead of it. Such a system usually requires enormous storage capacity and is, by implication, costly. It includes techniques that identify levels of shade and color that cannot be differentiated by the human eye.

  • ADVANTAGES OF IMAGE PROCESSING

    Merits include:
    Item processing (e.g., signature storage and retrieval)
    Immediate retrieval
    Increased productivity
    Improved control over paper files
    Reduced deterioration due to handling
    Enhanced disaster recovery procedures

  • ISSUES WITH IMAGE PROCESSING

    Risk areas that management should address when installing imaging systems, and that IS auditors should be aware of when reviewing an institution's controls over imaging systems, include:
    Planning: critical issues include converting existing paper storage files, integrating the imaging system into the organization's workflow, and electronic media storage that meets audit and document retention legal requirements.
    Audit: imaging may change or eliminate the traditional controls as well as the checks and balances inherent in paper-based systems.
    Redesign of workflow: institutions generally redesign or reengineer workflow processes to benefit from imaging technology.
    Scanning devices
    Software security: unauthorized access and modifications
    Training

  • TEASER

    Which of the following is NOT an advantage of image processing?
    A. Verifies signatures
    B. Improves service
    C. Relatively inexpensive to use
    D. Reduces deterioration due to handling

  • ARTIFICIAL INTELLIGENCE

    AI is the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. The field was founded on the claim that a central property of human beings, intelligence, can be described so precisely that it can be simulated by a machine.

  • ARTIFICIAL INTELLIGENCE

    Artificial intelligence is the study and application of the following principles:
    Knowledge acquisition and usage;
    Goal generation and achievement;
    Information communication;
    Achievement of collaboration;
    Concept formation;
    Language development.

  • The two main programming languages for AI are LISP and PROLOG.

  • Major Branches of AI

    Perceptive system: a system that approximates the way a human sees, hears and feels objects
    Vision system: captures, stores and manipulates visual images and pictures
    Robotics: mechanical and computer devices that perform tedious tasks with high precision
    Expert system: stores knowledge and makes inferences
    Learning system: the computer changes how it functions or reacts to situations based on feedback
    Natural language processing: computers understand and react to statements and commands made in a natural language, such as English
    Neural network: a computer system that can act like or simulate the functioning of the human brain

  • Artificial intelligence encompasses robotics, vision systems, learning systems, natural language processing, neural networks and expert systems.

  • EXPERT SYSTEMS

    Expert systems are an area of artificial intelligence that perform a specific function or are prevalent in certain industries. This branch of AI allows users to specify certain basic assumptions or formulas, and then uses these assumptions or formulas to analyze events. Based on the information used as input to the system, a conclusion is produced. An expert system is a computer program that simulates the thought process of a human expert to solve complex decision problems in a specific domain.

  • BENEFITS OF EXPERT SYSTEMS

    Capturing the knowledge and experience of individuals before they leave the organisation
    Sharing knowledge and experience in areas where there is limited expertise
    Facilitating consistent and efficient quality decisions
    Enhancing personnel productivity and performance
    Automating highly repetitive tasks (help desk)
    Operating in environments where a human expert is not available (e.g., medical assistance on board a ship, satellites)

  • COMPONENTS OF AN EXPERT SYSTEM

    Database
    Knowledge base (decision trees, rules and semantic nets)
    Inference engine
    Explanation module

    These are called shells when they are not populated with particular data

  • COMPONENTS contd

    The knowledge base represents the key to the system. It contains information or fact patterns associated with a particular subject matter and the rules for interpreting these facts. The knowledge base interfaces with a database to obtain the data needed to analyze a particular problem and derive an expert conclusion.

  • KNOWLEDGE BASE contd

    The information in the knowledge base can be expressed in several ways:
    Decision trees: use questionnaires to lead the user through a series of choices until a conclusion is reached. Flexibility is compromised because the user must answer the questions in an exact manner and sequence.
    Rules: express declarative knowledge through if-then relationships, e.g., if the temperature is over 39°C and the pulse is under 60, then the patient suffers from OMO-LARIA!
    Semantic nets: consist of graphs. They resemble data flow diagrams and make use of an inheritance mechanism to prevent data duplication.

  • INFERENCE ENGINE

    The inference engine is a program that uses the knowledge base and determines the most appropriate outcome based on the information supplied by the user. It seeks information and relationships from the knowledge base and provides answers, predictions and suggestions in the way a human expert would.
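
    A toy sketch (hypothetical rules, reusing the if-then example from the knowledge base slide) of a rule-based knowledge base and a forward-chaining inference engine that applies the rules to user-supplied facts until no new conclusions can be drawn:

```python
# Illustrative only: tiny if-then knowledge base plus forward-chaining inference.
RULES = [
    ({"temperature_over_39", "pulse_under_60"}, "suspect_omo_laria"),
    ({"suspect_omo_laria"},                     "refer_to_specialist"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                        # keep firing rules until nothing new is added
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"temperature_over_39", "pulse_under_60"}))
# {'temperature_over_39', 'pulse_under_60', 'suspect_omo_laria', 'refer_to_specialist'}
```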

  • In addition, an expert system includes the following components:

    Knowledge interface: allows the expert to enter knowledge into the system without the traditional mediation of a software engineer.
    Data interface: enables the expert system to collect details from nonhuman sources, such as measurement instruments in a power plant.

  • The explanation module is user-oriented: it addresses the problem analyzed and presents the expert conclusion reached. This module allows the system to explain its conclusions and its reasoning process. This ability comes from the AND/OR trees created during the production system reasoning process. Expert systems are gaining acceptance and popularity as audit tools, e.g., for operating systems, online software environments, access control products and microcomputer environments.

    These tools can take the form of a series of well-designed questionnaires, or actual software that integrates and reports on system parameters and data sets.

  • Expert Systems in Action

    Medical management
    Telephone network maintenance
    Credit evaluation
    Tax planning
    Detection of insider securities trading
    Detection of common metals
    Mineral exploration
    Irrigation and pest management
    Diagnosis and prediction of mechanical failure
    Class selection for students

  • Stringent change control procedures should be followed, since the basic assumptions and formulas may need to be changed as more expertise is gained. As with other systems, access should be on a need-to-know basis.

    The IS auditor needs to be concerned with the controls relevant to these systems when they are used as an integral part of an organization's business processes or mission-critical functions, and with the level of experience or intelligence used as a basis for developing the software.

  • TEASER

    The use of expert systems:
    A. facilitates consistent and efficient quality decisions.
    B. captures the knowledge and experience of industry experts.
    C. cannot be used by IS auditors since they deal with system-specific controls.
    D. improves system efficiency and effectiveness, not personal productivity and performance.

  • BUSINESS INTELLIGENCE

    Business intelligence is a broad field of IT that encompasses the collection and dissemination of information to assist decision making and assess organizational performance. Business intelligence basically assists in the understanding of a wide range of business questions.

  • BI contd

    Business intelligence (BI) refers to skills, technologies, applications and practices used to help a business acquire a better understanding of its commercial context. Business intelligence may also refer to the collected information itself.

  • Business intelligence (BI) is a set of theories, methodologies, architectures and technologies that transform raw data into meaningful and useful information for business purposes. BI can handle enormous amounts of unstructured data to help identify, develop and otherwise create new opportunities. BI, in simple words, makes interpreting voluminous data friendly. Making use of new opportunities and implementing an effective strategy can provide a competitive market advantage and long-term stability.

  • BUSINESS INTELLIGENCE contd

    Some of the business questions include:
    Process cost, efficiency and quality
    Customer satisfaction with products and services
    Customer profitability
    Staff and business unit achievement of key performance indicators
    Risk management, e.g., by identifying unusual transaction patterns and accumulating incident and loss statistics

  • BUSINESS INTELLIGENCE contd

    Reasons to buy business intelligence:
    Increasing size and complexity of organisations
    Pursuit of competitive advantage
    Legal requirements, e.g., SOX (the Sarbanes-Oxley Act) and the CBN's directive on KYC (know your customer) and customer transactions

    BI vs competitive intelligence

  • BI uses technologies, processes and applications to analyze mostly internal, structured data and business processes, while competitive intelligence gathers, analyzes and disseminates information with a topical focus on company competitors. If understood broadly, business intelligence can include competitive intelligence as a subset.

  • Do you need Business Intelligence?

    Companies continuously create data, whether they store it in flat files, spreadsheets or databases. This data is extremely valuable to your company. It is more than just a record of what was sold yesterday, last week or last month.
    1. It should be used to look at sales trends in order to plan marketing campaigns and to decide what resources to allocate to specific sales teams.
    2. It should be used to analyse market trends to ensure that your products are viable in today's marketplace.
    3. It should be used to plan for future expansion of your business.
    4. It should be used to analyse customer behaviour.

    The bottom line is that your data should be used to maximize revenue and increase profit.

  • BUSINESS INTELLIGENCE contd

    In order to deliver effective BI, a company needs to design and implement a data architecture. A complete data architecture consists of two components:
    The enterprise data flow architecture (EDFA)
    A logical data architecture

  • Data Architecture

    Data architecture, in enterprise architecture, is the design of data for use in defining the target state and the subsequent planning needed to reach that target state. A data architecture describes the data structures used by a business and/or its applications. It includes descriptions of data in storage and data in motion; descriptions of data stores, data groups and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.

  • DATA FLOW ARCHITECTURE

    Presentation/desktop access layer: this is where end users deal directly with information. This layer includes familiar desktop tools such as MS Access, MS Excel and other direct querying tools.
    Data mart layer: this represents a subset of the information contained in the core data warehouse, selected and organized to meet the needs of a particular business unit or business line. It may take the form of a relational database or OLAP (online analytical processing) structures.

  • OLAP

    Short for online analytical processing, OLAP is a category of software tools that provides analysis of data stored in a database. OLAP tools enable users to analyze different dimensions of multidimensional data; for example, they provide time series and trend analysis views. OLAP is often used in data mining.

    Data mining is a class of database applications that look for hidden patterns in a group of data that can be used to predict future behavior. For example, data mining software can help retail companies find customers with common interests. The term is commonly misused to describe software that presents data in new ways. True data mining software doesn't just change the presentation, but actually discovers previously unknown relationships among the data.

  • DATA FLOW ARCHITECTURE contd

    Data feed/data mining and indexing layer: this is otherwise called the data preparation layer. It is concerned with the assembly and preparation of data for loading into data marts. Only pre-sorted and pre-calculated values should be loaded into the data repository, to increase access speed.
    Data warehouse layer: this is where all (or at least the majority) of the data of interest to an organisation is captured and organized to assist reporting and analysis. A properly constituted data warehouse should support three basic forms of inquiry:

  • DATA WAREHOUSE contd

    Drilling up and drilling down: this implies flexibility in data aggregation, e.g., drilling up: sum store sales to get region sales and ultimately national sales; drilling down: break store sales down to computer sales.
    Drilling across: use of common attributes to access a cross-section of information in the warehouse, e.g., sum sales across all product lines by customer and by groups of customers according to any attribute of interest.
    Historical analysis: the warehouse should be capable of holding historical, time-variant data.
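
    A minimal sketch (hypothetical data) of the drilling idea above: the same sales records are aggregated at store, region and national level.

```python
# Illustrative only: rolling the same fact records up to different levels.
from collections import defaultdict

sales = [  # (region, store, product_line, amount)
    ("North", "Store 1", "computers", 1200),
    ("North", "Store 1", "phones",     800),
    ("North", "Store 2", "computers",  500),
    ("South", "Store 3", "computers",  900),
]

def roll_up(records, key_index):
    totals = defaultdict(int)
    for row in records:
        totals[row[key_index]] += row[3]
    return dict(totals)

print(roll_up(sales, 1))             # store-level totals (drilled down)
print(roll_up(sales, 0))             # region-level totals (drilled up)
print(sum(row[3] for row in sales))  # national total
```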

  • DATA FLOW ARCHITECTURE contd

    Data staging and quality layer: this layer is responsible for data copying, transformation into data warehouse format and quality control.
    Data access layer: this layer operates to connect the data staging and quality layer with the data stores in the data source layer.
    Data source layer: this basically depicts data and information sources. It includes:
    operational data: data captured and maintained by an organization's existing systems;
    external data: data provided to an organization by external sources;
    non-operational data: information needed by end users that is not currently maintained in a computer-accessible format.

  • DATA FLOW ARCHITECTURE contd

    Metadata repository layer: this is data about data.
    Warehouse management layer: the function of this layer is the scheduling of the tasks necessary to build and maintain the data warehouse and populate the data marts.
    Application messaging layer: this layer is concerned with transporting information between the various layers.
    Internet/intranet layer: this layer is concerned with basic data communication. It includes browser-based user interfaces and TCP/IP networking.

  • BUSINESS INTELLIGENCE GOVERNANCE

    Governance determines how an organization is controlled and directed. An important part of the governance process involves determining:
    Which BI initiatives to fund;
    What priority to assign to each initiative;
    How to measure their return on investment (ROI).
    In the area of BI funding governance, it is advisable to establish a business/IT advisory team that allows different functional perspectives to be represented. Final funding decisions should rest with a technology steering committee that comprises senior management.

  • GOVERNANCE contd

    Another important part is data governance, which includes:
    Establishing standard definitions of data, business rules and metrics;
    Identifying approved data sources;
    Establishing standards for data reconciliation and balancing.

  • DECISION SUPPORT SYSTEM

    A DSS is an interactive system that provides the user with easy access to decision models and data from a wide range of sources, to support semi-structured decision-making tasks, typically for business purposes. It assists in making decisions through data provided by business intelligence tools. A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization and help people make decisions about problems that may be rapidly changing and not easily specified in advance.

  • DSS contd

    Typical information that a DSS might gather and present would be:
    Comparative sales figures between one week and the next;
    Projected revenue figures based on new product sales assumptions;
    Consequences of different decision alternatives, given past experience in the described context.

  • DSS contd

    Characteristics of a DSS include:
    Aims at solving less structured, underspecified problems that senior managers face;
    Combines the use of models or analytic techniques with traditional data access and retrieval functions;
    Emphasizes flexibility and adaptability to accommodate changes in the environment and in the decision-making approach of the users.
    The degree to which a problem or decision is structured corresponds roughly to the extent to which it can be automated or programmed.

  • DSS IMPLEMENTATION AND USE

    The main challenge is to get the users to accept the use of the DSS. The following are the steps involved in changing user behaviors:
    Unfreezing: altering the forces acting on individuals such that they are distracted sufficiently to change (increasing the pressure for change or reducing the threats to change);
    Moving: this step presents a direction of change and a process of learning new attitudes;
    Refreezing: this step integrates the changed attitudes into the individual's personality.

  • DSS RISK FACTORS

    There are basically eight implementation risk factors:
    Nonexistent or unwilling users;
    Multiple users or implementers;
    Disappearing users, implementers or maintainers;
    Inability to specify purpose or usage patterns in advance;
    Inability to predict and cushion the impact on all parties;
    Lack or loss of support;
    Lack of experience with similar systems;
    Technical problems and cost-effectiveness issues.

  • CUSTOMER RELATIONSHIP MGT

    For competitive reasons, companies are shifting their focus from products to customers. CRM emphasizes the importance of focusing on information relating to:
    Transaction data
    Customer preferences
    Customer purchase patterns
    Customer status
    Contact history
    Demographic information

  • CRM contd

    Customer relationship management (CRM) consists of the processes a company uses to track and organize its contacts with its current and prospective customers. CRM software is used to support these processes; information about customers and customer interactions can be entered, stored and accessed by employees in different company departments. Typical CRM goals are to improve services provided to customers, and to use customer contact information for targeted marketing.

  • CRM contd

    CRM centers all business processes around the customer rather than around marketing, sales or any other function. This business model makes use of telephony, web and database technologies, and enterprise integration technologies. It also extends to other business partners, who can share information, communicate and collaborate with the organization through the seamless integration of web-enabled applications.

  • SUPPLY CHAIN MGT

    This is about linking the business processes between related entities, e.g., the buyer and the seller. The link could cover:
    Managing logistics and the exchange of information;
    The exchange of goods and services between the supplier, consumer, warehouse, wholesale/retail distributors and the manufacturer of goods.

  • SCM contd

    Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of the product and service packages required by end customers. Supply chain management spans all movement and storage of raw materials, work-in-process inventory and finished goods from point of origin to point of consumption (the supply chain).

  • SUPPLY CHAIN MGT

    EDI, which is extensively used in SCM, aids data interchange between business entities. SCM is all about managing the flow of goods, services and information among stakeholders. SCM shifts the focus so that all entities in the supply chain can work collaboratively and in real time, reducing inventory to a great extent. A JIT inventory approach becomes more feasible and the cycle becomes shorter, with the objective of reducing unwanted inventory.

  • INFRASTRUCTURE DEVELOPMENT/ACQUISITION PRACTICES

  • The analysis of the physical architecture, the definition of a new one and the road map needed to move from one to the other are critical tasks for an IT department. Their impact is not only economic but also technological, since they determine many other choices downstream, such as operational procedures, training needs, installation issues and total cost of ownership (TCO). Thus, physical architecture analysis cannot be based solely on price or isolated features; a formal, reasoned choice must be made.

  • INFRASTRUCTURE ACQUISITION PRACTICES

    Factors that might render a legacy system obsolete:
    Deficits in functionality;
    Endangered future reliability;
    Increase in cost;
    Handicapped product development;
    Deficits in information supply;
    Future business requirements not fulfilled;
    Insufficient process support.

  • Goals of migrating a technical architecture to a new one:

    To successfully analyze the existing architecture
    To design a new architecture that takes into account the existing architecture and a company's particular constraints/requirements, such as:
    reduced costs;
    increased functionality;
    minimum impact on daily work;
    security and confidentiality issues;
    progressive migration to the new architecture.

    To write the functional requirements of this new architecture
    To develop a proof of concept based on these functional requirements, in order to:
    characterize price, functionality and performance;
    identify additional requirements that will be used later.

    The resulting requirements will be documents and drawings describing the reference infrastructure that will be used by all projects downstream. The requirements are validated using a proof of concept.

  • PROJECT PHASES OF PHYSICAL ARCHITECTURE ANALYSIS

    1. Review of existing architecture: to start the process, the latest documents about the existing architecture must be reviewed. Participants in the first workshop will be specialists of the ICT department in all areas directly impacted by physical architecture. The output of the first workshop is a list of components of the current infrastructure and the constraints defining the target physical architecture.
    2. Analysis and design: after reviewing the existing architecture, the analysis and design of the actual physical architecture has to be undertaken, adhering to best practices and meeting business requirements.
    3. Draft functional requirements: with the first physical architecture design in hand, the first (draft) version of the functional requirements is composed. This material is the input for the next step and for the vendor selection process.

  • 4. Vendor and product selection: while the draft functional requirements are written, the vendor selection process proceeds in parallel.
    5. Writing functional requirements: after the draft functional requirements are finished and have fed the second part of the project, the functional requirements document is written and introduced at the second architecture workshop, with staff from all affected parties. The results are discussed and a list of the requirements that need to be refined or added is composed. This is the last checkpoint before the sizing and the proof of concept (POC) start, although the planning of the POC starts after the second workshop. With the finished functional requirements, the proof of concept phase begins.

  • Proof of concept: establishing a POC is highly recommended to prove that the selected hardware and software are able to meet all expectations, including security requirements. The deliverable of the POC should be a running prototype, including the associated documents and test protocols describing the tests and their results.

  • HARDWARE ACQUISITION

    Selection of a computer hardware and software environment frequently requires the preparation of specifications for distribution to hardware/software (HW/SW) vendors, and of criteria for evaluating vendor proposals. The specifications are sometimes presented to vendors in the form of an invitation to tender (ITT), also known as a request for proposal (RFP). The specifications must define, as completely as possible, the usage, tasks and requirements for the equipment needed, and must include a description of the environment in which that equipment will be used.

  • Acquisition Steps

    When purchasing (acquiring) hardware and software from a vendor, consideration should be given to the following:
    Testimonials or visits with other users
    Provisions for competitive bidding
    Analysis of bids against requirements
    Comparison of bids against each other using predefined evaluation criteria
    Analysis of the vendor's financial condition
    Analysis of the vendor's capability to provide maintenance and support (including training)
    Review of delivery schedules against requirements
    Analysis of hardware and software upgrade capability
    Analysis of security and control facilities
    Evaluation of performance against requirements
    Review and negotiation of price
    Review of contract terms (including right-to-audit clauses)
    Preparation of a formal written report summarizing the analysis for each of the alternatives and justifying the selection based on benefits and cost

  • The criteria used for evaluating vendor proposals

    Turnaround time: the time the help desk or vendor takes to fix a problem from the moment it is logged
    Response time: the time a system takes to respond to a specific query by the user
    System reaction time: the time taken to log into a system or get connected to a network
    Throughput: the quantity of useful work done by the system per unit of time
    Workload: the capacity to handle the required volume of work, or the volume of work that the vendor's system can handle in a given time frame
    Compatibility: the capability of an existing application to run successfully on the newer system supplied by the vendor
    Capacity: the capability of the newer system to handle a number of simultaneous requests from the network for the application, and the volume of data that it can handle from each of the users
    Utilization: the system availability time vs. the system downtime
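
    As an illustration of "comparing bids against each other using predefined evaluation criteria" (the weights, vendor names and scores below are invented), each criterion can be weighted and each bid scored to produce a ranked comparison:

```python
# Illustrative weighted-scoring sketch for vendor proposal evaluation.
CRITERIA_WEIGHTS = {"response_time": 0.3, "throughput": 0.25,
                    "compatibility": 0.25, "capacity": 0.2}

bids = {  # criterion scores on a 1-5 scale, as rated by the evaluation team
    "Vendor A": {"response_time": 4, "throughput": 3, "compatibility": 5, "capacity": 4},
    "Vendor B": {"response_time": 5, "throughput": 4, "compatibility": 3, "capacity": 3},
}

def weighted_score(scores):
    return sum(CRITERIA_WEIGHTS[criterion] * score for criterion, score in scores.items())

for vendor, scores in bids.items():
    print(vendor, round(weighted_score(scores), 2))
# Vendor A 4.0, Vendor B 3.85 -> Vendor A ranks higher under these weights
```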

  • IS AUDITOR'S CONCERNS

    When performing an audit of this area, the IS auditor should:
    Determine if the acquisition process began with a business need and whether the hardware requirements for this need were considered in the specifications.
    Determine if several vendors were considered and whether the comparison between them was done according to the aforementioned criteria.

  • SYSTEM SOFTWARE ACQUISITION

    It is IS management's responsibility to be aware of HW/SW capabilities, since they may improve business processes and provide expanded application services to businesses and customers in a more effective way. It is important that organizations stay current by applying the latest version or release and updates/patches of system software, to remain protected and competitive. If the version or release is not current, the organization risks being dependent on software that may have known vulnerabilities or may become obsolete and no longer supported by the software vendor. Short- and long-term plans should document IS management's plan for migrating to newer, more efficient and more effective operating systems and related systems software.

  • When selecting new system software, a number of business and technical issues must be considered, including:

    Business functional and technical needs and specifications
    Cost and benefits
    Obsolescence
    Compatibility with existing systems
    Security
    Demands on existing staff
    Training and hiring requirements
    Future growth needs
    Impact on system and network performance
    Open source code vs. proprietary code

  • SYSTEM SOFTWARE IMPLEMENTATION

    System software implementation involves identifying features, configuration options and controls for standard configurations to apply across the organization. Additionally, implementation involves testing the software in a non-production environment and obtaining some form of certification and accreditation to place the approved operating system software into production.

  • SYSTEM SOFTWARE CHANGE CONTROL PROCEDURES

    All test results should be documented, reviewed and approved by technically qualified subject matter experts prior to production use. Change control procedures are designed to ensure that changes are authorized and do not disrupt processing. This requires that IS management and personnel are aware of, and involved in, the system software change process. Change control procedures should ensure that changes impacting the production systems (particularly in relation to the impact of failure during installation) have been assessed appropriately, and that appropriate recovery/backout (rollback) procedures exist, e.g., a configuration management system in place for maintaining prior OS versions or prior states when applying security patches related to high-risk security issues. Change control procedures should also ensure that all appropriate members of the management team who could be affected by the change have been properly informed and have made a prior assessment of the impact of the change in each area.

  • INFORMATION SYSTEM MAINTENANCE PRACTICES

    This primarily refers to the process of managing change to application systems while maintaining the integrity of both the production source and the executable code. Once a system is moved into production, it seldom remains static. Change is an expected event in all systems, regardless of whether they are vendor-developed or internally developed.

  • SYSTEM MAINTENANCE contd

    Reasons for change in normal operations include:
    IT changes;
    Business changes;
    Changes in classification related to either sensitivity or criticality;
    Audits;
    Adverse incidents such as intrusions and viruses.

  • System changes

    Must be:
    appropriate to the needs of the organization;
    appropriately authorized;
    documented;
    thoroughly tested and approved by management.
    The process typically is established in the design phase of the application, when the application system requirements are baselined.

  • CHANGE MANAGEMENT

    Change management begins with authorizing changes to occur. Change requests are initiated by end users as well as by operational staff and system development/maintenance staff. For purchased systems, a vendor may distribute periodic updates, patches or new releases of the software. User and system management should review such changes.

  • CHANGE PROCEDURE

    The user department should decide whether the changes are appropriate for the organization. Change requests should be in a format that is trackable, with a unique serial number. All requests for changes and related information should be maintained by the system maintenance staff as part of the system's documentation.

  • CHANGE PROCEDURE contd

    Maintenance records of all changes should be kept. Maintenance information usually consists of the programmer ID, the time and date of the change, the project or request number associated with the change, and before-and-after images of the lines of code that were changed. In lieu of the manual process of management approving changes before the programmer can submit them into production, management could have automated change control software installed to prevent unauthorized program changes. By doing this, the programmer is no longer responsible for migrating changes into production; the change control software becomes the operator that migrates programmer changes into production based on approval by management. Programmers should not have write, modify or delete access to production data.
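
    A minimal sketch (all field names and values are assumed) of the kind of maintenance record described above: programmer ID, timestamp, request number, before/after images of the changed lines and the management approval:

```python
# Illustrative change/maintenance log entry, not a real change control tool.
import datetime

change_log = []

def record_change(programmer_id, request_no, before_lines, after_lines, approved_by):
    change_log.append({
        "programmer_id": programmer_id,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "request_no": request_no,
        "before": before_lines,          # image of the code before the change
        "after": after_lines,            # image of the code after the change
        "approved_by": approved_by,      # management approval before migration
    })

record_change("PRG042", "CR-2015-117",
              ["rate = 0.05"], ["rate = 0.07"], approved_by="IT Manager")
print(change_log[0]["request_no"], change_log[0]["approved_by"])
```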

  • Deploying changes: after the end user is satisfied with the system test results and the adequacy of the system documentation, approval should be obtained from user management. User approval could be documented on the original change request or in some other fashion (memo or e-mail).

    Documentation: to ensure the effective utilization and future maintenance of a system, it is important that all relevant system documentation be updated. Procedures should be in place to ensure that documentation stored offsite for disaster recovery purposes is also updated.

  • TEASER

    Which of the following is MOST effective in controlling application maintenance?
    A. Informing users of the status of changes
    B. Establishing priorities on program changes
    C. Obtaining user approval of program changes
    D. Requiring documented user specifications for changes

    User approval of program changes ensures that changes are correct as specified by the user and that they are authorized. Therefore, erroneous or unauthorized changes are less likely to occur, minimizing system downtime and errors.

  • CHANGE TESTING

    Changed programs should be tested and certified with the same discipline as newly developed systems, to ensure that the changes perform the functions intended. Effort must be made to verify that:
    existing functionality is not damaged by the change;
    existing performance is not degraded because of the change;
    no security exposures have been created because of the change.

  • EMERGENCY CHANGES

    There may be times when emergency changes must be carried out to resolve system problems and enable critical production jobs to run. This is typically done through special logon IDs (emergency IDs) that grant the programmer/analyst temporary access to the production environment. Special logon IDs possess powerful privileges; their use should be logged and carefully monitored.

  • EMERGENCY CHANGES contd

    Changes done in this fashion are held in a special emergency library, from where they should be moved into the normal production libraries in a controlled manner and through the change management process. Management should ensure that all normal change management controls are retroactively applied even after effecting the emergency change.

  • MIGRATING CHANGES TO PRODUCTION

    Once user management has approved the change, the changed or modified programs can be moved into the production environment. It must be noted that a group independent of the programmer/analyst who maintained the system should move changes into production. Such a group could include computer operations, quality assurance or a change control group designated for that purpose. To ensure that only authorized individuals have the ability to migrate programs into production, access control software could be implemented.

  • Change exposures (unauthorized changes)

    An unauthorized change to application system programs can occur for several reasons:
    The programmer has access to production libraries containing programs and data, including object code.
    The user responsible for the application was not aware of the change (no user signed the maintenance change request approving the start of the work).
    A change request form and procedures were not formally established.
    The appropriate management official did not sign the change form approving the start of the work, etc.

  • CONFIGURATION MANAGEMENT

    Because of the difficulties associated with exercising control over programming maintenance activities, some organizations implement configuration management systems. Configuration management involves procedures throughout the software life cycle (from requirements analysis to maintenance) to identify, define and baseline software items in the system, and thus provide a basis for problem management, change management and release management. The process involves identification of the items that are likely to change (called configuration items). These include things such as programs, documentation and data. Once an item is developed and approved, it is handed over to a configuration management team for safekeeping and assigned a reference number. Once baselined in this way, an item should only be changed through a formal change control process.
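
    A minimal sketch (assumed structure, not a real CM tool) of baselining a configuration item under a reference number with a content hash, so that any later deviation can be detected and routed through formal change control:

```python
# Illustrative only: baseline configuration items and detect unapproved changes.
import hashlib

baseline = {}

def baseline_item(ref_no, name, content):
    baseline[ref_no] = {"name": name,
                        "hash": hashlib.sha256(content.encode()).hexdigest()}

def is_unchanged(ref_no, current_content):
    return baseline[ref_no]["hash"] == hashlib.sha256(current_content.encode()).hexdigest()

baseline_item("CI-001", "payroll_calc.py", "def pay(hours, rate): return hours * rate")
print(is_unchanged("CI-001", "def pay(hours, rate): return hours * rate"))    # True
print(is_unchanged("CI-001", "def pay(hours, rate): return hours * rate + 1"))# False -> raise a change request
```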

  • SYSTEM DEVELOPMENT TOOLS AND PRODUCTIVITY AIDS

    These include:
    Code generators;
    CASE applications;
    Fourth-generation languages.

  • CODE GENERATORS

    Code generators are tools that generate program code based on parameters defined by a systems analyst, or on data/entity flow diagrams developed by the design module of a CASE product. They allow most developers to implement software programs efficiently. An IS auditor should be aware of these nontraditional origins of source code.

  • COMPUTER-AIDED SOFTWARE ENGINEERING (CASE)

    CASE is the use of automated tools to aid in the software development process. Its use may include the application of software tools for software requirements analysis, software design, testing, document generation and other software development activities. CASE products are generally divided into three categories:
    Upper CASE
    Middle CASE
    Lower CASE

  • CASE CATEGORIES

    Upper CASE: these products are used to describe and document application requirements.
    Middle CASE: these products are used for developing the detailed design.
    Lower CASE: these products are involved with the generation of program code and database definitions.

  • TEASER

    Which of the following computer-aided software engineering (CASE) products is used for developing detailed designs, such as screen and report layouts?
    A. Super CASE
    B. Upper CASE
    C. Middle CASE
    D. Lower CASE

  • FOURTH-GENERATION LANGUAGES

    Often abbreviated 4GL, fourth-generation languages are programming languages closer to human languages than typical high-level programming languages. Most 4GLs are used to access databases. For example, a typical 4GL command is FIND ALL RECORDS WHERE NAME IS "SMITH".
    The other four generations of computer languages are:
    first generation: machine language;
    second generation: assembly language;
    third generation: high-level programming languages, such as C, C++ and Java;
    fifth generation: languages used for artificial intelligence and neural networks.
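
    For illustration only (the table and data are hypothetical), the declarative 4GL-style request quoted above can be contrasted with the equivalent procedural (3GL-style) loop, here using SQL through Python's standard sqlite3 module:

```python
# Declarative vs. procedural: the same query expressed two ways.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (name TEXT, city TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [("SMITH", "Lagos"), ("JONES", "Abuja"), ("SMITH", "Accra")])

# Declarative: state WHAT is wanted -- FIND ALL RECORDS WHERE NAME IS "SMITH"
print(conn.execute("SELECT * FROM records WHERE name = 'SMITH'").fetchall())

# Procedural: spell out HOW to get it, row by row
matches = []
for name, city in conn.execute("SELECT * FROM records"):
    if name == "SMITH":
        matches.append((name, city))
print(matches)
```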

  • An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers. Programs written in high-level languages are translated into assembly language or machine language by a compiler. Assembly language programs are translated into machine language by a program called an assembler. Machine languages are the only languages understood by computers; while easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers.

  • A 4GL is identified by its characteristics and does not have a standard definition. Characteristics include:

    Non-procedural: most 4GLs do not obey the procedural paradigm of continuous statement execution and subroutine call and control instructions. Instead, they are event-driven and make extensive use of object-oriented programming concepts such as objects, properties and methods.
    Environmental independence (portability): many 4GLs are portable across computer architectures, operating systems and telecommunications monitors.
    Software facilities: the ability to design or paint retrieval screen formats, develop training routines and help screens, and produce graphical output.

  • Programmer workbench concept: the programmer has access through the terminal to easy filing facilities, temporary storage, text editing and operating system commands (an IDE).
    Simple language subset: a simple language subset that can be used by less-skilled users in an information centre.

    Care should be taken when using 4GLs. Unlike traditional languages, 4GLs can lack the lower-level detail commands necessary to perform certain types of data-intensive or online operations. These operations are usually required when developing major applications. For this reason, the use of 4GLs as development languages should be weighed carefully against the traditional languages already discussed.

  • 4GL classifications:

    Query and report generators: these specialized languages can extract and produce reports (audit software). Recently, more powerful languages have been produced that can access database records, produce complex online outputs and be developed in an almost natural language.
    Embedded database 4GLs: these depend on self-contained database management systems. This characteristic often makes them more user-friendly, but it may also lead to applications that are not well integrated with other production applications. Examples include FOCUS, RAMIS II and NOMAD 2.
    Relational database 4GLs: these high-level language products are usually an optional feature of a vendor's DBMS product line. They allow the applications developer to make better use of the DBMS product, but they often are not end-user oriented. Examples include SQL+, MANTIS and NATURAL.
    Application generators: these development tools generate lower-level programming languages (3GLs) such as COBOL and C. The application can be further tailored and customized. Data processing development personnel, not end users, use application generators.

  • VERY IMPORTANT!

    Wait a minute! What is the most common demerit of fourth-generation languages?

    They can lack the lower-level detail commands necessary for certain data-intensive or online operations/calculations.

  • BUSINESS PROCESS REENGINEERING

    A business process can be seen as a set of interrelated work activities characterized by specific inputs and value-added tasks that produce specific customer-focused outputs. Business processes consist of horizontal work flows that cut across several departments or functions. BPR is the process of responding to competitive and economic pressures, and to customer demands, in order to survive in the current business environment. This is usually done by automating system processes so that there are fewer manual interventions and manual controls. BPR achieved with the help of implementing an ERP system is often referred to as package-enabled reengineering (PER).

  • BPR contd

    The benefits of BPR are usually experienced where the reengineered process appropriately suits the business needs. BPR has increased in popularity as a method for achieving the goal of cost savings through streamlining operations and gaining the advantages of centralization within the same process.

  • BPR STAGES

    Define the areas to be reviewed;
    Develop a project plan;
    Gain an understanding of the process under review;
    Redesign and streamline the process;
    Implement and monitor the new process;
    Establish a continuous improvement process.

  • BPR contd

    The newly designed business processes inevitably involve changes in the ways of doing business, and could impact the philosophy, finances and personnel of the organization, its business partners and its customers. Throughout the change process, the change management team must be sensitive to the organization's culture, structure, direction and components of change. They must also be able to predict and/or anticipate issues and problems, and offer appropriate resolutions that will accelerate the change process.

  • BPR contd

    A major concern in business process reengineering is that key controls may be reengineered out of a business process. The IS auditor should identify the existing controls and evaluate the impact of removing them. If the controls are key preventive controls, the IS auditor must ensure that management is aware of the removal of the control and that management is willing to accept the potential material risk of not having that preventive control.

  • BENCHMARKING

    This is about improving business processes. Benchmarking is defined as a continuous, systematic process for evaluating the products, services and work processes of organizations recognized as representing best practices, for the purpose of organizational improvement.

  • BENCHMARKING PROCESS

    Plan; Research; Observe; Analyze; Adapt; Improve.

  • BENCHMARKING PROCESS

    Plan: critical processes are identified for the benchmarking exercise. The benchmarking team should understand the kind of data needed.
    Research: the benchmarking team should collect data about its own processes before collecting data about others. Benchmarking partners are identified through media sources.
    Observe: the next step is to collect data and visit the benchmarking partners. There should be an agreement with the partner organization, a data collection plan and methods to facilitate proper observation.

  • BENCHMARKING PROCESS contd

    Analyze: the data collected so far are analyzed and interpreted for the purpose of identifying gaps between the organization's and the partners' processes. Converting key findings into new operational goals is the goal of this stage.
    Adapt: the results of the process are adapted to the organization's processes. This involves translating the findings into core principles.
    Continuous improvement: this is the key focus in a benchmarking exercise.

  • BPR Audit and Evaluation

    When reviewing an organization's business process change (reengineering) efforts, IS auditors must determine whether:
    The organization's change efforts are consistent with the overall culture and strategic plan of the organization;
    The reengineering team is making an effort to minimize any negative impact the change might have on the organization's staff;
    The BPR team has documented lessons to be learned after the completion of the BPR/process change project.
    The IS auditor would also provide a statement of assurance or conclusion with respect to the objectives of the audit.

  • ISO 9126

    ISO 9126 is an international standard for assessing the quality of software products. This standard provides the definition of the characteristics, and the associated quality evaluation process, to be used when specifying the requirements for, and evaluating the quality of, software products throughout their life cycle. The attributes evaluated are:
    Functionality;
    Reliability;
    Usability;
    Efficiency;
    Maintainability;
    Portability.

  • ISO 9126

    Functionality: the existence of a set of functions and their specified properties;
    Reliability: the capability of the software to maintain its level of performance under stated conditions for a stated period of time;
    Usability: the effort needed to use the software and the individual assessment of such use;
    Efficiency: the amount of resources needed by the software to maintain a given level of performance;
    Maintainability: the effort needed to make modifications (think cohesion and coupling!);
    Portability: the ability of the software to be transferred from one environment to another.

  • TEASER: Functionality is a characteristic associated with evaluating the quality of software products throughout their life cycle, and is best described as the set of attributes that bear on:
    A. The existence of a set of functions and their specified properties
    B. The ability of the software to be transferred from one environment to another
    C. The capability of the software to maintain its level of performance under stated conditions
    D. The relationship between the level of performance of the software and the amount of resources used

  • TEASER: Various standards have emerged to assist IS organizations in achieving an operational environment that is predictable, measurable and repeatable. The standard that provides the definition of the characteristics and associated quality evaluation process to be used when specifying the requirements for, and evaluating the quality of, software products throughout their life cycle is:
    A. ISO 9001  B. ISO 9002  C. ISO 9126  D. ISO 9003
    Explanation: ISO 9126 is the standard that focuses on the end result of good software processes, i.e., the quality of the actual software product. ISO 9001 contains guidelines about design, development, production, installation and servicing. ISO 9002 contains guidelines about production, installation and servicing, and ISO 9003 contains guidelines about final inspection and testing.

  • CAPABILITY MATURITY MODEL INTEGRATION (CMMI): CMM was adopted for software; other models were developed for disciplines such as systems engineering. CMMI was conceived as a means of combining the various models into a set of integrated models. CMMI is a means of improving processes and rules, and offers practices in the form of activities and tasks.

  • CMMI (contd.): CMMI is less directly aligned with the waterfall/SDLC/traditional approach but aligns directly with contemporary software development practices such as iterative development. CMMI is useful for evaluating the management of a computer centre, the management of the development function, and for implementing and measuring the IT change management process.

  • ISO/IEC 15504 (SPICE): This standard internationally standardises maturity models. It is a series of documents that provide guidance on process improvement, benchmarking and assessment, and it includes detailed guidance that can be leveraged to create enterprise best practices. (See pages 211-212.)

  • APPLICATION CONTROLS
    Application controls can be manual or programmed. Their objective is to ensure the completeness, accuracy and validity of the entries made into a system through both manual and programmed processing. Application controls are controls over input, processing and output aimed at ensuring that:
    - only complete, accurate and valid data are entered and updated in a computer system;
    - processing accomplishes the correct task;
    - processing results meet expectations.

  • The IS auditor's tasks
    - Identifying the significant application components and the flow of transactions through the system
    - Gaining a detailed understanding of the application by reviewing the available documentation and interviewing appropriate personnel
    - Identifying the application control strengths, and evaluating the impact of the control weaknesses
    - Developing a testing strategy
    - Testing the controls to ensure their functionality and effectiveness by applying appropriate audit procedures
    - Evaluating the control environment by analyzing the test results and other audit evidence to determine that control objectives were achieved

  • INPUT/ORIGINATING CONTROLS: These ensure that every transaction to be processed is received, processed and recorded accurately and completely, that only valid and authorized information is input, and that these transactions are processed only once. A system receiving the output of another system as its input/origination must therefore in turn apply edit checks, validations and access controls to those data.

  • INPUT AUTHORIZATION
    Input authorization helps ensure that only authorized data are entered into the computer system for processing by applications. It can be performed online; a computer-generated report listing items requiring manual authorization may also be produced. Types of authorization include:
    - Signatures on batch forms or source documents
    - Online access controls: ensure that only authorized individuals may access data or perform sensitive functions
    - Unique passwords: necessary to ensure that access authorization cannot be compromised through use of another individual's authorized data access
    - Terminal or client workstation identification: used to limit input to specific terminals or workstations as well as to individuals
    - Source documents: a well-designed source document increases the speed and accuracy with which data can be recorded, facilitates reference checking, and helps control work flow

  • Ideally, source documents should be pre-printed forms to provide consistency, accuracy and legibility. Source documents should include standard headings, titles, notes and instructions. Source document layouts should:
    - Emphasize ease of use and readability
    - Group similar fields together to facilitate input
    - Provide predetermined input codes to reduce errors
    - Contain appropriate cross-reference numbers or a comparable identifier to facilitate research and tracing
    - Use boxes to prevent field-size errors
    - Include an appropriate area for management to document authorization

    All source documents should be appropriately controlled. Procedures should be established to ensure that all source documents have been input and taken into account. Prenumbering source documents facilitates this control.

  • BATCH CONTROLS AND BALANCING
    Batch controls group input transactions to provide control totals. They include:
    - Total monetary amount: verification that the total monetary value of items processed equals the total monetary value of the batch documents
    - Total items: verification that the total number of items included on each document in the batch agrees with the total number of items processed
    - Total documents: verification that the total number of documents in the batch equals the total number of documents processed
    - Hash totals: verification that a total taken over a non-value field (e.g., account numbers) in the batch agrees with the total calculated by the system
    (A short sketch of computing these totals follows.)
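    As an illustration, here is a minimal Python sketch of how the four batch control totals might be computed; the field names (`amount`, `account_no`, `item_count`) and the sample batch are hypothetical, not taken from any particular system.

```python
# Minimal sketch of computing batch control totals (hypothetical field names).
batch = [
    {"doc_no": "INV-001", "account_no": 4711, "item_count": 3, "amount": 150.00},
    {"doc_no": "INV-002", "account_no": 4712, "item_count": 1, "amount": 75.50},
    {"doc_no": "INV-003", "account_no": 4711, "item_count": 2, "amount": 20.00},
]

total_monetary_amount = sum(d["amount"] for d in batch)      # total monetary amount
total_items           = sum(d["item_count"] for d in batch)  # total items
total_documents       = len(batch)                           # total documents
hash_total            = sum(d["account_no"] for d in batch)  # hash total over a non-value field

# The same four figures recorded on the batch header are later compared with
# the totals the system calculates after processing.
print(total_monetary_amount, total_items, total_documents, hash_total)
```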

  • Batch Balancing
    Batch balancing can be performed through manual or automated reconciliation. It ensures that all documents are included in a batch, all batches are submitted for processing and all batches are accepted by the computer. It includes:
    - Batch registers: these enable manual recording of batch totals and subsequent comparison with system-reported totals
    - Control accounts: performed through an initial edit file to determine batch totals; the data are then processed to the master file, and a reconciliation is performed between the totals processed during the initial edit and the master file
    - Computer agreement: computer agreement with batch totals is performed through the input of batch header details that record the batch totals; the system compares these with its calculated totals, either accepting or rejecting the batch
    If errors occur, review the possible error-reporting actions on page 214.

  • INPUT CONTROL TECHNIQUES
    - Reconciliation of data
    - Error correction procedures
    - Anticipation
    - Documentation
    - Transaction log
    - Transmittal log
    - Cancellation of source documents
    (See page 212.)

  • Data Validation and Editing Procedures: This is the process of ensuring that input data are validated and edited as close to the time and point of origination as possible. If input procedures allow supervisor overrides of data validation and editing, automatic logging should occur, and a manager who did not initiate the override should review the log. Above all, note that data validation and edit procedures are PREVENTIVE controls, applied before data are processed.

  • DATA VALIDATION AND EDIT CONTROLS
    1. Check digit
    2. Completeness check
    3. Limit check
    4. Logical relationship check
    5. Sequence check
    6. Range check
    7. Reasonableness check
    8. Duplicate check
    9. Validity check
    10. Table look-ups
    11. Existence check
    12. Key verification
    (A short sketch of a few of these checks follows the list.)
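    Below is a minimal Python sketch of three of these checks (check digit, completeness check, range check). The use of the Luhn mod-10 algorithm and the sample values are illustrative assumptions; real systems may use different check-digit schemes and limits.

```python
# A minimal sketch of three common edit checks; the number format, sample
# values and limits are hypothetical.

def mod10_check_digit_is_valid(number: str) -> bool:
    """Check digit test using the Luhn (mod-10) scheme common on account/card numbers."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 2:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def completeness_check(value: str) -> bool:
    """Field must contain data, not blanks or zeros."""
    return value.strip() != "" and value.strip("0 ") != ""

def range_check(value: float, low: float, high: float) -> bool:
    """Value must fall within a predetermined range."""
    return low <= value <= high

print(mod10_check_digit_is_valid("79927398713"))  # True -- a valid Luhn number
print(completeness_check("000"))                  # False -- zeros only
print(range_check(42.0, 0.0, 100.0))              # True
```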

  • PROCESSING CONTROLS
    Processing procedures and controls ensure the reliability of application program processing. Processing control techniques include:
    - Manual recalculation: a sample of transactions may be recalculated manually to ensure that processing is accomplishing the anticipated task
    - Editing: an edit check is a program instruction or subroutine that tests the accuracy, completeness and validity of data; it may be used to control input or later processing of data
    - Run-to-run totals: run-to-run totals provide the ability to verify data values through the stages of application processing (see the sketch after this list)
    - Programmed controls: software can be used to detect and initiate corrective action for errors in data and processing

  • Processing control techniques (contd.)
    - Reasonableness verification of calculated amounts: application programs can verify the reasonableness of calculated amounts against predetermined criteria
    - Limit checks on calculated amounts: an edit check can provide assurance, through the use of predetermined limits, that amounts have been keyed or calculated correctly
    - Reconciliation of file totals: reconciliations may be performed through the use of a manually maintained account, a file control record or an independent control file
    - Exception reports: an exception report is generated by a program that identifies transactions or data that appear to be incorrect; these items may be outside a predetermined range or may not conform to specified criteria
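    As a sketch of the run-to-run totals idea, the record count and amount total carried out of one processing run must agree with the totals the next run recomputes on its input. The stage names and fields below are hypothetical.

```python
# Minimal sketch of a run-to-run total check between two processing stages.

def stage_totals(records):
    """Control totals for a stage: record count and amount total."""
    return len(records), round(sum(r["amount"] for r in records), 2)

validated = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 25.5}]
count_out, amount_out = stage_totals(validated)    # totals written at the end of the edit run

updated = list(validated)                          # records passed on to the update run
count_in, amount_in = stage_totals(updated)        # totals recomputed at the start of the update run

assert (count_out, amount_out) == (count_in, amount_in), "run-to-run totals do not agree"
print("run-to-run totals agree:", count_in, amount_in)
```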

  • TEASER: The editing/validation of data entered at a remote site would be performed MOST effectively at the:
    A. central processing site after running the application system
    B. central processing site during the running of the application system
    C. remote processing site after transmission of the data to the central processing site
    D. remote processing site prior to transmission of the data to the central processing site

  • Data File Control Procedures
    Data files, or indeed database tables, generally fall into four categories:
    - System control parameters: the entries in these files change the workings of the system and may alter controls exercised by the system
    - Standing data: these master files include data, such as supplier/customer names and addresses, that do not change frequently and are referred to during processing
    - Master data/balance data: running balances and totals that are updated by transactions; these should not be capable of adjustment except under strict approval and review controls
    - Transaction files: these are controlled using validation checks, control totals, exception reports, etc.

  • METHODS OF DATA FILE CONTROL
    1. File updating and maintenance authorization
    2. Transaction log
    3. Pre-recorded input
    4. Parity checking
    5. Before and after image reporting
    6. Maintenance error reporting and handling
    7. Source documentation retention
    8. Data file security
    9. One-for-one checking
    10. Version usage
    11. Internal and external labeling

  • TEASER: As updates to an online order entry system are processed, the updates are recorded on a transaction tape and a hard copy transaction log. At the end of the day, the order entry files are backed up on tape. During the backup procedure, a drive malfunctions and the order entry files are lost. Which of the following is necessary to restore these files?

    A. The previous day's backup file and the current transaction tape
    B. The previous day's transaction file and the current transaction tape
    C. The current transaction tape and the current hard copy transaction log
    D. The current hard copy transaction log and the previous day's transaction file

  • OUTPUT CONTROLS
    Output controls provide assurance that the data delivered to users will be presented, formatted and delivered in a consistent and secure manner. They include:
    - Logging and storage of sensitive and critical forms
    - Computer generation of critical and sensitive forms
    - Report distribution
    - Balancing and reconciling
    - Output error handling
    - Output report retention
    - Verification of receipt of reports
    (See page 214.)

  • DATA INTEGRITY TESTING: This is a set of substantive tests that examines the accuracy, completeness, consistency and authorization of data presently held in the system. Data integrity tests will indicate failures in input or processing controls. The integrity of accumulated data in a file can be checked against authorized source documentation; when this is done, it is common to check only a portion of the file at a time. Since the whole file is regularly checked in cycles, the control technique is often referred to as cyclical checking.
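    A minimal sketch of the cyclical checking idea follows, assuming a hypothetical master file and a four-cycle schedule: only a slice of the file is verified against authorized source values on each cycle, so the whole file is covered over the full set of cycles.

```python
# Sketch of cyclical checking: one quarter of the file is verified per cycle.
master_file = [{"rec": i, "amount": float(i) * 10} for i in range(1, 101)]  # 100 records (hypothetical)
source_docs = {r["rec"]: r["amount"] for r in master_file}                  # authorized source values
CYCLES = 4                                                                  # whole file checked every 4 cycles

def records_for_cycle(cycle_no):
    """Select the slice of the master file due for checking this cycle."""
    return [r for r in master_file if r["rec"] % CYCLES == cycle_no % CYCLES]

def check_cycle(cycle_no):
    """Compare this cycle's slice against the authorized source values."""
    return [r["rec"] for r in records_for_cycle(cycle_no)
            if source_docs.get(r["rec"]) != r["amount"]]

print(len(records_for_cycle(0)), "records checked this cycle; mismatches:", check_cycle(0))
```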

  • TYPES OF DATA INTEGRITY TEST
    Relational integrity tests: these are performed at the data element and record-based levels and usually involve calculating and verifying various calculated fields, such as control totals. Relational integrity is enforced through data validation routines built into the application, by defining input condition constraints and data characteristics at table definition in the database itself, or by a combination of both.

  • DATA INTEGRITY TESTS (contd.)
    Referential integrity tests: these examine the existence relationships between entities in a database that need to be maintained by the DBMS. Referential integrity is required for maintaining inter-relation integrity in the relational data model. Whenever two or more relations are related through referential constraints (primary and foreign keys), it is necessary that references be kept consistent in the event of insertions, deletions and updates to these relations. (A short sketch follows.)
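    The sketch below shows referential integrity being enforced, using SQLite as a stand-in DBMS; the customer/orders tables and values are hypothetical. With the foreign key constraint enforced, a customer row cannot be deleted while a live order still references it, which is also the point of the teaser that follows.

```python
# Sketch of referential integrity with a foreign key (hypothetical tables).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")               # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE customer (customer_no INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE orders (
                 order_no    INTEGER PRIMARY KEY,
                 customer_no INTEGER NOT NULL REFERENCES customer(customer_no))""")
con.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
con.execute("INSERT INTO orders   VALUES (100, 1)")   # live order referencing customer 1
con.commit()

try:
    con.execute("DELETE FROM customer WHERE customer_no = 1")
except sqlite3.IntegrityError as e:
    print("Deletion blocked by referential integrity:", e)
```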

  • TEASER: In a relational database with referential integrity, the use of which of the following keys would prevent deletion of a row from a customer table as long as the customer number of that row is stored with live orders on the orders table?
    A. Foreign key
    B. Primary key
    C. Secondary key
    D. Public key

  • EXPLANATION: In a relational database with referential integrity, the use of foreign keys would prevent events such as primary key changes and record deletions that would result in orphaned relations within the database. It should not be possible to delete a row from a customer table when the customer number (primary key) of that row is stored with live orders on the orders table (the foreign key to the customer table). A primary key works within one table, so it cannot provide or ensure referential integrity by itself. Secondary keys that are not foreign keys are not subject to referential integrity checks. A public key is related to encryption and is not linked in any way to referential integrity.

  • TEASER: Which of the following controls would provide the GREATEST assurance of database integrity?
    A. Audit log procedures
    B. Table link/reference checks
    C. Query/table access time checks
    D. Rollback and rollforward database features

  • DATA INTEGRITY TESTS (contd.)
    Domain integrity tests: used to confirm whether data validation and edit controls and procedures are working appropriately, and to confirm that data fall within their correct domain.

  • DATA INTEGRITY IN ONLINE TRANSACTION PROCESSING SYSTEMS
    In multi-user transaction systems, it is necessary to manage parallel user access to stored data, typically controlled by a DBMS. Of particular importance are four online data integrity requirements, known as the ACID principle:
    - Atomicity: from a user perspective, a transaction is either completed in its entirety (i.e., all relevant database tables are updated) or not at all; if an error or interruption occurs, all changes made up to that point are backed out
    - Consistency: all integrity conditions in the database are maintained with each transaction, taking the database from one consistent state into another consistent state
    - Isolation: each transaction is isolated from other transactions, and hence each transaction only accesses data that are part of a consistent database state
    - Durability: if a transaction has been reported back to the user as complete, the resulting changes to the database survive subsequent hardware or software failures
    (A short sketch of atomicity follows this list.)
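    The sketch below illustrates atomicity with SQLite standing in for the DBMS; the account table, the transfer amounts and the CHECK constraint are hypothetical. The second update violates the constraint, so the change already made by the first update is backed out.

```python
# Sketch of atomicity: either both legs of a transfer are posted or neither is.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL NOT NULL CHECK (balance >= 0))")
con.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100.0), (2, 50.0)])
con.commit()

try:
    with con:                                   # one transaction: commit on success, rollback on error
        con.execute("UPDATE account SET balance = balance + 500 WHERE id = 2")  # succeeds
        con.execute("UPDATE account SET balance = balance - 500 WHERE id = 1")  # violates CHECK -> error
except sqlite3.IntegrityError:
    pass                                        # all changes made so far are backed out

print(con.execute("SELECT id, balance FROM account").fetchall())  # [(1, 100.0), (2, 50.0)]
```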

  • TESTING APPLICATION SYSTEMS: Work through the full list of application system testing techniques on page 213, and also review the last paragraph after Application System Testing.

  • CONTINUOUS ONLINE AUDITING: Continuous online auditing is becoming increasingly important in today's e-business world, because it provides a method for the IS auditor to collect evidence on system reliability while normal processing takes place.

    The continuous audit approach cuts down on needless paperwork and leads to the conduct of an essentially paperless audit.

    In this sense, an IS auditor can report directly through the microcomputer on significant errors or other irregularities that may require immediate management attention.

  • TYPES OF ONLINE AUDITING TECHNIQUES
    - SCARF/EAM
    - Snapshots
    - Audit hooks
    - ITF
    - Continuous and intermittent simulation (CIS)
    These techniques are discussed below; it is useful to keep their relative complexity in mind, with SCARF/EAM being the most complex and audit hooks being comparatively simple.

  • BROAD CLASSIFICATIONS
    Broadly, any concurrent audit technique falls within one of the following:
    - Those that can be used to evaluate application systems with test/live data during normal production processing runs; an example is the integrated test facility (ITF)
    - Those that can be used to select transactions for audit review during normal production processing runs; examples are snapshots and extended records
    - Those that can be used to trace or map the changing states of application systems during normal production processing runs; examples are the system control audit review file (SCARF) and continuous and intermittent simulation (CIS)

  • Integrated Test Facility (ITF) In this technique, dummy entities are set up and included in an auditee's production files. The IS auditor can make the system either process live transactions or test transactions during regular processing runs, and have these transactions update the records of the dummy entity. The operator enters the test transactions simultaneously with the live transactions that are entered for processing. The auditor then compares the output with the data that have been independently calculated to verify the correctness of the computer processed data.
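    The comparison step of ITF can be sketched as follows; the dummy entity identifier, the test transactions and the "system output" values are hypothetical and stand in for what the production run would actually report.

```python
# Minimal sketch of the ITF comparison step: results the system produced for the
# dummy entity are compared with independently calculated expected results.

DUMMY_ENTITY = "TEST-999"                     # hypothetical dummy entity in the production files

system_output = {                             # balances the production run reported for the dummy entity
    ("TEST-999", "T1"): 105.0,
    ("TEST-999", "T2"): 210.0,
}

test_transactions = [                         # the auditor's test data, with independently calculated results
    {"entity": DUMMY_ENTITY, "txn": "T1", "input": 100.0, "expected": 100.0 * 1.05},
    {"entity": DUMMY_ENTITY, "txn": "T2", "input": 200.0, "expected": 200.0 * 1.05},
]

for t in test_transactions:
    actual = system_output[(t["entity"], t["txn"])]
    status = "OK" if abs(actual - t["expected"]) < 0.005 else "DISCREPANCY"
    print(t["txn"], status)
```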

  • [Diagram: ITF — live data and tagged test data flow through the application system into a database that contains the dummy entity]

  • Audit hooks: This technique involves embedding hooks in application systems to function as red flags and to induce IS auditors to act before an error or irregularity gets out of hand.

  • SNAPSHOT: For application systems that are large or complex, tracing the different execution paths through the system can be difficult. If auditors wish to perform transaction walkthroughs, therefore, they could face a difficult or impossible task. A simple solution to the problem is to use the computer to assist with performing transaction walkthroughs.

  • SNAPSHOT (contd.): The snapshot technique involves having software take pictures of a transaction as it flows through an application system. Typically, auditors embed the software in the application system at those points where they deem material processing occurs. The embedded software then captures images of the transaction as it progresses through these processing points, taking a before-image and an after-image of the transaction and of the transformation that has occurred.
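    A minimal sketch of the snapshot idea follows; the processing steps (a validation step and a discount calculation) and the logged fields are hypothetical, and the "embedded software" is represented by a simple wrapper that records before- and after-images at each designated point.

```python
# Sketch of snapshot points writing before- and after-images to an audit log.
import copy

snapshot_log = []

def snapshot(point_id, step):
    """Wrap a processing step so it logs before/after images of the transaction."""
    def wrapped(txn):
        before = copy.deepcopy(txn)
        result = step(txn)
        snapshot_log.append({"point": point_id, "before": before, "after": copy.deepcopy(result)})
        return result
    return wrapped

def validate(txn):
    txn["valid"] = txn["amount"] > 0
    return txn

def apply_discount(txn):
    txn["amount"] = round(txn["amount"] * 0.9, 2)
    return txn

validate_sp = snapshot(1, validate)        # snapshot point 1
discount_sp = snapshot(2, apply_discount)  # snapshot point 2

txn = {"txn_id": "T1", "amount": 200.0}
discount_sp(validate_sp(txn))

for entry in snapshot_log:
    print(entry["point"], entry["before"], "->", entry["after"])
```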

  • [Diagram: Snapshot — a validation program (snapshot points 1-3), an update program (snapshot points 4-8) and a report program each write snapshot reports as the transaction moves through the input/error files, the sorted transaction file, the master files and the management report]

  • EXTENDED RECORD TECHNIQUE: This is a modification of the snapshot technique. Instead of having the software write one record for each snapshot point, auditors can have it construct a single record built up from the images captured at each snapshot point. Extended records have the merit of collecting all the snapshot data related to a transaction in one place, thereby facilitating audit evaluation work.

  • [Diagram: Extended record — before- and after-images from snapshot points 1 through n assembled into a single record.] The snapshot and extended record techniques can be used in conjunction with the ITF technique to provide an extensive audit trail.

  • SCARF: This is the most complex of all the concurrent auditing techniques. It involves embedding audit software modules within a host application to provide continuous monitoring of the system's transactions. These audit modules are placed at predetermined points to gather information about transactions or events. The information collected is written onto a special file, the SCARF master file.

  • SCARF (contd.): In many ways, the SCARF technique is like the snapshot/extended record technique; indeed, the SCARF embedded software can be used to capture snapshots and to create extended records. It must, however, be noted that the SCARF technique uses a more complex reporting system than the snapshot and extended record techniques.
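    The following is a minimal sketch of a SCARF-style embedded audit routine: during the normal update run, each transaction is screened against predetermined audit criteria and any exception is written to a separate SCARF file. The file name, threshold and fields are hypothetical.

```python
# Sketch of an embedded audit routine writing exceptions to a SCARF file.
import json

SCARF_FILE = "scarf_master_file.jsonl"   # hypothetical SCARF master file
AUDIT_LIMIT = 10_000.0                   # transactions above this amount are of audit interest

def embedded_audit_routine(txn):
    """Record transactions meeting the audit criteria on the SCARF file."""
    if txn["amount"] > AUDIT_LIMIT or txn.get("override"):
        with open(SCARF_FILE, "a") as f:
            f.write(json.dumps(txn) + "\n")

def update_run(transactions):
    for txn in transactions:
        embedded_audit_routine(txn)      # audit module embedded in the host application
        # ... the normal master-file update would happen here ...

update_run([
    {"txn_id": "T1", "amount": 2_500.0},
    {"txn_id": "T2", "amount": 15_000.0},                 # captured: above the audit limit
    {"txn_id": "T3", "amount": 300.0, "override": True},  # captured: supervisor override
])
```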

  • [Diagram: SCARF — an update program containing embedded audit routines processes transactions against the input master file, produces the update report and output master file, and writes exceptions to the SCARF file, which a SCARF reporting system turns into audit reports]

  • CONTINUOUS AND INTERMITTENT SIMULATION (CIS): This is a variation of SCARF that can be used whenever application systems use a DBMS. Whereas SCARF requires embedding audit modules within an application to trap exceptions, CIS uses the DBMS to trap these exceptions. This way, the application system is left intact: when the application system invokes the services provided by the DBMS, the DBMS in turn indicates to CIS that a service is required.

  • [Diagram: CIS — the application passes transactions to the DBMS via working storage; CIS, sitting alongside the DBMS, examines selected transactions against the database and writes discrepancies to an exception log]

  • CIS TECHNIQUE: CIS then determines whether it wants to examine the activities to be carried out by the DBMS on behalf of the application. The DBMS provides CIS with all the data required by the application system to process the selected transaction, and CIS processes the transaction as well. In other words, CIS replicates the application system's processing logic. Every update to the database that arises from processing the selected transaction is checked by CIS to determine whether discrepancies exist between its results and those of the application system.
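    The comparison at the heart of CIS can be sketched as below; the interest calculation, the tolerance and the function names are hypothetical, and the "application" and "simulator" are represented by two small functions that should produce the same result for a selected transaction.

```python
# Sketch of the CIS comparison: the simulator re-applies the application's
# processing logic and flags any discrepancy with the database update.

def application_update(balance, txn):          # what the production application does (hypothetical)
    return round(balance + txn["amount"] * 1.05, 2)

def cis_replicated_logic(balance, txn):        # CIS's independent replica of that logic
    return round(balance + txn["amount"] * 1.05, 2)

def cis_check(balance_before, txn, balance_after_db):
    """Compare the CIS result with the update the application actually produced."""
    expected = cis_replicated_logic(balance_before, txn)
    if abs(expected - balance_after_db) > 0.005:
        return {"txn": txn["id"], "expected": expected, "actual": balance_after_db}
    return None

txn = {"id": "T42", "amount": 100.0}
new_balance = application_update(1_000.0, txn)
discrepancy = cis_check(1_000.0, txn, new_balance)
print(discrepancy or "no discrepancy for this transaction")
```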

  • TEASER: Which of the following online auditing techniques is MOST effective for the early detection of errors or irregularities?
    A. Embedded audit module
    B. Integrated test facility
    C. Snapshots
    D. Audit hooks

  • EXPLANATION: The audit hook technique involves embedding code in application systems for the examination of selected transactions. This helps the IS auditor to act before an error or an irregularity gets out of hand. An embedded audit module involves embedding specially written software in the organization's host application system so that application systems are monitored on a selective basis. An integrated test facility is used when it is not practical to use test data, and snapshots are used when an audit trail is required.

  • TEASER: Which of the following audit tools is MOST useful to an IS auditor when an audit trail is required?

    A. Integrated test facility (ITF) B. Continuous and intermittent simulation (CIS) C. Audit hooks D. Snapshots

  • TEASER: Which of the following would BEST ensure the proper updating of critical fields in a master record?
    A. Field checks
    B. Control totals
    C. Reasonableness checks
    D. Before and after maintenance report
    The before and after maintenance report is the best answer because a visual review provides the most positive verification that updating was proper.

  • TEASER: An IS auditor reviewing an organization's data file control procedures finds that transactions are applied to the most current files, while restart procedures use earlier versions. The IS auditor should recommend the implementation of:
    A. source documentation retention
    B. data file security
    C. version usage control
    D. one-for-one checking

  • TEASER: Which of the following types of data validation and editing are used to determine if a field contains data, and not zeros or blanks?
    A. Check digit
    B. Existence check
    C. Completeness check
    D. Reasonableness check

  • TEASER: Edit controls are considered to be:
    A. preventive controls
    B. detective controls
    C. corrective controls
    D. compensating controls

  • TEASER: Which of the following provides the ability to verify data values through the stages of application processing?
    A. Programmed controls
    B. Run-to-run totals
    C. Limit checks on calculated amounts
    D. Exception reports

  • TEASER: Which of the following is intended to reduce the amount of lost or duplicated input?
    A. Hash totals
    B. Check digits
    C. Echo checks
    D. Transaction codes
    Hash totaling involves totaling specified fields in a series of transactions or records. If later checks do not produce the same number, then records have either been lost, entered or transmitted incorrectly, or duplicated.

  • TEASER: Which of the following is NOT an objective of application controls?
    A. Detection of the cause of exposure
    B. Analysis of the cause of exposure
    C. Correction of the cause of exposure
    D. Prevention of the cause of exposure
    Controls are usually classified in three categories: preventive, corrective or detective. No control is gained by a routine that merely analyzes an exposure.

  • TEASER: Procedures for controls over processing include:
    A. hash totals
    B. reasonableness checks
    C. online access controls
    D. before and after image reporting

    Reasonableness checks are a form of processing control that can be used to ensure that data conform to predetermined criteria. Before and after image reporting is essentially a control over data files that makes it possible to trace the impact transactions have on computer records. Online access controls prevent unauthorized access to the system and data. Hash totals are a form of batch control, used to verify that a total taken over a predetermined numeric field for all documents in a batch agrees with the total the system calculates for the documents processed.

  • TEASER: Parity bits are a control used to validate:
    A. Data accuracy
    B. Data completeness
    C. Data authentication
    D. Data source

  • TEASER: Which of the following BEST describes an integrated test facility?
    A. A technique that enables the IS auditor to test a computer application for the purpose of verifying correct processing
    B. The utilization of hardware and/or software to review and test the functioning of a computer system
    C. A method of using special programming options to permit the printout of the path through a computer program taken to process a specific transaction
    D. A procedure for tagging and extending transactions and master records that are used by an IS auditor for tests

  • ANSWER: The correct answer is A: a technique that enables the IS auditor to test a computer application for the purpose of verifying correct processing. Explanation: Answer A best describes an integrated test facility, which is a specialized computer-assisted audit process that allows an IS auditor to test an application on a continuous basis. Answer B is an example of a system control audit review file; answers C and D are examples of snapshots.
