fundamentals of transaction systems - part 4: purity emerges from impurity (practical makes perfect)

4-1 Valverde Computing The Fundamentals of Transaction Systems Part 4: Purity emerges from Impurity (Practical makes perfect) C.S. Johnson <[email protected]> http://ValverdeComputing.Com http://ValverdeComputing.Ning.Com The Open Source/ Systems Mainframe Architecture

Upload: valverde-computing

Post on 14-Jul-2015


TRANSCRIPT

4-1

Valverde Computing

The Fundamentals of Transaction Systems Part 4: Purity emerges from Impurity (Practical makes perfect)

C.S. Johnson <[email protected]> http://ValverdeComputing.Com http://ValverdeComputing.Ning.Com

The Open Source/Systems Mainframe Architecture

4-2

21. Openness (Glasnost) Open systems, open source, free software

What is Openness? Does allowing a Linux software partition to co-exist and share data with your database partition make your DBMS open source? No.

Does open-sourcing part of your DBMS code, where the dependent magic is actually in your legacy operating system or special hardware, make your DBMS open source? No.

There is no substitute for the real thing !!! Gorbachev said one of the two things needed to reduce the corruption at the top and moderate the abuse of administrative power was Glasnost (openness): openness will lead to lasting change

4-3

21. Openness (Glasnost) Open systems, open source, free software

Open systems run open source and free software packages with, at most, minor changes

Open source means you can see and (mostly) reuse the source of that software: but can you compile it and build it, and do you have all the pieces to run it?

Free software means ‘free’ as in ‘freedom’, not as in ‘free beer’ (Richard Stallman): there are Four Freedoms, according to him:

1. The freedom to use the software for any purpose
2. The freedom to change the software to suit your needs
3. The freedom to share the software with your friends and neighbors
4. The freedom to share the changes you make

http://www.fsf.org/licensing/licenses/quick-guide-gplv3.html

4-4

21. Openness (Glasnost) Open systems, open source, free software

When a program offers users all of these freedoms, we call it free software.

The open systems that FOSS (free and open source software) stacks can execute on vary widely:

LAMP – Linux, Apache, MySQL, PHP
WAPR – Windows, Apache, PostGres, Ruby

Who is really open? Although MS Office and MS SQL Server are legacy (legacy = proprietary), Windows is a legacy open system

IBM’s Linux partition on MVS is open source, because it runs on legacy hardware (not free software)

4-5

21. Openness (Glasnost) Open systems, open source, free software

Who is really open? (continued)

HP Nonstop’s Unix interface is a legacy open system on top of NSKernel

HP’s OpenVMS, Tru64 Unix and their Enterprise Unix on Superdomes (and the modern name-changed versions of that) are legacy open systems

Oracle is legacy software that runs on open systems

Linux (all versions) is free software, which has evolved into a legacy-like project

MySQL is open source, because if you use it in certain ways, you need to pay for it (it’s not free software)

PostGres is fairly free software (BSD license, not GNU GPL), but the Greenplum fork of it is legacy software (which could not happen under the GPLv2 or GPLv3 license)

4-6

21. Openness (Glasnost) Open systems, open source, free software

Why is openness a good thing? Because LEGACY IS SLAVERY !!! (and not just for customers)

Large software systems have four levels of discipline, which are involved in their development:

1 Manufacturing and Logistics: unit, integration and stress testing, version control, releases, deployment, maintainability and supportability

2 Engineering: requirements, architecture, high level and detailed design, formats, protocols, standards, best practices and patents

4-7

21. Openness (Glasnost) Open systems, open source, free software

Four levels of discipline (continued):

3 Research (Science and Mathematics): algorithms, academic and industry research and development, papers, conferences

4 Art and Culture: flow, composition, beauty, elegance, community and social values, astonishing and desirable function and performance … the things that make a team or community want to work on your architecture

When you are in the legacy trap, it’s very hard for a development community to deal with any discipline beyond level 1: manufacturing and logistics

4-8

21. Openness (Glasnost) Open systems, open source, free software

At Nonstop TMF (the transaction subsystem), a project called Mother-May-I was conceived before I arrived, designed and implemented during my 16 years there, and had not been released when I left (I don’t know if it has been released, even 7 years later), even though it sped up distributed transactions on busy systems by an order of magnitude

Legacy software projects get stuck in the various legacy release pipelines because:

Hardware projects touch multiple releases, have complex and cyclic dependency graphs (worse than trees) and get prioritized because quarterly revenue is dependent on them

4-9

21. Openness (Glasnost) Open systems, open source, free software

Legacy software projects stall because (continued):

Other big software projects have more political juice behind them than yours

Serious bugs impacting critical or visible accounts (stock exchanges, etc.) steamroll everything else out of the way

Complex, distributed, asynchronous (racy code) products like transaction systems have complex unit tests, complex and lengthy integration tests, and tend to break stress integration tests at the end of the pipeline: breaking the end of a release can shove your product out of the release vehicle and destabilize that vehicle anyway (your old version needs the same extensive testing cycle) … people start to really hate you (Nonstop TMF on Gremlin)

4-10

21. Openness (Glasnost) Open systems, open source, free software

Legacy software projects stall because (continued):

Getting shoved out of a release vehicle, or having that vehicle made obsolete, causes version control nightmares if your software is also involved in the high priority release, even in tiny ways

If your project has any dependencies or is not isolated to a subsystem, it will be far more likely to get bumped over and over

All the serious and desirable changes (engineering, research, or art and culture) to the way products work involve non-isolated dependencies

Standard reasons (and others) why openness is good:

More eyeballs on everything: requirements, architecture, design, code, testing, schedules, etc.

4-11

21. Openness (Glasnost) Open systems, open source, free software

Standard reasons (and others) why openness is good (continued):

FOSS architecture paradigms are more widely understood generally than legacy paradigms … but then you can’t charge so much for those big customer seminars and special treatment

Anyone can run your tests, and customers can expedite the development process this way, for their favorite development thread

For critical customers, you can architect, design and test FOSS software in place, on their nickel: once it works satisfactorily the customer can begin using it, or they can do all of that without your involvement, either giving you back the source to slip in, or forking the code, if they choose to

4-12

21. Openness (Glasnost) Open systems, open source, free software

Standard reasons (and others) why openness is good (continued):

This is much better than the decade-long waits for the wish list to get shorter (it never shrinks for the lesser customers; in a legacy shop, big customers are inserted at the top of the wish list)

Finally, the customers, especially the institutional critical computing customers at the high end who are bleeding from legacy mainframe costs and cannot survive without those fundamentals … those customers are begging for relief from the legacy trap

4-13

21. Openness (Glasnost) Open systems, open source, free software

When is the time to switch to FOSS for critical systems?

A Gedanken Experiment (Einstein’s armchair visualization): If the world economy were to fragment into chaos and tumult, which software systems would remain viable as large institutions started to fail?

Open source and especially free software (FOSS) is widely disseminated, mirrored and understood

X86 systems are also widely maintainable and understood

Even if slow, x86 processors and boards could be manufactured, remanufactured and recycled - even in rudimentary and primitive circumstances, but only the oldest versions of Windows would run on them

4-14

21. Openness (Glasnost) Open systems, open source, free software

Gedanken Experiment (continued): You don’t need to see a total breakdown in society to see the benefits of FOSS: it is more survivable in partial or sector breakdowns (or meltdowns), or even in the kinds of fragmentation of society that we are seeing the possibility of right now (2009)

So, the time to switch critical systems to FOSS is now, and as soon as possible: eventually, the legacy mainframe space will be filled with a completely open source and open system mainframe technology that has no legacy in it anywhere … so why not an optimal RDBMS, and why not now?

4-15

21. Openness (Glasnost) Open systems, open source, free software

How to switch to FOSS for the critical mainframe-style systems?

All the hardware is there, and all the FOSS but for a reliable clustered message system, a transaction system that does reliable clustered and wide commit, and an S2PL database that is not just a web database: the optimal RDBMS as described in this presentation

Implementing all of that will impinge upon four areas of intellectual property …

1 Copyrights protect the form and not the content, and you can’t copyright the phone book (ordered lists), so APIs cannot be copyrighted: and APIs are the only thing we want from legacy software anyway

4-16

21. Openness (Glasnost) Open systems, open source, free software

Four intellectual property areas …

2 Patents protect the content: utility patents last 17 years from grant (20 years from filing for newer patents); suing FOSS licensees can run a software firm afoul of any FOSS they need to operate (GPL, Stallman); the FOSS infringer is likely a large corporate or institutional customer … and the patents the optimal RDBMS needs are Nonstop clustering patents that have expired, and the rest can be gotten around anyway

3 Trademarks protect the names: a law firm in Boston trademarked ‘Linux’ and caused Torvalds some trouble; Stallman says not to bother (let ‘em sue); I say don’t name it until you have the cash to trademark it (call it an ‘optimal RDBMS’ instead of giving it a name)

4-17

21. Openness (Glasnost) Open systems, open source, free software

Four intellectual property areas …

4 Trade Secrets are not protected except by keeping the grey hairs employed, which nobody bothers with, so the Nonstop people are spread all over the map and have the collective knowledge to implement an optimal RDBMS in FOSS on open systems, now

4-18

22. Restructuring (Perestroika) Online application and schema maintenance

Restructuring (Perestroika) was used by Gorbachev to dismantle the Soviet economic system, which led to Yeltsin and the breakaway of the republics: we are seeking a successful online restructuring of database schema and applications, which should allow users to break away from the grip of maintenance outages bringing down what might otherwise be continuously available transaction systems

Without restructuring support from the optimal RDBMS, applications evolve through punctuated equilibrium (Eldredge & Gould), where little revolutions occur each time there is a notable change in database schema: the apps are brought down, the database is changed, and each app is brought back up and changed until it works again, and then all the queries are fixed until they work satisfactorily again

4-19

22. Restructuring (Perestroika) Online application and schema maintenance

This has an abysmal effect on the users: either they keep using the old system during this time and a huge reload of the database has to be done in the middle of the night to switch over (risky business), or applications are blocked one at a time as they are changed over (scheduling nightmare): this leads to user hatred of any change

Restructuring support in an optimal RDBMS would minimize the havoc in database schema changes by adding baseline versioning support for schema changes, and fallback and migration support in the catalog, SQL query language and the application languages and interfaces through the use of a specialized framework

4-20

22. Restructuring (Perestroika) Online application and schema maintenance

Some of these methods are already in the Nonstop and IBM DB2 products, and some constitute a new framework which would be applied to an optimal RDBMS to allow a baseline-fallback-migration approach to solve the online application and schema maintenance problem in relational databases

4-21

22. Restructuring (Perestroika) Online application and schema maintenance

The purpose of the online application and schema maintenance framework (OASMF) is to make scripts, query plans, applications and the database execute tolerably well, both during and after schema changes that would normally break execution:

1. On the side, against a snapshot copy of the database, to successively improve on the changes until the database works like it used to, and then

2. Online, against a continuously operating database, to seamlessly stage the introduction of the schema and application changes in such a way as to allow a fallback if the migration has a problem online

4-22

22. Restructuring (Perestroika) Online application and schema maintenance

The framework has to deal with keeping things running through the following schema (DDL) changes …

Create Table:

row-level access should not see any new rows from this table in existing query plans and apps (of course)

new foreign key references should only be added once they are satisfied with data in the new table (of course)

Drop Table:

row-level access to the table in existing query plans and apps would result in the empty set

column-level access to the table in existing query plans and apps would receive a null

foreign key references to the table would need to be dropped at the time of the drop table (of course)
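The Drop Table behavior above, where existing query plans still run and get the empty set, can be sketched with a shadow view. This is a hypothetical sketch using sqlite3 purely as a stand-in; the rename-and-view trick is illustrative, not the actual mechanism of the optimal RDBMS:

```python
# Sketch: shadowing DROP TABLE so existing queries keep running.
# Instead of physically dropping the table, a framework could swap in an
# empty view of the same shape, so row-level access in existing query
# plans returns the empty set. Table names here are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER)")
conn.execute("INSERT INTO trades VALUES ('IBM', 100)")

# "Drop" the table by renaming it aside and shadowing it with an empty view.
conn.execute("ALTER TABLE trades RENAME TO trades_shadow")
conn.execute("CREATE VIEW trades AS SELECT * FROM trades_shadow WHERE 0")

# An existing query against 'trades' still executes and yields the empty set.
rows = conn.execute("SELECT * FROM trades").fetchall()
print(rows)  # []

# Fallback: remove the shadow view and restore the real table untouched.
conn.execute("DROP VIEW trades")
conn.execute("ALTER TABLE trades_shadow RENAME TO trades")
restored = conn.execute("SELECT * FROM trades").fetchall()
print(restored)  # [('IBM', 100)]
```

The same shadowing idea supports fallback: nothing is physically destroyed until the migration is made permanent.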

4-23

22. Restructuring (Perestroika) Online application and schema maintenance

Alter Table … Add Column:

existing rows would have nulls inserted for the new column

column-level access should not see the new column in existing constraints, query plans and apps (of course)

wildcard or anonymous column-level access in query formation would pull the new column through automatically in generated reports, temp files or streams

the row resizing hit on existing tables would depend on having slack in the row and block formats, otherwise there could be an immediate performance hit from shadow deblocking/reblocking and possible block splits in the partitions at the time the change was committed (except in the singlet case of vertical partitioning by column, which is a great argument for that technique)
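The Add Column bullet about existing rows reading nulls can be demonstrated directly; sqlite3 stands in for the optimal RDBMS here, and the table name is illustrative:

```python
# Sketch: ALTER TABLE ... ADD COLUMN gives existing rows NULL for the new
# column, which is the behavior the slide describes for existing apps.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO accounts (name) VALUES ('alice'), ('bob')")

conn.execute("ALTER TABLE accounts ADD COLUMN region TEXT")  # no default

# Existing rows now read NULL (None) in the new column ...
rows = conn.execute("SELECT name, region FROM accounts ORDER BY id").fetchall()
print(rows)  # [('alice', None), ('bob', None)]

# ... while an old query naming only the old columns is unaffected.
old = conn.execute("SELECT name FROM accounts ORDER BY id").fetchall()
print(old)  # [('alice',), ('bob',)]
```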

4-24

22. Restructuring (Perestroika) Online application and schema maintenance

Alter Table … Drop Column:

column-level access in existing query plans and apps would receive nulls

column-level access in existing constraints and foreign key references could still work if strict relational matching rules were relaxed (allowing null matches by duck-typing) and if the corresponding columns were dropped with version synchronization; otherwise, dropping/altering the constraints and foreign key references in version synchronization would be required

if the dropped column was named explicitly in an existing query plan select, it could still execute and receive either too few or too many rows, depending on the where clause

4-25

22. Restructuring (Perestroika) Online application and schema maintenance

Alter Table … Drop Column (continued):

if the dropped column was named explicitly in an existing query plan project, it could still execute and receive nulls in that projected column

if the dropped column was named explicitly in an existing query plan equijoin, it could still execute if strict relational matching rules were relaxed (allowing null matches by duck-typing), and if the corresponding fields in the joined tables were dropped together

row and block format slack would be increased if the physical deletion and shadow deblocking/reblocking occurred at the time the change was committed, or you could just leave the hole in the row
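The relaxed matching rule above can be sketched in a few lines. `relaxed_equal` and the row dictionaries are hypothetical names for illustration; strict SQL semantics treat NULL = NULL as unknown, while the relaxed rule duck-matches the nulls left behind by a version-synchronized drop:

```python
# Sketch: 'relaxed relational matching' for a dropped column. Under strict
# rules an equijoin on a dropped column produces no NULL matches; under the
# relaxed rules described above, a NULL read from the dropped side
# duck-matches a NULL on the other side when both columns were dropped
# with version synchronization.

def relaxed_equal(a, b):
    # Strict SQL: NULL = NULL is unknown; relaxed rule: NULL matches NULL.
    if a is None and b is None:
        return True
    return a == b

left = [{"k": None}, {"k": 1}]   # column dropped -> reads as NULL
right = [{"k": None}, {"k": 2}]  # dropped in version synchronization

joined = [(l, r) for l in left for r in right if relaxed_equal(l["k"], r["k"])]
print(len(joined))  # 1 pair: the NULL-NULL duck match
```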

4-26

22. Restructuring (Perestroika) Online application and schema maintenance

Alter Table … Alter Column (including changes through the typing system):

conversion of the column data occurs with the alter column; column-level access in existing query plans and apps would receive the converted value unless duck typing could not produce a decent fit, in which case a null would result

column-level access in existing constraints, and foreign key references would require their changes to be committed together

if the altered column was named explicitly in an existing query plan select, it could still execute and could possibly receive either too few or too many rows, depending on the where clause

4-27

22. Restructuring (Perestroika) Online application and schema maintenance

Alter Table … Alter Column (continued):

if the altered column was named explicitly in an existing query plan project, it could still execute and receive the converted value in that projected column

if the altered column was named explicitly in an existing query plan equijoin, it could still execute if strict relational matching rules were relaxed (matching the converted value with duck-typing) and the columns being joined had their changes committed together

the row resizing hit on existing tables would depend on having slack in the row and block formats, otherwise there could be an immediate performance hit from shadow deblocking/reblocking and possible block splits in the partitions at the time the alteration was committed (could be exacerbated in the singlet case of vertical partitioning by column)
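The duck-typed conversion described above, converted value on a decent fit and null otherwise, can be sketched as follows; `convert_or_null` is a hypothetical helper, not part of any real RDBMS API:

```python
# Sketch of duck-typed column conversion: existing readers receive the
# converted value when conversion produces a decent fit, otherwise NULL.

def convert_or_null(value, target):
    """Convert value to the target type; return None (SQL NULL) on a bad fit."""
    try:
        converted = target(value)
    except (TypeError, ValueError):
        return None
    return converted

# Column altered from TEXT to INTEGER:
print(convert_or_null("42", int))   # 42   -> readers see the converted value
print(convert_or_null("n/a", int))  # None -> no decent fit, a null results
```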

4-28

22. Restructuring (Perestroika) Online application and schema maintenance

Alter Table … Rename Column: name changes will be handled with coexisting shadow aliases under the covers

Create and Drop Indexes will affect performance in both directions, but should not impact correct execution in fallback and migration: query plans should be flexible and should not initially be upset by the sudden presence or absence of an index at the beginning of a run; a missing index can be replaced by a hash sort, and new indexes should be taken advantage of in the course of standard execution (a query plan is a parallelized set of sequences of scans, sorts, selects, projects, joins, etc., forming a compiled query against a physical database)
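The claim that index DDL affects performance but not correctness can be checked concretely; sqlite3 again stands in for the optimal RDBMS, and the table and index names are illustrative:

```python
# Sketch: creating or dropping an index changes the query plan and its cost,
# but not the query's results -- which is why fallback/migration can treat
# index DDL as performance-only for correctness purposes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (sym TEXT, px REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("IBM", 100.0), ("DELL", 20.0), ("IBM", 101.0)])

q = "SELECT COUNT(*) FROM t WHERE sym = 'IBM'"
before = conn.execute(q).fetchone()

conn.execute("CREATE INDEX t_sym ON t (sym)")  # may change the query plan
after = conn.execute(q).fetchone()

conn.execute("DROP INDEX t_sym")
dropped = conn.execute(q).fetchone()

print(before == after == dropped == (2,))  # True: identical results
```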

4-29

22. Restructuring (Perestroika) Online application and schema maintenance

In the common SQL paradigm, where an SQL cluster contains databases containing schemas that contain tables, etc., potentially in more than one tablespace: Create and Drop Database and Create and Drop Schema can be handled by shadow versions under the covers, for fallback and migration

a baseline release of the optimal RDBMS would include basic versioning support for online application and schema maintenance:

a synchronized set of committed schema (DDL) changes would make up a new version

a dialect would contain a set of compatible versions

Fallback and Migration would then include a transition from one dialect to another: if it did not involve a dialect change (one or more versions of DDL changes), then the migration would be application-only and would not require online schema maintenance
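The versioning vocabulary above can be sketched as a small data model. All class and function names are hypothetical, for illustration only:

```python
# Sketch: a version is a synchronized set of committed DDL changes, a
# dialect is a set of compatible versions, and a migration that stays
# within one dialect is application-only (no online schema maintenance).
from dataclasses import dataclass

@dataclass(frozen=True)
class Version:
    number: int
    ddl_changes: tuple  # e.g. ("ALTER TABLE accounts ADD COLUMN region TEXT",)

@dataclass
class Dialect:
    versions: frozenset  # the set of compatible versions

def needs_schema_maintenance(src: Dialect, dst: Dialect) -> bool:
    """Application-only migrations (same dialect) need no online DDL work."""
    return src.versions != dst.versions

v1 = Version(1, ())
v2 = Version(2, ("ALTER TABLE accounts ADD COLUMN region TEXT",))
d_old = Dialect(frozenset({v1}))
d_new = Dialect(frozenset({v1, v2}))

print(needs_schema_maintenance(d_old, d_old))  # False: application-only
print(needs_schema_maintenance(d_old, d_new))  # True: online DDL required
```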

4-30

22. Restructuring (Perestroika) Online application and schema maintenance

Online application and schema maintenance in an optimal RDBMS would be accomplished by an application and RDBMS framework that is multi-threaded, and which allows for two states of the system, the fallback and the migration state, and which will intervene in the interfaces involved, trapping information and shadowing elements of the database in such a way as to mediate transitions back and forth between the fallback and migration states of the transaction system (RDBMS, applications and other subsystems supported by the framework)

4-31

22. Restructuring (Perestroika) Online application and schema maintenance

The online application and schema maintenance framework (OASMF) would need to be based upon a truly multithreaded framework, such as Microsoft’s .Net or Spring on Java threads. Spring should be an excellent choice, using the Groovy language on top of the JVM to provide the dynamic language support for runtime typing and type conversion, with Groovy’s ‘closure delegation’ feature making schema maintenance metaprogrammable as a DSL. Groovy’s ability to interface to all of the Java Class Library, with a serious reduction of code complexity and size (refactoring), would make it possible to add the bells and whistles necessary to make the OASMF work in a magical way

4-32

22. Restructuring (Perestroika) Online application and schema maintenance

Here’s the theory behind the baseline, fallback and migration online application and schema maintenance framework (OASMF) triad:

1 The baseline DDL versioning release of the optimal RDBMS contains the extra ability to deal with versions and dialects of DDL changes, but with no change in function: there should be no side effects whatever to this

If the baseline functionality has a problem, it can be switched back out with impunity; also, if baseline functionality is already present (if you had already done an online application and schema maintenance operation in the past), then no installation would be necessary … overall, installing and running the baseline should have zero risk (otherwise it is broken)

4-33

22. Restructuring (Perestroika) Online application and schema maintenance

1 Baseline state (continued): Baseline functionality inserts itself into all the pertinent interfaces in the transaction system, thus preparing for the ability to shadow any RDBMS component that needs to migrate and fall back, without acquiring any state

2 Once the OASMF framework is installed and enabled, the transaction system is automatically in the fallback state: In the fallback state the framework first acquires state by scanning the transaction system for applications, subsystems, APIs, databases, schemas, tables, etc., and then prepares itself generally to be ‘migration-capable’

4-34

22. Restructuring (Perestroika) Online application and schema maintenance

2 Fallback state (continued): The fallback state can functionally deal with the semantics of schema changes, when they are later created in the migration state, but changes nothing that could break the user’s applications, and makes no DDL (schema) changes on its own

If something goes immediately wrong with the fallback framework, you can uninstall it online and you will be back to the baseline RDBMS code (with no active OASMF framework) with complete safety, because no persistent shadow state has been created and no schema (DDL) changes have been made up to this point: therefore the fallback framework, in and of itself, has small risk

4-35

22. Restructuring (Perestroika) Online application and schema maintenance

2 Fallback state (continued): Once you decide to migrate, you cannot go back to the baseline RDBMS code, because new persistent shadow state will be present that will not function correctly under the baseline transaction system (without an active OASMF framework), so you have to make sure that the fallback framework execution is satisfactory for your applications on all counts before proceeding to migration: once you migrate, you will have to live with the active framework until the ability to fall back can be dispensed with

3 Once the transaction system is capable of making the transition from the fallback state to the migration state, then the switch can be thrown to migrate, which allows the schema (DDL) changes to the database to be made while the applications are online:

4-36

22. Restructuring (Perestroika) Online application and schema maintenance

3 Migration state (continued): The OASMF framework deals with the appropriate APIs to pause and sequence the DDL changes until the affected applications are in a safe state, and then pauses the affected applications during the brief critical phases of the DDL changes: the framework functions as a traffic cop, or as rail switching in a train system

The appropriate corresponding application and RDBMS APIs will be forked as the DDL change that modifies their function is deployed: cursors will still function (since database references are logical in an optimal RDBMS, instead of using physical RIDs), but cursors on a table that is undergoing schema change may get blown away

4-37

22. Restructuring (Perestroika) Online application and schema maintenance

3 Migration state (continued): Some queries and application cursors that are in progress will be blown away by a given DDL change (changes to keys, for instance): these would have shown up during the design and testing phase of the migration against a snapshot of the database, and the reports or application cursors would have to be restarted, and transactions aborted and begun anew

When problems due to a DDL change show up in testing the migration, a fork of the application code will be made and that fork will be switched in when the DDL change is sequenced in during the migration of the live system and switched back out if the live system goes to fallback state

4-38

22. Restructuring (Perestroika) Online application and schema maintenance

3 Migration state (continued): Because of this ability to fork and shadow everything, if something is unsatisfactory in regards to the migration functionality, you can pop back to the fallback state with safety, because it will put the appropriate DDL versions (dialect) and application forks back in place: so the migration state has the typical risk of a serious update (risk that scales up according to the size of the change), which is mitigated by the capability to fall back to the previous “safe” functionality that your transaction system is used to

4-39

22. Restructuring (Perestroika) Online application and schema maintenance

3 Migration state (continued): You can transit back and forth between the fallback state and the migration state (and updates to it) as many times as necessary, until you get to a migration state transaction system that your users can stand to live with: then you can disable fallback and make the migration permanent, although the framework may have to continue to execute in some cases even without the fallback capability
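The baseline-fallback-migration discipline in slides 4-32 through 4-39 amounts to a small state machine, sketched below under the stated rules (free transit between fallback and migration; no return to baseline once persistent shadow state exists). All names are hypothetical:

```python
# Sketch of the OASMF state discipline: baseline -> fallback -> migration,
# with free transit between fallback and migration, but no return to
# baseline once migration has created persistent shadow state.

ALLOWED = {
    ("baseline", "fallback"),   # install and enable the framework
    ("fallback", "migration"),  # throw the switch to migrate
    ("migration", "fallback"),  # pop back if migration is unsatisfactory
    ("fallback", "baseline"),   # uninstall -- only before any shadow state
}

class OASMF:
    def __init__(self):
        self.state = "baseline"
        self.shadow_state = False  # persistent shadow state created yet?

    def transition(self, target):
        if (self.state, target) not in ALLOWED:
            raise ValueError(f"{self.state} -> {target} not allowed")
        if target == "baseline" and self.shadow_state:
            raise ValueError("cannot return to baseline: shadow state exists")
        if target == "migration":
            self.shadow_state = True  # migration creates persistent state
        self.state = target

f = OASMF()
f.transition("fallback")
f.transition("migration")
f.transition("fallback")     # fallback from migration is always safe
try:
    f.transition("baseline")  # but baseline is gone once you have migrated
except ValueError as e:
    print(e)
```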

4-40

23. Reliable Software Telemetry push streaming needs a many-to-many architecture

For decent operability by a set of configurable instrument panels, and for a host of other excellent reasons, having decent software (and hardware) telemetry is a necessity … if it’s good, it will be widely used, and so the performance of the clustered instrumentation facility must be extremely scalable: polling is anathema (which means only used when desperate); mostly, transmissions need to be event-based, implying distributed registration for monitoring instruments and push streaming them back (similar to Reverse AJAX push streaming from servers, with no webpage refresh)

4-41

The clustered instrumentation facility (continued):

Not all instrument changes should be propagated; there needs to be hysteresis, auto-ranging and triggers for propagating significant changes

A clustered instrumentation facility needs to be in a shared library with access to global data, at least, but optimally in the interrupt layer with locked-down globals

Clustered instrumentation needs boxcarring to minimize overhead, so system timers and buffers are involved

– These features are also needed for clustered transaction management and database flushing

23. Reliable Software Telemetry push streaming needs a many-to-many architecture
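The hysteresis trigger and boxcarring bullets above can be sketched together; `Instrument` and its parameters are hypothetical names, and the thresholds are illustrative:

```python
# Sketch: an event-based instrument with a hysteresis trigger. Values are
# propagated only when they move by at least a band since the last
# propagated value, and propagated values are boxcarred into batches so
# one transmission carries many readings.

class Instrument:
    def __init__(self, band, boxcar_size, push):
        self.band = band              # hysteresis band: minimum significant move
        self.boxcar = []              # pending values awaiting one transmission
        self.boxcar_size = boxcar_size
        self.push = push              # downstream push-stream callback
        self.last = None

    def record(self, value):
        if self.last is not None and abs(value - self.last) < self.band:
            return                    # insignificant change: no propagation
        self.last = value
        self.boxcar.append(value)
        if len(self.boxcar) >= self.boxcar_size:
            self.push(self.boxcar)    # one boxcarred transmission
            self.boxcar = []

sent = []
cpu = Instrument(band=5.0, boxcar_size=2, push=sent.append)
for v in [50.0, 51.0, 60.0, 61.0, 70.0]:  # 51.0 and 61.0 are filtered out
    cpu.record(v)
print(sent)  # [[50.0, 60.0]] -- 70.0 is still waiting in the boxcar
```

A real facility would also flush partial boxcars on a system timer, as the slide notes.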

4-42

The clustered instrumentation facility (continued):

To minimize addressing overhead (routing tables and package addresses), instrumentation needs to be handled by relaying (the source system’s telemetry service hands the value off to the target system’s telemetry service, which hands it off to the telemetry server, which hands it off to the telemetry user application)

See the patent for a generalized fault-tolerant instrumentation facility, with some useful diagrams:

Enhanced instrumentation software in fault tolerant systems <http://www.google.com/patents?id=cRIJAAAAEBAJ&dq=6,360,338>

23. Reliable Software Telemetry push streaming needs a many-to-many architecture

4-43

Code can be instrumented asynchronously from the outside (as in Java annotations), or snatched from the application stack by an offset from a privileged access (people have actually done this, e.g., Computer Associates). But for coherent telemetry, the best instrumentation is that which is synchronous to the main processing loop of the application, either (1) sampled periodically, or (2) deposited during a state change in the globals for external access, or (3) externalized by a dispatching signal on demand if the size is unpredictable, the form is irregular or a graph, or the content is parametric

23. Reliable Software Telemetry push streaming needs a many-to-many architecture
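Option (2) above, depositing values in globals during a state change for external access, can be sketched as follows; the lock stands in for the locked-down globals the slides describe, and all names are hypothetical:

```python
# Sketch: instrumentation synchronous to the main processing loop. The
# application deposits counters into a shared globals area at each state
# change, and a sampler reads a consistent snapshot from outside.
import threading

TELEMETRY = {"requests": 0, "errors": 0}
lock = threading.Lock()

def main_loop_step(ok):
    # Deposit during the state change, inside the main loop (option 2).
    with lock:
        TELEMETRY["requests"] += 1
        if not ok:
            TELEMETRY["errors"] += 1

def sample():
    # Periodic external sampling (option 1): a consistent snapshot.
    with lock:
        return dict(TELEMETRY)

for outcome in [True, True, False, True]:
    main_loop_step(outcome)
print(sample())  # {'requests': 4, 'errors': 1}
```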

4-44

24. Publish and Subscribe

TBD

4-45

25. Ubiquitous Work Flow

TBD

4-46

26. Virtual Operating System

TBD

4-47

27. Scaling Inwards Extreme Single Row Performance for Exchanges

Stock exchanges do transactions based on a stock symbol, like ‘IBM’ on the NYSE: a database can store the trade to the log by updating a single row in an SQL table with a 3 character primary key of ‘IBM’, or 4 characters of ‘DELL’ on NASDAQ

Since the one record table never shrinks nor grows, there should be tiny deblocking/reblocking overhead in the database: the row will stay in cache and get hammered through to the log as an update record at transaction commit time

With a transaction boxcarring size of 50, and 1000 updates a second to this row, this enables a single stock symbol update speed in excess of 1000 tx/sec to be obtained - going beyond that requires more advanced architecture
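The arithmetic behind the boxcarring claim: 1000 updates per second grouped 50 to a commit flush means only 20 log writes per second, so the log device rather than the hot row becomes the limit. The flush-time figure below is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope for boxcarred commits on a single hot row.

boxcar_size = 50        # transactions committed per group log flush
updates_per_sec = 1000  # single-symbol update rate from the slide

flushes_per_sec = updates_per_sec / boxcar_size
print(flushes_per_sec)  # 20.0 log flushes/sec to sustain 1000 tx/sec

# Assuming (hypothetically) a 5 ms log flush, the log absorbs 200
# flushes/sec, so the log-bound ceiling would be boxcar_size * 200.
assumed_flush_ms = 5
max_flushes = 1000 / assumed_flush_ms
print(boxcar_size * max_flushes)  # 10000.0 tx/sec upper bound at the log
```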

4-48

27. Scaling Inwards Extreme Single Row Performance for Exchanges

Using SQL rowsets, compound statements (compiled SQL only) and RM localized transactions: this would enable a rate in excess of 5000 tx/sec on a single stock symbol

Inserting the functionality for SQL rowsets, compound statements (compiled SQL only) and RM localized transactions directly into the log manager code, placing the table on the log tail (no database disk, storing the row values with the log configuration) and doing all of that on one pinned thread to minimize dispatches and cache sloshing between cores: this would enable a rate in excess of 10000 tx/sec on a single stock symbol

4-49

27. Scaling Inwards Extreme Single Row Performance for Exchanges

Why use a log? Why not just do it in-memory? The reason is that for regulators (SEC, etc.) you are going to have to write the transaction to stable store anyway, and if you can make the log record format satisfy their reporting requirements (and you can) then this is all you need

Why do you need such a hellacious rate of trading on a single stock symbol? Because any stock symbol can become the focus of instantaneous interest on the upside, or the downside: in an open multi-lateral trading system where buyers and sellers get matched (which is the only way to set a fair price), causing unnecessary waits and queuing artificially changes the price in a way which is only fair to those ahead in the queue, or the reverse … an unlimited single stock symbol trading velocity guarantees greater fairness over any velocity which allows queues to form

4-50

28. Ad Hoc Aggregation Institutional Query Transparency for Regulation

TBD

4-51

29. Reliable Multi-Lateral Trading Regulated Fairness & Performance, Guaranteed Result

TBD

4-52

30. Semantic Data Verity of Data Processing

TBD

4-53

31. Integration and Test Platform Real-Time Transaction Database

TBD

4-54

32. Integrated Logistics

TBD

4-55

33. Industry Consortium or Shovel-Ready Project

VISA was begun as an industry consortium by banking and financial interests; Hoover Dam, the Golden Gate Bridge and the Interstate Highway System were shovel-ready projects: building an optimal RDBMS is much smaller than these on the one hand, and will benefit mankind much more widely on the other hand

Everything that has been mentioned could be done in between 50 and 100 man-years of work (rough estimate), in fairly short order (it’s been done before)

Then you could have a Chinese mainframe, a Red Cross mainframe, a Chicago Schools mainframe… everyone could be their own IBM