Computer Science Report


QUESTIONNAIRE


1. DO YOU HAVE A COMPUTERIZED SYSTEM?

IF NOT, WHAT WOULD YOU LIKE COMPUTERIZED?


2. WHAT TYPE OF COMPUTER SYSTEM DO YOU HAVE?


3. ARE THERE ANY SETBACKS IN USING THIS SYSTEM?


4. IS THIS SYSTEM DOING ALL THAT IS REQUIRED TO BE

DONE?


5. WHAT ARE THE ADVANTAGES?


6. WHAT ARE THE DISADVANTAGES?


7. ARE THERE ANY IMPROVEMENTS YOU WOULD LIKE IN

YOUR COMPUTER SYSTEM?


8. IF SO, WHAT IMPROVEMENTS WOULD YOU RECOMMEND?


ANSWERS FROM QUESTIONNAIRE

THE CLIFTON DUPIGNY COMMUNITY COLLEGE


1. NO, WE DO NOT HAVE A COMPUTERIZED SYSTEM. WE WOULD LIKE

ALL ASPECTS OF THE SCHOOL'S RECORDS COMPUTERIZED.


2. WE DO MOST OF OUR WORK IN WORDPERFECT; WE HAVE NO

OTHER COMPUTER PROGRAMMES.


3. YES, THERE ARE MANY SETBACKS. THE PERSON USING THE COMPUTER

HAS TO FIGURE OUT EVERYTHING, WHICH LEADS TO A VERY HEAVY

WORKLOAD AND LOSS OF TIME.


4. NO, THIS SYSTEM IS NOT DOING WHAT IS REQUIRED.



5. THE ONLY ADVANTAGE IS THAT WE CAN STORE OUR WORK ON THE

COMPUTER.


6. THE DISADVANTAGES OF OUR SYSTEM ARE THAT IT IS SLOW,

TIME-CONSUMING AND INEFFICIENT, AMONG OTHER THINGS.


7. YES, WE WOULD LIKE A LOT OF IMPROVEMENTS IN THE SYSTEM.


8. THE IMPROVEMENTS I WOULD RECOMMEND ARE A COMPUTER

PROGRAM TO REGISTER STUDENTS, TO CHECK CLASS SCHEDULES,

TO STORE STUDENT FILES, TO CHECK ON STUDENTS' MARKS, TO

ARRANGE TIMETABLES, TEACHERS' SCHEDULES AND CLASS USAGE,

TO DETERMINE THE PROMOTION OF STUDENTS, AND TO KEEP THE

RECORD OF THE SCHOOL'S FINANCES.

A Brief Look at Robotics

Two years ago, the Chrysler Corporation completely gutted its Windsor, Ontario, car assembly plant and within six weeks had installed an entirely new factory inside the building.  It was a marvel of engineering.  When it came time to go to work, a whole new work force marched onto the assembly line.  There on opening day was a crew of 150 industrial robots.  Industrial robots don't look anything like the androids from sci-fi books and movies.  They don't act like the evil Daleks or a fusspot C-3PO.  If anything, the industrial robots toiling on the Chrysler line resemble elegant swans or baby brontosauruses with their fat, squat bodies, long arched necks and small heads.  

An industrial robot is essentially a long manipulator arm that holds tools such as welding guns or motorized screwdrivers or grippers for picking up objects.  The robots working at Chrysler and in numerous other modern factories are extremely adept at performing highly specialized tasks - one robot may spray-paint car parts while another does spot welds and another pours radioactive chemicals.  Robots are ideal workers: they never get bored and they work around the clock.  What's even more important, they're flexible.  By altering its programming, you can instruct a robot to take on different tasks.  This is largely what sets robots apart from other machines; try as you might, you can't make your washing machine do the dishes.  Although some critics complain that robots are stealing much-needed jobs away from people, so far they've been given only the dreariest, dirtiest, most soul-destroying work.  The word robot is Slavic in origin and is related to the words for work and worker.  Robots first appeared in a play, Rossum's Universal Robots, written in 1920 by the Czech playwright Karel Capek.  The play tells of an engineer who designs man-like machines that have no human weaknesses and become immensely popular.  However, when the robots are used for war they rebel against their human masters.  Though industrial robots do dull, dehumanizing work, they are nevertheless a delight to watch as they crane their long necks, swivel their heads and poke about the area where they work.  They satisfy "that vague longing to see the human body reflected in a machine, to see a living function translated into mechanical parts", as one writer has said.  Just as much fun are the numerous "personal" robots now on the market, the most popular of which is HERO, manufactured by Heathkit.  Looking like a plastic step-stool on wheels, HERO can lift objects with its one clawed arm and utter computer-synthesized speech.  There's Hubot, too, which comes with a television screen face, flashing lights and a computer keyboard that pulls out from its stomach.  Hubot moves at a pace of 30 cm per second and can function as a burglar alarm and a wake-up service.  Several years ago, the swank department store Neiman-Marcus sold a robot pet named Wires.  When you boil all the feathers out of the hype, HERO, Hubot, Wires et al. are really just super toys. 

You may dream of living like a slothful sultan surrounded by a coterie of metal maids, but any further automation in your home will instead include things like lights that switch on automatically when the natural light dims or carpets with permanent suction systems built into them.  One of the earliest attempts at a robot design was a machine nicknamed Shakey by its inventor because it was so wobbly on its feet.  Today, poor Shakey is a rusting pile of metal sitting in the corner of a California laboratory.  Robot engineers have since realized that the greater challenge is not in putting together the nuts and bolts, but rather in devising the lists of instructions - the "software" - that tell robots what to do.

Software has indeed become increasingly sophisticated year by year.  The Canadian weather service now employs a program called METEO which translates weather reports from English to French.  There are computer programs that diagnose medical ailments and locate valuable ore deposits.  Still other computer programs play and win at chess, checkers and go.

As a result, robots are undoubtedly getting "smarter".  The Diffracto company in Windsor is one of the world's leading designers and makers of machine vision.  A robot outfitted with Diffracto "eyes" can find a part, distinguish it from another part and even examine it for flaws.  Diffracto is now working on a tomato sorter which examines colour, looking for non-red - i.e. unripe - tomatoes as they roll past its TV camera eye.  When an unripe tomato is spotted, a computer directs a robot arm to pick out the pale fruit.  Another Diffracto system helps the space shuttle's Canadarm pick up satellites from space.  This sensor looks for reflections on a satellite's gleaming surface and can determine the position and speed of the satellite as it whirls through the sky.  It tells the astronaut when the satellite is in the right position to be snatched up by the space arm.

The biggest challenge in robotics today is making software that can help robots find their way around a complex and chaotic world.  Seemingly sophisticated tasks such as those robots do in the factories can often be relatively easy to program, while the ordinary, everyday things people do - walking, reading a letter, planning a trip to the grocery store - turn out to be incredibly difficult.  The day has still to come when a computer program can do anything more than a highly specialized and very orderly task.  The trouble with having a robot in the house, for example, is that life there is as unpredictable as it is everywhere else outside the assembly line.  In a house, chairs get moved around, there is invariably some clutter on the floor, and kids and pets are always running around.  Robots work efficiently on the assembly line, where there is no variation, but they are not good at improvisation.  Robots are disco, not jazz.  The irony in having a robot housekeeper is that you would have to keep your house perfectly tidy, with every item in the same place all the time, so that your metal maid could get around.

Many of the computer scientists who are attempting to make robots brighter are said to be working in the field of Artificial Intelligence, or AI.  These researchers face a huge dilemma because there is no real consensus as to what intelligence is.  Many in AI hold the view that the human mind works according to a set of formal rules.  They believe that the mind is a clockwork mechanism and that human judgement is simply calculation.  Once these formal rules of thought are discovered, they will simply be applied to machines.  On the other hand, there are those critics of AI who contend that thought is intuition, insight, inspiration.  Human consciousness is a stream in which ideas bubble up from the bottom or jump into the air like fish.  This debate over intelligence and mind is, of course, one that has gone on for thousands of years.  Perhaps the outcome of the "robolution" will be to make us that much wiser.

A Brief History of Library Automation: 1930-1996


An automated library is one where a computer system is used to
manage one or several of the library's key functions such as
acquisitions, serials control, cataloging, circulation and the public
access catalog. When exploring the history of library automation,  it
is possible to return to past centuries when visionaries well before
the computer age created devices to assist with their book lending
systems. Even as far back as 1588, the invention of the French "Book
Wheel" allowed scholars to rotate between books by stepping on a pedal
that turned a book table. Another interesting example was the "Book
Indicator", developed by Albert Cotgreave in 1863. It housed miniature
books to represent books in the library's collection. The miniature
books were part of a design that made it possible to determine if a
book was in, out or overdue. These and many more examples of early
ingenuity in library systems exist; however, this paper will focus on
the more recent computer automation beginning in the early twentieth
century.

The Beginnings of Library Automation: 1930-1960
      It could be said that library automation development began in the
1930's when punch card equipment was implemented for use in library
circulation and acquisitions. During the 30's and early 40's, progress
on computer systems was slow, which is not surprising given the
Depression and World War II. In 1945, Vannevar Bush envisioned an
automated system that would store information, including books,
personal records and articles. Bush (1945) wrote about a hypothetical
"memex" system which he described as a mechanical library that would
allow a user to view stored information from several different access
points and look at several items simultaneously. His ideas are well
known as the basis for hypertext and hypermedia. Librarians then moved
beyond a vision or an idea for the use of computers; given the
technology, they were able to make great advances in the use of
computers for library systems. This led to an explosion of library
automation in the 60's and 70's.

Library Automation Officially is Underway: 1960-1980
The advancement of technology led to increases in the use of
computers in libraries. In 1961, a significant invention reached the
market: the integrated circuit, developed independently by Robert
Noyce of Fairchild Semiconductor and Jack Kilby of Texas Instruments.
All the components of an
electronic circuit were placed onto a single "chip" of silicon. This
invention of the integrated circuit and newly developed disk and tape
storage devices gave computers the speed, storage and ability needed
for on-line interactive processing and telecommunications. 
The new potential for computer use guided one librarian to develop a
new indexing technique. In 1961, H.P. Luhn used a computer to produce
the "keyword in context" or KWIC index for articles appearing in
Chemical Abstracts. Although keyword indexing was not new, it was
found to be very suitable for the computer, as it was inexpensive and
it presented multiple access points. Luhn's keyword indexing showed
librarians that controlled-language index terms could be put on the
computer.
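The mechanics of a KWIC index are simple enough to sketch in a few
lines. The following is only an illustration of the idea (in Python,
with an invented sample title and stop-word list), not a
reconstruction of Luhn's actual program:

    # Sketch of a "keyword in context" index: every significant word of
    # a title becomes an access point, shown with the words around it.
    STOP_WORDS = {"a", "an", "and", "for", "in", "of", "on", "the"}

    def kwic_index(titles):
        """Return (keyword, rotated title) pairs, sorted by keyword."""
        entries = []
        for title in titles:
            words = title.split()
            for i, word in enumerate(words):
                if word.lower() in STOP_WORDS:
                    continue  # only significant words become access points
                # Rotate the title so the keyword leads its context.
                rotated = " ".join(words[i:] + ["/"] + words[:i])
                entries.append((word.lower(), rotated))
        return sorted(entries)

    for keyword, context in kwic_index(["Keyword Indexing of Chemical Abstracts"]):
        print(f"{keyword:10}  {context}")

Because every significant word becomes an entry, a single title yields
several access points, which is exactly what made the technique cheap
and attractive for machine production.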
      By the mid-60's, computers were being used for the production of
machine-readable catalog records by the Library of Congress. Between
1965 and 1968, LOC began the MARC I project, followed quickly by MARC
II. MARC was designed as a way of "tagging" bibliographic records using
3-digit numbers to identify fields. For example, a tag might indicate
"ISBN," while another tag indicates "publication date," and yet
another indicates "Library of Congress subject headings" and so on. In
1974, the MARC II format became the basis of a standard incorporated
by NISO (National Information Standards Organization). This was a
significant development because the standards created meant that a
bibliographic record could be read and transferred by the computer
between different library systems.
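The tagging idea can be pictured as a simple mapping from numeric tags
to field values. The sketch below is only a toy illustration in Python;
the tags shown (020, 245, 260, 650) are real MARC 21 tags, but the
record contents and the helper function are invented:

    # A toy picture of a MARC-style record: 3-digit tags identify fields,
    # so any system that knows the tags can read and transfer the record.
    record = {
        "020": "0-123-45678-9",                        # ISBN
        "245": "A Sample Book on Library Automation",  # title statement
        "260": "London : Meckler, 1991",               # publication information
        "650": "Libraries -- Automation",              # LC subject heading
    }

    def field(record, tag):
        """Look up a field by its numeric tag, as a MARC reader would."""
        return record.get(tag, "")

    print(field(record, "020"))  # prints the ISBN: 0-123-45678-9

Since two systems agree on what tag "020" means, a record can move
between them with no renegotiation of format, which is the point of the
standard.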
      ARPANET, a network established by the Defense Advanced Research
Projects Agency in 1969, brought into existence the use of e-mail,
telnet and ftp. By 1980, a sub-net of ARPANET made MELVYL, the
University of California's on-line public access catalog, available on
a national level. ARPANET would become the prototype for other
networks such as CSNET, BITNET, and EDUCOM. These networks have almost
disappeared with the evolution of ARPANET to NSFNET, which has become
the present-day Internet.
      During the 1970's the inventions of the integrated computer chip
and storage devices caused the use of minicomputers and microcomputers
to grow substantially. The use of commercial systems for searching
reference databases (such as DIALOG) began. BALLOTS (Bibliographical
Automation of Large Library Operations), in the late 1970's, was one
of the first such systems and later became the foundation for RLIN
(the Research Libraries Information Network). BALLOTS was designed to
integrate
closely with the technical processing functions of the library and
contained four main files: (1) MARC records from LOC; (2) an in-process
file containing information on items in the processing stage; (3) a
catalog data file containing an on-line record for each item; and (4)
a reference file. Further, it contained a wide search retrieval
capability with the ability to search on truncated words, keywords,
and LC subject headings, for example. 
OCLC, the On-line Computer Library Center, began in 1967, chartered in
the state of Ohio. This significant project facilitated technical
processing in library systems when it started its first cooperative
cataloging venture in 1970. It went on-line in 1971. Since that time
it has grown considerably, providing research and utilities designed
to give users the ability to access bibliographic records and
scientific and literary information, a service which continues to the
present.
      In order to have automation, there must first be a computer. The
development of the computer progressed substantially from 1946 to
1961, moving quickly through a succession of vacuum tubes, transistors
and finally to silicon chips. From 1946 to 1947 two significant
computers were built. The ENIAC I (Electronic Numerical Integrator and
Calculator) computer was developed by John Mauchly and J. Presper
Eckert at the University of Pennsylvania. It contained over 18,000
vacuum tubes, weighed thirty tons and was housed in two stories of a
building.  It was intended for use during World War II but was not
completed in time. Instead, it was used to assist the development of
the hydrogen bomb. Another computer, EDVAC, was designed to store two
programs at once and switch between the sets of instructions. A major
breakthrough occurred in 1947 when Bell Laboratories invented the
transistor, which would eventually replace the vacuum tube.
Transistors decreased the size of the computer and at the same time
increased its speed and capacity. The UNIVAC I (Universal Automatic
Computer) became the first commercially produced computer in the
United States and was used at the U.S. Bureau of the Census from 1951
until 1963.
      Software development was also in progress during this time.
Operating systems and programming languages were developed for the
computers being built. Librarians needed text-based computer
languages, different from the first numerical languages invented for
the number-crunching "monster computers", in order to be able to use
computers for their operations. The first appeared at MIT in 1957 with
the development of COMIT, which managed linguistic computations and
natural language and had the ability to search for a particular string
of information.


Library Automation 1980-present
The 70's were the era of the dumb terminal, which was used to gain
access to mainframe on-line databases. The 80's gave birth to a new
revolution. The size of computers decreased; at the same time,
technology provided faster chips, additional RAM and greater storage
capacity. The use of microcomputers during the 1980's expanded
tremendously into the homes, schools, libraries and offices of many
Americans.  The microcomputer of the 80's became a useful tool for
librarians, who put it to use for everything from word processing to
reference, circulation and serials.
On-line Public Access Catalogs began to be used extensively in the
1980's. Libraries started to set up and purchase their own computer
systems as well as connect with other established library networks.
Many of these were not developed by the librarians themselves, but by
vendors who supplied libraries with systems for everything from
cataloging to circulation. One such on-line catalog system is the CARL
(Colorado Alliance of Research Libraries) system.  Various other
software became available to librarians, such as spreadsheets and
databases for help in library administration and information
dissemination.
      The introduction of CD-ROMs in the late 80's changed the way
libraries operate.  CD-ROMs became available containing databases,
software, and information previously only available through print,
making the information more accessible. Connections to "outside"
databases such as OCLC, DIALOG, and RLIN continued; however, in the
early 90's the databases that were previously available on-line became
available on CD-ROM, either in part or in their entirety.  Libraries
could then gain information through a variety of options.
      The nineties are giving rise to yet another era in library
automation. The use of networks for e-mail, ftp, telnet, Internet, and
connections to on-line commercial systems has grown. It is now
possible for users to connect to the libraries from their home or
office.  The World Wide Web, which had its official start in April
1993, is becoming the fastest growing new provider of information. It
is also possible to connect to international library systems and
information through the Internet and with ever-improving
telecommunications.  Expert systems and knowledge systems have become
available in the 90's as both software and hardware capabilities have
improved. The technology used for the processing of information has
grown considerably since the beginnings of the thirty-ton computer.
With the development of more advanced silicon computer chips, enlarged
storage space and faster, increased capacity telecommunication lines,
the ability to quickly process, store, send and retrieve information
is causing the current information delivery services to flourish.



 Bibliography

Bush, V. (1945). As we may think. Atlantic Monthly, 176(1), 101-108.

Duval, B.K. & Main, L. (1992). Automated Library Systems: A Librarian's
Guide and Teaching Manual. London: Meckler.

Nelson, N.M., (Ed.) (1990). Library Technology 1970-1990: Shaping the
Library of the Future. Research Contributions from the 1990 Computers
in Libraries Conference. London: Meckler.

Pitkin, G.M. (Ed.) (1991). The Evolution of Library Automation:
Management Issues and Future Perspectives. London: Meckler.


Brief History of Databases


In the 1960's, the use of mainframe computers became widespread in many companies.  To access vast amounts of stored information, these companies started to use programming languages like COBOL and FORTRAN.  Data accessibility and data sharing soon became an important feature because of the large amount of information required by different departments within certain companies.  With this system, each application owned its own data files.  The problems associated with this type of file processing were uncontrolled redundancy, inconsistent data, inflexibility, poor enforcement of standards, and low programmer productivity.

      In 1964, MIS (Management Information Systems) was introduced.  This would prove to be very influential on future designs of computer systems and the methods they would use in manipulating data.
      In 1966, Philip Kotler gave the first description of how managers could benefit from the powerful capabilities of the electronic computer as a management tool. 
      In 1969, Berson developed a marketing information system for marketing research.  In 1970, the Montgomery urban model was developed, stressing the quantitative aspect of management by highlighting a data bank, a model bank, and a measurement statistics bank.  All of these factors would be influential on future models of storing data in a pool.
According to Martin (1981), a database is a shared collection of interrelated data designed to meet the needs of multiple types of end users.  The data is stored in one location so that it is independent of the programs that use it, keeping in mind data integrity with respect to the approaches to adding new data, modifying data, and retrieving existing data.  A database is shared and perceived differently by multiple users.  This leads us to the arrival of Database Management Systems.

      These systems first appeared around the 1970's as solutions to problems associated with mainframe computers.  Originally, pre-database programs accessed their own data files.  Consequently, similar data had to be stored in each area where that piece of information was relevant.  Simple things like addresses were stored in customer information files, accounts receivable records, and so on.  This created redundancy and inefficiency.  Updating files, like storing files, was also a problem.  When a customer's address changed, all the fields where that customer's address was stored had to be changed.  If a field happened to be missed, then an inconsistency was created.  When requests to develop new ways to manipulate and summarize data arose, it only added to the problem of having files attached to specific applications.  New system design had to be done, including new programs and new data file storage methods.  The close connection between data files and programs sent the costs for storage and maintenance soaring.  This, combined with an inflexible method of specifying the kinds of data that could be extracted, gave rise to the need to design an effective and efficient system.
      Here is where Database Management Systems helped restore order to a system of inefficiency.  Instead of having separate files for each program, one single collection of information was kept: a database.  Now many programs could access one database through a piece of software known as a database manager, with the confidence of knowing that they were accessing up-to-date and consistent information. 
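The contrast with per-application files can be sketched in a few lines.  The example below uses Python's built-in SQLite module purely as a stand-in for an early database manager; the customer table and its data are invented for illustration:

    # The shared-database idea: billing and shipping programs both go
    # through one database manager, so an address is stored once and
    # updated once.  SQLite stands in for the DBMS; the schema is invented.
    import sqlite3

    db = sqlite3.connect(":memory:")  # one shared collection of information
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY,"
               " name TEXT, address TEXT)")
    db.execute("INSERT INTO customer VALUES (1, 'A. Reader', '12 Elm St')")

    # When one application updates the address...
    db.execute("UPDATE customer SET address = '99 Oak Ave' WHERE id = 1")

    # ...every other application sees the current value, because there is
    # no second copy left to fall out of date.
    print(db.execute("SELECT address FROM customer WHERE id = 1").fetchone()[0])

With separate files per application, the same update would have to be repeated in every file that carried the address, and any file that was missed would hold stale data.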



Some early DBMSs consisted of:
 Condor 3
dBaseIII
Knowledgeman
Omnifile
Please
Power-Base
R-Base 4000
Condor 3, dBaseIII, and Omnifile will be examined more closely.

Condor 3
      Condor 3 is a relational database management system that has evolved in the microcomputer environment since 1977.  Condor provides multi-file, menu-driven relational capabilities and a flexible command language.  By using a word processor, made necessary by the absence of a text editor, frequently used commands can be automated. 
      Condor 3 is an application development tool for multiple-file databases.  Although it lacks some capabilities, like procedure repetition, it makes up for this with its ease of use and decent speed. 
      Condor 3 utilizes the advantages of menu-driven design.  Its portability enables it to import and export data files in five different ASCII formats.  Defining file structures is relatively straightforward: by typing the field names and their lengths, the main part of designing the structure is about complete.  Condor uses six data types:

alphabetic
alphanumeric
numeric
decimal numeric
Julian date
dollar
      Once the fields have been designed, data entry is as easy as pressing Enter and inputting the respective values in the appropriate fields, and like the newer databases, Condor too can use the Update, Delete, Insert, and Backspace commands.  Accessing data is done by creating an index.  The index can be used to perform sorts and arithmetic.



dBaseIII
      DBaseIII is a relational DBMS which was partially built on dBaseII.  Like Condor 3, dBaseIII is menu-driven and has its menus built in several levels.  One of the problems discovered was that higher-level commands were not included in all menu levels.  That is, the menus are limited to only basic commands, and anything above that is not supported.     
Many of the basic capabilities are easy to use, but like Condor, dBaseIII has inconsistencies and inefficiencies.  The keys used to move about and select items in specific menus are not always consistent throughout.  If you mark an item to be selected from a list, once it's marked it cannot be unmarked.  The only way to correct this is to start over and enter everything again.  This is time consuming and obviously inefficient.  Although the menus are helpful and guide you through the stages or levels, there is the option to turn off the menus and work at a slightly faster rate.
      DBaseIII's commands are procedural (function oriented) and flexible.  It utilizes many of the common functions, such as the ability to:
select records
select fields
include expressions (such as calculations)
redirect output to the screen or to the printer
store results separately from the application
      Included in dBaseIII is a limited editor which will let you create commands using the editor or a word processor.  Unfortunately, it is still limited to certain commands; for example, it cannot create move or copy commands.  It also has a screen design package which enables you to design how you want your screen to look.  The minimum RAM requirement of 256K for this package really illustrates how old this application is.  The most noticeable problem documented about dBaseIII is its inability to edit command lines.  If, for example, an error was made entering the name and address of a customer, simply backing up and correcting the wrong character is impossible without deleting everything up to the correction and re-entering it again.
      DBaseIII is portable and straightforward to work with.  It allows users to import and export files in two forms: fixed-length fields and delimited fields.  It can also perform dBaseII conversions.  Creating file structures is simple using the menus or the create command.  It has field types that are still being used today by applications such as Microsoft Access, for example, numeric fields and memo fields, which let you enter sentences or pieces of information, like a customer's address, which might vary in length from record to record.  Unlike Condor 3, dBaseIII is able to edit fields without having to start over.  Inserting new fields or deleting old fields can be done quite easily. 
      Data manipulation and query are very accessible through a number of built-in functions.  The list and display commands enable you to see the entire file, selected records, and selected fields.  The browse command allows you to scroll through all the fields, inserting or editing records at the same time.  Calculation functions like sum, average, count, and total allow you to perform arithmetic operations on data in a file.  There are other functions available, like date and time functions, rounding, and formatting.

Omnifile
      Omnifile is a single-file database system.  This database is form-oriented, meaning that it has a master form with alternate forms attached to it.  Therefore, you can work with one file and all of its subsets at the same time.  The idea of alternate forms provides for a greater level of security; for example, if a user needed to update an address field, they would not be able to access any fields which displayed confidential information.  The field in need of updating would display only the necessary or relevant information.
      Menus are once again present and used as a guide.  The use of function keys allows the user to move about screens or forms quite easily.  Menus are also used for transferring information, either for importing or for exporting.  One inflexibility noted was that when copying files, the two files must have the exact same fields in the same order as the master file.  This can be a problem if you want to copy identical fields from different files. 

      Forms design is simple but tedious.  Although it may seem flexible to be able to paint the screen in any manner that you wish, it can be time consuming because no default screen is available.  Like other database management systems, the usual syntax for defining fields applies: the field name followed by the length of the field in braces.  However, editing is a little more difficult.  Changing the form can be done by inserting and deleting, one character at a time.  Omnifile does not support moving fields around, nor inserting blank lines.  This means that if a field were to be added at the beginning of the record, the entire record would have to be re-entered. 
      Records are added and viewed in the format that the user first designed.  Invalid entries are not handled very well.  Entering an illegal value in a certain field results in a beep and no message; the user is left there to try to decide what the error is.  Omnifile does support the ability to insert new records while viewing existing records and to make global or local changes. 
      Querying can be performed by using an index or using a non-indexed search.  If a search for a partial entry is made, like "Rob" instead of "Robinson", a message is displayed stating that an exact match was not found. 
Overall
These are just a few of the database programs that helped start the whole database management system era.  It is apparent that DBMSs today still use some of the fundamentals first implemented by these 'old' systems.  Items like menus, forms, and portability are still key parts of current applications.  Programs have come a long way since then, but they still have the same fundamental principles as their bases.

The Year 2000 Problem (Y2K)

Fiction, Fantasy, and Fact:

"The Mad Scramble for the Elusive Silver Bullet . . . and the Clock Ticks Away."

 
Wayne Anderson
November 7, 1996
     The year 2000 is practically around the corner, promising a new era of greatness and
wonder . . . as long as you don't own a computer or work with one.  The year 2000 is bringing a
Pandora's Box of gifts to the computer world, and the latch is slowly coming undone. 
     The year 2000 bug is not really a "bug" or "virus," but is more a computer industry
mistake.  Many of the PCs, mainframes, and software packages out there are not designed or
programmed to compute a future year ending in double zeros.  This is going to be a costly "fix"
for the industry to absorb.  In fact, Mike Elgan, the editor of Windows Magazine, says ". . .
the problem could cost businesses a total of $600 billion to remedy." (p. 1)
The fallacy that mainframes were the only machines to be affected was short-lived as industry
realized that the 60 to 80 million home and small business users doing math or accounting etc. on
Windows 3.1 or older software are just as susceptible to this "bug."  Can this be repaired in
time?  For some, it is already too late.  A system that is devised to cut the annual federal deficit to
0 by the year 2002 is already in "hot water."  Data will become erroneous as the numbers "just
don't add up" anymore.  Some PC owners can upgrade their computer's BIOS (Basic Input/Output
System) and upgrade the OS (operating system) to Windows 95; this will set them up for another
99 years.  Older software, however, may very well have to be replaced or at the very least
upgraded.


     The year 2000 has become a two-fold problem.  One is the inability of the computer to
adapt to the MM/DD/YY issue, while the second is our seeming reluctance to address the
impact it will have.  Most IS (information system) people are either unconcerned or
unprepared. 
     Let me give you a "short take" on the problem we all are facing.  To save storage space
- and perhaps reduce the number of keystrokes necessary to enter the year to date - most
IS groups have allocated two digits to represent the year.  For example, "1996" is stored as "96"
in data files and "2000" will be stored as "00."  These two-digit dates will be on millions of files
used as input for millions of applications.  This two-digit date affects data manipulation,
primarily subtractions and comparisons. (Jager, p. 1)  For instance, I was born in 1957.  If I ask
the computer to calculate how old I am today, it subtracts 57 from 96 and announces that I'm 39.
So far so good.  In the year 2000 however, the computer will subtract 57 from 00 and say that I
am -57 years old.  This error will affect any calculation that produces or uses time spans, such as
an interest calculation.  Bankers beware!!!
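     The failure is easy to reproduce.  Here is the two-digit arithmetic just described, written
out as a minimal sketch (in Python, purely for illustration):

    # The two-digit date bug in miniature: an age computed from two-digit
    # years works until the century rolls over, then goes negative.
    def age(birth_yy, current_yy):
        """Age as an early system computes it, from two-digit years."""
        return current_yy - birth_yy

    print(age(57, 96))  # in 1996: reports 39, as expected
    print(age(57, 0))   # in 2000: reports -57, the year 2000 error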
     Bringing the problem closer to the home-front, let's examine how the CAPS system is
going to be affected.  As CAPS is a multifaceted system, I will focus on one area in particular,
ISIS.  ISIS (Integrated Student Information System) has the ability to admit students, register
them, bill them, and maintain an academic history of each student (grades, transcripts, transfer
information, etc.) inside of one system.  This student information system has hundreds and
hundreds of references to dates within it's OS.  This is a COBOL system accessing a ADABAS
database. ADABAS is the file and file access method used by ISIS to store student records on
and retrieve them from. (Shufelt, p.1)  ADABAS has a set of rules for setting up keys to specify
which record to access and what type of action (read, write, delete) is to be performed.  The
dates will have to have centuries appended to them in order to remain correct.  Their (CAPS)
"fix" is to change the code in the Procedure Division (using 30 as the cutoff  >30 century = "19"
<30 century = "20").   In other words, if the year in question is greater than 30 (>30) then it can
be assumed that you are referring to a year in the 20th century and a "19" will be moved to the
century field.  If the year is less than 30 (<30) then it will move a "20" to the century field.  If
absolutely necessary, ISIS will add a field and a superdescriptor index in order to keep record
retrieval in the order that the program code expects.  The current compiler at CAPS will not
work beyond the year 2000 and will have to be replaced.  The "temporary fix" (kludge) just
discussed (<30 or >30) will allow ISIS to operate until the year 2030; they hope to have
replaced the current system by then.
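     Stated outside of COBOL, the windowing rule amounts to a few lines.  A sketch of the
cutoff logic just described (Python is used here purely for illustration):

    # The windowing rule: a two-digit year of 30 or above is assumed to
    # lie in the 1900s, anything below 30 in the 2000s.  The fix buys
    # time only until 2030, when the window itself breaks.
    def expand_year(yy):
        """Append a century to a two-digit year, using 30 as the cutoff."""
        return 1900 + yy if yy >= 30 else 2000 + yy

    assert expand_year(96) == 1996  # "96" is taken as 1996
    assert expand_year(5) == 2005   # "05" is taken as 2005
    assert expand_year(29) == 2029  # the last year the window handles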
     For those of you with your own home computers, let's get up close and personal.  This
problem will affect you as well!  Up to 80% of all personal PCs will fail when the year 2000
arrives.  More than 80,000,000 PCs will be shut down on December 31, 1999 with no problems.
On January 1, 2000, those same 80,000,000 PCs will go "belly up!" (Jager, p. 1)  These computers
will think the Berlin Wall is still standing and that Nixon was just elected President!  There is,
however, a test that you can perform in order to see if you are one of the "lucky" minority that
does not have a problem with the year 2000 affecting their PC. 
     First, set the date on your computer to December 31, 1999.  Next, set the time to 23:58
hours (if you use a 24-hour clock (Zulu time)) or 11:58 p.m. for 12-hour clocks.  Now, power off
the computer for at least 3 to 5 minutes.  Note: (It is appropriate at this time to utter whatever
mantras or religious chants you feel may be beneficial to your psyche.)  Next, power on the
computer, and check your time and date.  If it reads January 1, 2000 and about a minute or two
past midnight, breathe a sigh of relief; your OS is free from the year 2000 "bug."  If, however,
your computer gives you wrong information, such as my own PC did (March 12, 1945 at 10:22
a.m.), welcome to the overwhelming majority of the population that has been found "infected."
     All applications, from spreadsheets to e-mail, will be adversely affected.  What can you
do?  Maybe you can replace your computer with one that is Year 2000 compatible.  Is the
problem in the RTC (Real Time Clock), the BIOS, the OS?  Even if you fix the hardware
problem, is all the software you use going to make the "transition" safely or is it going to corrupt
as well?!
     The answers to these questions and others like them are not simple yeses or
nos.  For one thing, the "leading experts" in the computer world cannot agree that there is even a
problem, let alone discuss the magnitude of the impact it will have on society and the business
world.  CNN correspondent Jed Duvall illustrates another possible "problem" scenario.  Suppose
an individual on the East Coast, at 2 minutes after midnight in New York City on January 1,
2000, decides to mark the year and the century by calling a friend in California, where, because of
the time zone difference, it is still 1999.  With the current configurations in the phone company
computers, the New Yorker will be billed from 00 to 99, a phone call some 99 years long!!! (p. 1)
     What if you deposit $100 into a savings account that pays 5% interest annually?  The
following year you decide to close your account.  The bank computer figures your $100 was
there for one year at 5% interest, so you get $105 back, simple enough.  What happens, though, if
you don't take your money out before the year 2000?  The computer will re-do the calculation
exactly the same way.  Your money was in the bank from '95 to '00.  That's '00 minus '95, which
equals a negative 95 (-95).  That's -95 years at 5% interest.  That's a little bit more than $10,000,
and because of the minus sign, it's going to subtract that amount from your account.  You now
owe the bank $9,900.  Do I have your attention yet??!! 
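     For the curious, the $10,000 figure follows from ordinary compound interest applied over
the 95-year span the computer believes has elapsed.  A quick check of the arithmetic (annual
compounding assumed, since that is what produces a figure a little over $10,000):

    # The savings-account example, reproduced.  '00 minus '95 gives a span
    # of -95 years; compounding $100 at 5% annually over 95 years gives the
    # magnitude, and the minus sign turns the credit into a debit.
    principal, rate = 100.00, 0.05
    span = 0 - 95                       # what the computer computes
    interest = principal * (1 + rate) ** abs(span) - principal
    print(f"{span} years at 5%: ${interest:,.2f}")
    # The magnitude is roughly $10,200; applied with the minus sign, the
    # account ends up about $9,900 in the red instead of $5 ahead.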
     There is no industry that is immune to this problem; it is a cross-platform problem.  This
is a problem that will affect PCs, minis, and mainframes.  There are no "quick fixes" or what
everyone refers to as the "Silver Bullet."  The Silver Bullet is the terminology used to represent
the creation of an automatic fix for the Y2K problem.  There are two major problems with this
philosophy.  First, there are too many variables from hardware to software of different types to
think that a "cure-all" can be found that will create an "across-the-board" type of fix.  Secondly,
the mentality of the general population that there is such a "fix" or that one can be created rather
quickly and easily, is creating situations where people are putting off addressing the problem due
to reliance on the "cure-all."  The ". . . sure, someone will fix it" type of attitude pervades the
industry and the population, making this problem more serious than it already is. (Jager, p. 1)
People actually think that there is a program that you can start running on Friday night . . .
everybody goes home, and Monday morning the problem has been fixed.  Nobody has to do
anything else; the Y2K problem poses no more threat; it has been solved.  To quote Peter de
Jager,
"Such a tool, would be wonderful.
Such a tool, would be worth Billions of dollars.
Such a tool, is a naïve pipe dream.
Could someone come close?  Not very . . .
Could something reduce this problem by 90%?  I don't believe so.
Could it reduce the problem by 50%?  Possibly . . . but I still don't believe so.
Could it reduce the workload by 30%?  Quite likely."
                                   (p. 2)

Tools are available, but are only tools, not cures or quick fixes. 
     How will this affect society and the industry in 2000?  How stable will software design
companies be as more and more competitors offer huge "incentives" for people to "jump ship"
and come work for them on their problems!?  Cash flow problems will put people out of
business.  Computer programmers will make big bucks from now until 2000, as demand
increases for their expertise.  What about liability issues that arise because company "A" reneged
on a deal because of a computer glitch?  Sue! Sue! Sue!  What about ATM lockups, or credit card
failures, medical emergencies, downed phone systems?  This is a widespread scenario because
the Y2K problem will affect all these elements and more. 
     The dimensions of this challenge are apparent.  Given society's reliance on
computers, the failure of the systems to operate properly can mean anything from minor
inconveniences to major problems: Licenses and permits not issued, payroll and social service
checks not cut, personnel, medical and academic records malfunctioning, errors in banking and
finance, accounts not paid or received, inventory not maintained, weapon systems
malfunctioning (shudder!), constituent services not provided, and so on, and so on, and so on.
Still think you'll be unaffected . . . highly unlikely.  This problem will affect computations which
calculate age, sort by date, compare dates, or perform some other type of specialized task.  The
Gartner Group has made the following approximations:
At $450 to $600 per affected computer program, it is estimated that a medium-size company will
spend from $3.6 to $4.2 million to make the software conversion.  The cost per line of code is
estimated to be $.80 to $1.  VIASOFT has seen program conversion costs rise to $572 to $1,204.
ANDERSEN CONSULTING estimates that it will take more than 12,000 working days to
correct its existing applications.  YELLOW CORPORATION estimates it will spend
approximately 10,000 working days to make the change.  Estimates for the correction of this
problem in the United States alone run upwards of $50 to $75 billion.
                                   (ITAA, p. 1) 
     Is it possible to eliminate the problem?  Probably not, but we can make the transition
much smoother with cooperation and the right approach.  Companies and government agencies
must understand the nature of the problem.  Unfortunately, the kind of spending you find for new
software development will not be found in Y2K research.  Ignoring the obvious is not the way to
approach this problem.  To assume that the problem will be corrected when the system is
replaced can be a costly misjudgment.  Priorities change, development schedules slip, and
system components will be reused, causing the problem to be even more widespread. 
     Correcting the situation may not be so much difficult as time-consuming.  For
instance, the Social Security Administration estimates that it will spend 300 man-years finding
and correcting these date references in their information systems - systems representing a total of
30 million lines of code.  (ITAA, p. 3)  Common sense dictates that a comprehensive conversion
plan be developed to address the more immediate functions of an organization (such as invoicing,
paying benefits, collecting taxes, or other critical organizational functions), and continue from there to
finish addressing the less critical aspects of operation.  Some of the automated tools may help to
promote the "repair" of the systems, such as in:
* line by line impact analysis of all date references within a system, both in terms of data and
procedures;
* project cost estimating and modeling;
* identification and listing of affected locations;
* editing support to make the actual changes required;
* change management;
* and testing to verify and validate the changed system.
                                        (ITAA, p. 3)
Clock simulators can run a system with a simulated clock date, exposing applications that
abend or produce errors when the year 2000 arrives; date finders search across applications
for specific date criteria; and browsers can help users perform large-volume code inspection.
As good as all these "automated tools" are, there are NO "Silver Bullets" out there.  There
are no quick fixes.  It will take old-fashioned work-hours by personnel to make this
"rollover" smooth and efficient.
     Another area to look at is the implications for public health information.  Public health
information and surveillance at all levels of  local, state, federal, and international public health
are especially sensitive to and dependent upon dates for epidemiological (study of disease
occurrence, location, and duration) and health statistics reasons.  The date of events, duration
between events, and other calculations such as age of people are core epidemiologic and health
statistic requirements. (Seligman, p. 1)   Along with this, public health authorities are usually
dependent upon the primary data providers such as physician practices, laboratories, hospitals,
managed care organizations, and out-patient centers etc., as the source for original data upon
which public health decisions are based.  The CDC (Centers for Disease Control and Prevention)
for example, maintains over 100 public health surveillance systems all of which are dependent
upon external sources of data. (Issa, p. 5)  This basically means that it is not going to be
sufficient to make the internal systems compliant to the year 2000 in order to address all of the
ramifications of this issue.  To illustrate this point, consider the following scenario: in April
2000, a hospital sends an electronic surveillance record to the local or state health department
reporting the death of an individual who was born in the year "00"; is this going to be a case of
infant mortality or a geriatric case??
     Finally, let's look at one of the largest software manufacturing corporations and see what
the implications of the year 2000 will be for Microsoft products.  Microsoft states that Windows
95 and Windows NT are capable of supporting dates up until the year 2099.  They also make the
following statement, however:
"It is important to note that when short, assumed dates (mm/dd/yy) are entered, it is impossible
for the computer to tell the difference between a day in 1905 and 2005.  Microsoft's products,
that assume the year from these short dates, will be updated in 1997 to make it easier to assume
a 2000-based year.  As a result, Microsoft recommends that by the end of the century, all PC
software be upgraded to versions from 1997 or later."
                                   (Microsoft, p. 1)



PRODUCT NAME                            DATE LIMIT        DATE FORMAT
Microsoft Access 95                     1999              assumed "yy" dates
Microsoft Access 95                     9999              long dates ("yyyy")
Microsoft Access (next version)         2029              assumed "yy" dates
Microsoft Excel 95                      2019              assumed "yy" dates
Microsoft Excel 95                      2078              long dates ("yyyy")
Microsoft Excel (next version)          2029              assumed "yy" dates
Microsoft Excel (next version)          9999              long dates ("yyyy")
Microsoft Project 95                    2049              32 bits
Microsoft SQL Server                    9999              "datetime"
MS-DOS(r) file system (FAT16)           2099              16 bits
Visual C++(r) (4.x) runtime library     2036              32 bits
Visual FoxPro                           9999              long dates ("yyyy")
Windows 3.x file system (FAT16)         2099              16 bits
Windows 95 file system (FAT16)          2099              16 bits
Windows 95 file system (FAT32)          2108              32 bits
Windows 95 runtime library (WIN32)      2099              16 bits
Windows for Workgroups (FAT16)          2099              16 bits
Windows NT file system (FAT16)          2099              16 bits
Windows NT file system (NTFS)           future centuries  64 bits
Windows NT runtime library (WIN32)      2099              16 bits
Microsoft further states that its development tools and database management systems provide
the flexibility for the user to represent dates in many different ways.  Proper training of
developers to use date formats that accommodate the transition to the year 2000 is of the utmost
importance.  For informational purposes, I have included a chart that represents the more
popular Microsoft products, their date limits, and date formats.  (See the chart above.)
(Microsoft, p. 3)
     So . . . is everyone affected?  Apparently not.  The owners of St. John Valley
Communications, an Internet access provider based in Fort Kent, are eagerly awaiting the
coming of 2000.  Alan Susee and Dawn Martin had enough foresight to make sure, when they
purchased their equipment and related software, that it would all be year 2000 compliant.  It
can be done, as evidenced by this industrious couple.  The key is to get informed and to stay
informed.  Effect the changes you can now, and look to remedy the ones that you can't.  The
year 2000 will be a shocker and a thriller for many businesses, but St. John Valley
Communications seems to have it under control, holding its party hats in one hand and the
mouse in the other.
     As is clear from the information presented, Y2K is a problem to be reckoned with.  The
wide range of systems (OS) and software on the market lends credence to the idea that a
"silver bullet" fix is a pipe dream in the extreme.  This is not, however, an insurmountable
problem.  Efficient training and design are needed, as well as a multitude of man-hours, to
effect the "repairs" needed to quell the ramifications and repercussions that will inevitably
occur without intervention.  The sit-back-and-wait-for-a-cure-all approach will not work, and
it is hard to imagine that people (IS people especially), who have the advanced knowledge to
know better, would buy into this propaganda of slow technological death.  To misquote an old
adage, "The time for action was 10 years ago."  Whatever may happen, January 1, 2000 will be
a very interesting time for some, a relief for others . . . and a cyanide capsule for the
"slackers."  What will you do now that you are better "informed"?  Hopefully you will effect
the necessary "repairs" and pass the word to others who may be taking this a little too
lightly.  It may not be a matter of life or death, but it sure as heck could mean your job and
financial future.