University of Bielefeld - Faculty of Technology
Networks and Distributed Systems
Research group of Prof. Peter B. Ladkin, Ph.D.



The Risks Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 17, Issue 71

Tuesday 13 February 1996

Contents

o The measurement of risk: community measures vs scientific measures
Dave Shaw
o Those fun-loving guys at LANL.GOV
Simson L. Garfinkel
o More on WWW-Robot false hits...
Debora Weber-Wulff
o Re: Risks of web robots
Cameron Simpson
o Re: RISKS (...) of typing credit-card numbers
Olin Sibert
Mark Fisher
o Leahy to introduce bill to repeal CDA!
Stanton McCandlish
o Foreign `replies' cause anxiety
Timothy Mowchanuk
o Correction: Train operators get permission to use manual backup
Jonathan Kamens
o Re: Electronic Medical Records and Images
David Coburn
Allan Noordvyk
Tom Olin
o Info on RISKS (comp.risks)
---------------------------------------------

The measurement of risk: community measures vs scientific measures

Dave Shaw <daves@gsms01.alcatel.com.au>
Mon, 12 Feb 1996 17:20:42 +1100

This e-mail is a summary of, and quotes heavily from, an article entitled "Outrage at the unknown" that appeared in the *Sydney Morning Herald* on 8 February 1996. The article was written by Simon Chapman, an associate professor of Public Health and Community Medicine at the University of Sydney, Australia. Chapman himself quotes Peter Sandman of Rutgers University.

Recently (late 1995 I think) Telstra (the public telecommunications company of Australia) tried to install a new mobile-phone tower adjacent to a kindergarten in Harbord (a suburb of Sydney). A group of residents, with their young children carrying placards, chained themselves to Telstra's fence, and called in the media complaining about (among other things) the danger of electro-magnetic radiation (EMR).

Within two weeks, Telstra had lost face and switched off the installation. A fortnight later the same battle erupted in another suburb, and it's happened again in Harbord after the antennas were moved to a different site.

This continuing row is proving a feast for people in public health who study the public's reaction to health risks. These "risk analysts" address the questions of how people might be motivated to act on serious risks, and how they might be calmed down about risks that are trivial.

To a scientist, risk is the magnitude of the danger multiplied by the probability of exposure. When the furor over the towers erupted, Telstra called in the Australian Radiation Laboratory. Their EMR meters barely got the needle off zero. It would be hard to find a serious scientist who would call the towers a risk.
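
To see the scientist's measure in action, here is a toy calculation (mine, not Chapman's or Sandman's; all figures are invented) of risk as magnitude times probability:

    # Toy illustration of the scientific measure: risk = magnitude of
    # the danger x probability of exposure.  All figures are invented.
    def scientific_risk(magnitude, probability):
        return magnitude * probability

    # A familiar, chronic hazard and an exotic, dreaded one can score
    # identically to the scientist...
    chronic = scientific_risk(magnitude=1.0, probability=0.001)
    dreaded = scientific_risk(magnitude=1000.0, probability=0.000001)
    print(chronic, dreaded)   # both about 0.001
    # ...yet, as the seven characteristics below suggest, provoke
    # wildly different levels of community outrage.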

So why did the towers cause such outrage? Because this way of measuring risk is not how ordinary (i.e., non-technical) people think. Chapman lists Sandman's seven characteristics that determine an issue's "outrage valency" in a community.

Voluntary vs coerced risks: if something is voluntary it will be much more acceptable; skiing, for example, while dangerous, is voluntary and is not widely dreaded. A major theme of the Harbord protest is that Telstra (the "big company") rode roughshod over the Harbord residents (the "little people").

Natural vs industrial risks: there are countless examples of communities being upset by chemicals that have been "put there" by government/industry, while not showing the slightest concern for the natural supply of these chemicals. For example, anti-fluoridationists are often shocked to find that some areas contain naturally high levels of fluorine in the water. Certainly, Telstra's towers are quintessentially industrial.

Familiar vs exotic risks: mobile phones are new and exotic and hence attract outrage. The longstanding electricity substation, located at ground level opposite the kindergarten, which emits comparatively huge amounts of EMR, is familiar and attracts no outrage.

Not dreaded vs dreaded: diseases like cancer, AIDS, plague, and tuberculosis create a great deal more public concern than others, such as heart disease. The Telstra towers "might" cause cancer in children.

Chronic vs catastrophic: thousands are killed on the roads, but rarely in large groups. Plane accidents are much rarer and cause fewer deaths overall, but because a single accident can kill many people at once, air travel is much more widely feared than car travel. [Perhaps this explains (in part) the number of articles regarding air safety in comp.risks ;-) ] Telstra's towers "might" cut a swathe through the future health of the local children.

Knowable vs not-knowable: much to Telstra's dismay, the fact that the Australian Radiation Laboratory could barely detect any EMR from the towers actually seemed to increase outrage, not decrease it.

Morally irrelevant vs morally relevant: communities tend to have zero tolerance for outrage-inducing problems. Telstra tried a favourable comparison of the towers' EMR with that of TV sets, microwave ovens, sunlight, and fluorescent lamps, but to no avail.

Well, that ends the summary; what follows are my thoughts.

The Telstra towers meet all of Sandman's characteristics. I suspect the outrage they induced in the community would come as no surprise to him.

The RISK [sorry you had to wait so long for it :-( ] for those of us in the scientific/industrial community is that we fail to understand how the community measures risk. As Sandman says: "Experts focus on hazard and ignore outrage. ... The public focuses on outrage and ignores hazard."

Understanding how the community measures risk should enable us to develop more acceptable products, or at least make their introduction easier. While it is useful to have the public vet all products (to avoid the introduction of "nasties"), technology has the potential to benefit everyone, and unnecessary delay or fear reduces that benefit.

So when we see cases like the Telstra towers we should not dismiss the reaction as the fears of an ill-informed and untrained public, but rather see it as the normal response of a community that measures risks according to different criteria.

Hence, if Telstra had found a better way of measuring the risk of their towers than the Radiation Laboratory and its EMR meters, they might have avoided publicity like the angry Harbord father jabbing his finger at dour Telstra officials on TV and shouting: "There's no way I am going to let my daughter or any of my kids go to that place and be exposed to that sort of risk!"

David Shaw, Alcatel Australia Limited daves@gsms01.alcatel.com.au
---------------------------------------------

Those fun-loving guys at LANL.GOV

Simson L. Garfinkel <simsong@vineyard.net>
Sun, 11 Feb 1996 09:12:18 -0500

Reading about LANL's "(Click here to initiate automated `seek-and-destroy' against your site.)", I was reminded of something that happened to a friend of mine a few weeks ago.

It turns out that my friend was writing a web-walking robot, and it made the mistake of walking into the LANL site. The robot was running at the end of a 28.8K SLIP link, so it wasn't capable of issuing more than one request every 2-3 seconds.

Well, the folks at LANL must have some sort of monitoring software, because they noticed it immediately. They called up his Internet service provider, said that he was attacking a federal-interest computer, and threatened to take legal action against the ISP unless it revealed my friend's name and phone number.

Those fun-loving guys at LANL then called up my friend and left the following message on his answering machine:

"YOU ARE RUNNING A WEB ROBOT THAT IS ATTACKING A FEDERAL INTEREST SITE. UNLESS YOU TURN IT OFF WITHIN AN HOUR, WE WILL SUE YOU AND SHUT YOUR COMPANY DOWN."

The folks at LANL then called my friend's Internet service provider and threatened them with legal prosecution for violation of various computer crime statutes, unless the ISP cut off my friend's Internet connection.

This is really scary --- the thought that some government official can call up your ISP and, through a combination of threats and legal citations, have somebody's Internet feed immediately terminated. What about due process of law? What about innocent until proven guilty? What about having to go through the mere formality of obtaining a court injunction before having action such as this taken?

---------------------------------------------

More on WWW-Robot false hits...

Debora Weber-Wulff <weberwu@compute.tfh-berlin.de>
12 Feb 1996 14:02:09 GMT

A few weeks ago our WWW server was brought to its knees: we were being inundated with thousands of URL requests for one student's home page. The page didn't look that interesting, but we closed out the account, put out an all-points search for the student in question, and tried to figure out what the entire world wanted from him. Theories varied from a viral attack to wayward robots.

When he was dragged into the computer services center, he confessed to what he had done: he had installed one of those nifty counters to see how many times his page was read. Since he had no hits other than his own, he decided to include some good names on his page (a little racier than "sex 'n drugs 'n rock 'n roll", but this is a family publication) and then registered the page with "some cyberporn list"; he did not remember which one.

So apparently all the robots in the world found a new site that seemed to have racy new stuff on it, and it was duly indexed. There appear to be quite a number of people who either automate the search for sex pictures or check out what's new first thing in the morning; many are stupid enough to keep trying even when the server tells them that the link is no longer in operation. It took days for things to calm down. Needless to say, the student has had his net account revoked...

The moral of the story: don't attract robots with false claims; there are too many of them out there!

Debora Weber-Wulff, Technische Fachhochschule Berlin, FB Informatik, Luxemburger Str. 10, 13353 Berlin, Germany weberwu@tfh-berlin.de
---------------------------------------------

Re: Risks of web robots (Dellinger, RISKS-17.66,67)

Cameron Simpson <cameron@dap.CSIRO.AU>
Tue, 13 Feb 1996 16:36:23 +1100 (EDT)

There is a protocol called the Robot Exclusion Protocol designed explicitly to prevent robots from traversing on-the-fly datasets. It solves exactly the problem outlined above.

A moment's search through Yahoo reveals: http://info.webcrawler.com/mak/projects/robots/norobots.html entitled "A Standard for Robot Exclusion".
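
The standard works by having the server publish a /robots.txt file that well-behaved robots fetch and obey before walking the site. Here is a minimal sketch of the robot's side in (anachronistically modern) Python, whose standard library happens to include a parser for exactly this format; the site and robot names are hypothetical:

    import time
    import urllib.robotparser

    # A server wishing to keep robots out of on-the-fly data would
    # publish a /robots.txt such as:
    #
    #     User-agent: *
    #     Disallow: /cgi-bin/
    #
    # A polite robot checks that file before fetching anything.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://www.example.com/robots.txt")
    rp.read()   # fetch and parse the site's robots.txt

    for url in ("http://www.example.com/index.html",
                "http://www.example.com/cgi-bin/db-query"):
        if rp.can_fetch("MyRobot/1.0", url):
            print("allowed: ", url)   # safe to retrieve
        else:
            print("excluded:", url)   # skip it
        time.sleep(2)   # and rate-limit requests in any case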

- Cameron Simpson
cameron@dap.csiro.au, DoD#743
http://www.dap.csiro.au/~cameron/
---------------------------------------------

Re: RISKS (...) of typing credit-card numbers (Sibert, RISKS-17.69)

Olin Sibert <wos@oxford.com>
Mon, 12 Feb 96 15:44:33 EST

In response to messages sent to, but not included in, RISKS, Bob Dolan (Xerox Corp.) observes that countermeasures are a continuing battle:

From: Robert_Dolan.wbst129UL@xerox.com
Date: Thu, 8 Feb 1996 12:09:18 PST
Subject: RISKS (and lack thereof) of typing credit-card numbers

It appears to me that no matter what methods are employed to detect these types of intruder programs, new approaches will always develop. It is consistent with the cracker mentality.

Fortunately, the pace of development in malicious software seems to be fairly slow. Once an initial strain is reasonably well neutralized, later more virulent versions don't seem to propagate as fast.

Bob Dolan also identifies a fraud prevention problem:

The flaw in this system lies in the fact that anyone can use a credit card to order merchandise and have it shipped to any address. If the credit-card number implied a shipping address that matched the billing address, the numbers would lose their value to the crackers who steal them. I realize this does not work for all users. However, anyone who desires secure transactions would certainly want to use this type of credit card.

Actually, the credit-card companies do make fair use of delivery address in guarding against fraud; some reader here no doubt has had a friendly call from American Express asking whether they really meant to order 24 leather coats for delivery to downtown L.A. However, with digital goods, this is a more serious problem, as there's no physical "delivery address" that can be meaningfully validated. As with existing fraud detection schemes, there will always be a battle of wits going on, and some cost of fraud to be absorbed in the costs of doing business. Delivery address validation for electronically ordered physical goods is something the credit-card companies have well in hand today.

Charlie Abzug questions the feasibility of untraceable delivery:

From: Charles Abzug <cabzug@europa.umuc.edu>
Date: Wed, 07 Feb 1996 22:06:37 EST
Reply-to: CharlesAbzug@ACM.org
Subject: Capturing keystrokes of credit-card numbers

I would like to add my two cents' worth: the claim that the credit-card numbers would be sent out "sub rosa" and thereafter would be untraceable does not stand up to close examination. Once the Trojan Horse software is discovered on someone's computer, it can be decompiled or disassembled to yield full information on the e-mail address or other scheme by which the information is sent out of the compromised host computer to some unsuspected destination.

The problem is not determining how the data is delivered, but who the recipient might be. True, you can find out what the mechanism is by analysis and disassembly, but that doesn't tell you who actually gets the messages. Delivery to an anonymous recipient isn't hard -- one need merely take advantage of the Internet's broadcast media, such as mailing lists and newsgroups. Cryptographic encoding of the 60-odd bits that make up a credit-card number into, say, a Message-ID field in a mail message or news posting would be sufficient. It doesn't even have to be a particular newsgroup: the perpetrator can simply watch all newsgroups for a value with the right format, much as Mr. Kibo watches for his name in postings everywhere. Once the delivery is known to be taking place it can be stopped, but tracing it to a perpetrator will still be very hard. However, the perpetrator still has to have a way to USE the credit-card numbers thus obtained, and that is the point at which traditional fraud detection and law-enforcement measures become relevant.
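
The "60-odd bits" figure is easy to check, and the arithmetic (mine, not Sibert's) also shows why a Message-ID has ample room for such a payload:

    import math

    # Back-of-the-envelope check of the "60-odd bits" figure.
    bits_number = 16 * math.log2(10)   # ~53.2 bits: a 16-digit card number
    bits_expiry = 4 * math.log2(10)    # ~13.3 bits: a 4-digit expiry date
    total = bits_number + bits_expiry
    print(round(total, 1))             # 66.4 -- "60-odd bits" indeed

    # A Message-ID routinely carries a dozen or more hexadecimal
    # characters at 4 bits apiece, so 17 of them would suffice:
    print(math.ceil(total / 4))        # 17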


Re: RISKS (...) of typing credit-card numbers (Sibert, RISKS-17.69)

Fisher Mark <FisherM@is3.indy.tce.com>
Mon, 12 Feb 96 08:15:00 PST

Although Mr. Sibert understands the overall context of Nathaniel Borenstein's original message, there are a few points on which I would differ:

  1. Nathaniel's original message was going to state, "NEVER TYPE YOUR CREDIT
    CARD NUMBER INTO AN INSECURE COMPUTER". Because computer security is hard to understand for those who feel accomplished just knowing which icons are the minimize and maximize boxes (i.e., most computer users), he dropped the "INSECURE" from his message. Unfortunately, most computers now in use (which means PCs) run operating systems that are insecure (MacOS, DOS, Win3.1, Win95). Although Windows NT is presently vulnerable to this attack, a simple additional API call that retrieves a string from the user without its being visible to other programs or hooks would prevent it; NT does this kind of operation now for logins.

  2. Although most virus software is primitive ("Sturgeon's Law: 90% of
    everything is junk"), there is no reason for this situation to continue when such an immense financial gain can be made. How many of us _really_ know all the software we are running at any given moment? Win3.1, DOS, and (I believe) Win95 do not normally come with a program that can list all running programs -- and I suspect the same is true of MacOS.

  3. Communicating the credit-card data to another system could be done via
    Windows Sockets TCP/IP API calls and, if there is already an active connection, entirely silently in the face of all errors. Since "netstat"-like utilities can list all active TCP/IP connections (see the sketch after this list), determining whether there is already at least one connection ready to be piggybacked is a solvable problem. Even with the inevitable compatibility problems, a lot of credit cards could be stolen in just a week's time.

  4. Widespread distribution could likely be achieved by suitable social
    engineering -- adding the Trojan Horse to one of the interminable series of CompuServe CD-ROMs/diskettes I keep receiving, for example. I would suspect that most people who actually package software don't go through the kind of background checks I did to get my security clearance...
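
On point 3, note that the same connection listing is equally available to anyone auditing a machine. A sketch of how a "netstat"-like listing can be obtained programmatically, using the third-party psutil library as a modern stand-in (any equivalent utility would do):

    import psutil   # third-party library; a modern stand-in for the
                    # "netstat"-like utilities mentioned in point 3

    # List established TCP connections -- the same information netstat
    # prints.  (May require elevated privileges on some systems.)
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED:
            print(conn.laddr, "->", conn.raddr)
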
The real solutions to these combinations of attacks are:
  1. Run a secure OS; and/or
  2. Deal with vendors that do not use a self-verifying ID for purchases; and/or
  3. Use secure hardware to transmit/generate the user's purchase ID.
Mark Leighton Fisher, Thomson Consumer Electronics, Indianapolis, IN fisherm@indy.tce.com
---------------------------------------------

Leahy to introduce bill to repeal CDA!

Stanton McCandlish <mech@eff.org>
Thu, 8 Feb 1996 21:07:17 -0800 (PST)

[From (D-VT) Sen. Pat Leahy's WWW Pages, specifically at: http://www.senate.gov/~leahy/why.html It is advisable to send e-mail, and make phone calls, to his office in support of his upcoming legislation to repeal the Communications Decency Amendment to the Telecom Bill.]

U.S. Senator Patrick Leahy - why.html

I am turning my World Wide Web page black for 48 hours as part of the protest by my fellow Internet users against the online censorship provisions of the new telecommunications law. The online censorship provisions should be repealed, and I plan to introduce legislation to do just that.

I was one of five Senators who voted against this legislation, in large part because of what I believe are unconstitutional restrictions on what we can say online. I hope you will take the time to read my full statement on the telecommunications law, contained on my web page. While I do not condone the transmission of obscene material or child pornography on the Net, I believe the solution proposed in the telecommunications law will do much to harm the use and growth of the Net without combating the problem at which it is aimed.

Sen. Patrick Leahy (D-VT) senator_leahy@leahy.senate.gov
http://www.senate.gov/~leahy/
---------------------------------------------

Foreign `replies' cause anxiety

Timothy Mowchanuk <t.mowchanuk@qut.edu.au>
Sat, 10 Feb 1996 18:56:10 +0000

A bit of background first. There is a disorder called `high functioning autism'. These people are usually quite intelligent (one is even an Associate Professor), but have trouble with social and emotional relationships. They tend to interpret events and personal reactions poorly and very literally, and they can be very sensitive about this deficiency.

While I was monitoring an autism list, an interesting event occurred. There was an ongoing argument (not a flame war) between one member (a self-confessed high-functioning autistic) and the rest of the group. Suddenly this member sent a message to the group saying that she was unsubscribing. Apparently there had been some strong personal e-mail to this person, including *several messages in a foreign language*. This person was *quite* upset that someone was apparently sending `flames' in a foreign language.

After several days the list finally figured out what was happening. There was a subscriber in Brazil who had not checked his/her mailbox, and it was full. It turns out that the Brazilian majordomo was sending return messages to the effect that this mailbox was full. (I don't understand why this person, and only a few others, got the messages rather than the sending list server, so don't ask.)

This person is/was very sensitive to e-mail messages; in this case, messages in a foreign language caused a significant amount of emotional distress. There are a significant number of electronic emotional-support groups, and by definition these lists will contain a large percentage of people with emotional/mental problems. Is this a risk? If so, what is the solution?

Timothy Mowchanuk, Technology Education, Queensland University of Technology Brisbane, Queensland, Australia t.mowchanuk@qut.edu.au
---------------------------------------------

Correction: Train operators get permission to use manual backup (17.70)

Jonathan Kamens <jik@annex-1-slip-jik.cam.ov.com>
Fri, 9 Feb 1996 14:05:23 GMT

From: "Tom Comeau @ Space Telescope Science Institute" <tcomeau@stsci.edu>
Supervisors must approve taking a train out of manual mode,
                                               ^^^^^^^^^^^
I believe this should read "automatic mode".

Jonathan Kamens OpenVision Technologies, Inc. jik@cam.ov.com
[Darn, and I thought I had proofread very carefully. You're correct. (Though actually, supervisors until last Sunday had to approve both going into and out of manual mode, so I guess we're both right ;-) ) Tom Comeau]

---------------------------------------------

Re: Electronic Medical Records and Images (Brown, RISKS-17.70)

David Coburn <coburn@informix.com>
Mon, 12 Feb 96 14:29:48 PST

Jay Brown points out that

> One potential risk with this type of system would be the "configuration
> management" challenge - what image goes with what patient?

I would point out that this isn't too much of a risk. It is a classic database problem, and providers of these services are for the most part using commercial RDBMS solutions, including those of my employer, to overcome these sorts of risks.

I would actually see this as _reducing_ the risk of misidentifying the information, not increasing it. It is rather hard to drop a videotape or CD-ROM and have the pictures spill out on the floor, after all. Having worked many years in a hospital, I can ASSURE you that the same cannot be said of film.

David Coburn, Informix Software, Inc. coburn@informix.com ...uunet!infmx!coburn

Re: Electronic Medical Records and Images (Brown, RISKS-17.70)

Allan Noordvyk <allan@cetus.ali.bc.ca>
Fri, 9 Feb 96 08:06:06 -0800

As a software developer at a company which makes filmless medical imaging systems, I can say that this risk is well understood and has been dealt with by a variety of vendors for quite a number of years. In fact, the news item which prompted Mr. Brown's article describes the deployment of a system that is not terribly new or revolutionary, except that it deals with bandwidth-hungry full-motion cardiac video. Our company, for example, has customers who have been operating completely filmless for over four years.

In addition to strong database integrity measures and a robust fault-tolerant message model, we, like most PACS (Picture Archiving and Communications Systems) vendors, take great pains to embed patient and exam identifying information into each series of images in *addition* to the information stored in the indexing database. In the event of some sort of horrible disaster, a known good back-up of the database can be loaded in place of the damaged one, and the more recent information can then be recovered by a walk of the on-line and (if necessary) off-line archives.

Furthermore, the new DICOM (Digital Imaging and Communications in Medicine) standard, which different vendors use to network their systems, in fact makes this identifying information a required part of the header of each image itself. The overhead of doing so is minimal, given the size of each medical image (from 256 x 256 8-bit grayscale, through 640 x 484 24-bit color, to 2K x 2K 16-bit grayscale) and the fact that lossy image-compression algorithms generally can't be used if the image is to remain of diagnostic quality.
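
Because each image carries its own identification, software can cross-check images against the indexing database. A minimal sketch of such a check, using the (much later) third-party pydicom library; the database record layout is invented for the example:

    import pydicom   # third-party library postdating this posting; used
                     # here only to sketch the cross-check described above

    def verify_image(path, db_record):
        """Compare the patient identification embedded in a DICOM image
        header against the indexing database (record layout invented)."""
        ds = pydicom.dcmread(path)
        for attr, expected in (("PatientID", db_record["patient_id"]),
                               ("PatientName", db_record["patient_name"])):
            if str(getattr(ds, attr, "")) != str(expected):
                raise ValueError(f"{path}: {attr} disagrees with the database")

    # e.g. verify_image("exam001.dcm",
    #                   {"patient_id": "12345", "patient_name": "DOE^JOHN"})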

The risk of the wrong images being associated with a patient actually predates the computerization of radiology. Given the sheer number of sheets of film generated each day at a radiology department, and the resulting likelihood of sheets going astray, most modalities long ago adopted the practice of burning the name and identification of the patient *visibly* into each image itself. I expect this practice to continue in the digital age. Thus there is an immediately visible human check on the operation of any computerized system, in addition to checks of the "Why does Mr. Johnson have a uterus?" variety.

All in all, I would say that there is more of a risk of being wheeled into the wrong operating room than having a radiologist accidentally diagnose your case from someone else's images.

Allan Noordvyk, Software Artisan, ALI Technologies Richmond, Canada Voice: 604.279.5422 x 317 allan@ali.bc.ca Fax: 604.279.5468
[Postscript added later by Allan:]

While the original proposed risk (i.e., the wrong images being associated with a patient) is quite low (as discussed in my previous missive), there is another, slightly different risk which is more likely and has just as large a potential for harmful results to the patient. The risk is this: when a radiologist is looking at digitally transferred images of a patient's exam, can he or she be absolutely sure that *all* of the images captured are actually being shown? A fairly serious medical condition can often be evidenced in only a single image from a large set, and thus the omission of one or more images runs the risk of a misdiagnosis (e.g., a false negative).

The basic network dependability problem is complicated by the possible presence of WANs (e.g., we have a number of sites where outlying clinics feed images to expert radiologists at a central institution). These lower-bandwidth conduits can introduce significant lags in the transfer of large images, and unexpected communications difficulties (e.g., a momentary telco line outage) may occasionally increase these lags without the knowledge of the users of the system.

My company's system contains extensive interprocess status messages and sanity checks to ensure that this risk is virtually eliminated. However, this only works when our system handles both the capture and the viewing of the images at either end of the network, as the protocols are a careful proprietary design developed over the years. With the widespread adoption of DICOM as the lingua franca by which different vendors' medical imaging equipment talks amongst itself, we are seeing heterogeneous deployment of components in imaging networks. Unfortunately, the DICOM standard has, up to now, avoided a lot of the hard issues involved in handling this risk. Recent proposed additions to the standard address the problem somewhat, but these would be optional for vendors to implement. Thus there is always the risk of a minimalist DICOM implementation breaking the end-to-end robustness of the more complete systems with which it interacts.
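
The general shape of such an end-to-end check is simple to sketch (this is the idea only, not any vendor's or DICOM's actual protocol): the sending end reports how many images an exam contains, and the viewing end withholds the exam until every one has arrived:

    # Sketch of an end-to-end completeness check; names are invented.
    def exam_is_complete(expected_count, received_instance_numbers):
        expected = set(range(1, expected_count + 1))
        missing = expected - set(received_instance_numbers)
        if missing:
            print("WARNING: images missing or still in transit:",
                  sorted(missing))
            return False   # viewer should withhold the exam
        return True

    exam_is_complete(5, [1, 2, 3, 5])   # False -- warns that image 4 is missing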


Re: Electronic Medical Records and Images (Brown, RISKS-17.70)

Tom Olin <tro@partech.com>
Sun, 11 Feb 1996 09:31:59 +0500

I suppose it is worth repeating the obvious risks of life from time to time, but after a while, that approach turns RISKS into a pretty monotonous, boring, and ultimately worthless publication.

The precise same risk cited for electronically stored and transmitted medical images could just as easily be applied to any type of information stored on any type of medium. Consider a slight rewording of the last sentence quoted above: as any secretary knows, keeping track of a lot of different files in a filing cabinet, many of which have more than one version, is not a trivial task.

Or take it back to the first cave dweller who tried to keep track of his kids on stone tablets:

As any caveman knows, keeping track of a lot of different kids on stone tablets, many of which have more than one version, is not a trivial task.

I think one of the risks of RISKS is that too many people try too hard to find risks in every little thing. Perhaps you (the moderator) and I disagree on what risks are worth mentioning, but it seems that you usually keep things on the interesting side. Not this time.

Or maybe I'm just grumpy because I've been working too many weekends lately.

Tom Olin, PAR Government Systems Corporation, 8383 Seneca Turnpike, New Hartford, NY 13413-4991 +1 315 738 0600 x638 tom_olin@partech.com
---------------------------------------------



Report problems with the web pages to Lindsay.Marshall@newcastle.ac.uk.
This page was copied from: http://catless.ncl.ac.uk/Risks/17.71.html
Last modification on 1999-06-15
by Michael Blume