The State of the Art
The years between 1818 and the start of the Civil War were in many ways the darkest in the history of medicine in the United States. Doubts as to the validity of time-honored medical practices were growing. Licensing requirements fell victim to egalitarianism, and medical education became a profit-making venture. In any army, disease still caused more deaths than wounds, even during wartime. A few significant new developments, however, stood in stark contrast to the generally stagnant state of the art, and disillusionment with old ways was already beginning to stimulate a search for more scientific methods. Before the start of the Civil War in 1861, an increasing awareness of the need for research and critical observation was emphasizing the Army Medical Department's potential for major contributions to medical science.
Early in the nineteenth century, elaborate theoretical systems that used one or two principles to explain all disease and its cure began to lose popularity among orthodox physicians, who came to regard those who devised and supported such simplistic explanations as quacks and cultists. The public's distrust of conventional medicine led to increased success for homeopaths, hydropaths, Thomsonians, and eclectics, some of whom had their own medical schools and journals, and many of whom had a common distrust of the traditional massive dosing with mercurials.1
The decline of their faith in rigid theoretical systems left many orthodox doctors confused or uncertain. Fact did not always substantiate the newer theories concerning the causes, nature, and spread of disease, but suggestions that minuscule animals or fungi might cause disease were generally ridiculed. The possibility of insect vectors was rarely mentioned. Vigorous debate raged over the question of contagion, but the most popular theory on the cause of disease attributed much sickness to malignant fumes or miasmas arising from decaying vegetable or animal matter. Although some regarded fever as a symptom of disease, many still believed that conditions involving fever were all merely outward manifestations of a single underlying ailment whose symptoms varied with the climate and weather, the passage of time, the medicines and treatments used, or the condition of the patient.2

1Much of the background material in this chapter is based upon the works of Richard Harrison Shryock, especially Medicine and Society in America, 1660-1860 (Ithaca, N.Y., and London: Cornell Paperbacks, 1960), Medicine in America: Historical Essays (Baltimore: Johns Hopkins Press, 1966), and The Development of Modern Medicine: An Interpretation of the Social and Scientific Factors Involved (Philadelphia: University of Pennsylvania Press, 1936), and the works of Erwin Heinz Ackerknecht, including History and Geography of the Most Important Diseases (New York: Hafner Publishing Co., 1965), Malaria in the Upper Mississippi Valley, 1760-1900 (Baltimore: Johns Hopkins Press, 1945), and A Short History of Medicine (New York: Ronald Press Co., 1955). See also George H. Daniels, American Science in the Age of Jackson (New York: Columbia University Press, 1968); John Duffy, The Healers: The Rise of the Medical Establishment (New York: McGraw-Hill Book Co., 1976); and Martin Kaufman, American Medical Education: The Formative Years, 1765-1910 (Westport, Conn.: Greenwood Press, 1976).

OPHTHALMOSCOPE, as illustrated in Medical and Surgical Reporter 4 (1860):323. (Courtesy of National Library of Medicine.)
Their loss of faith left many physicians uncertain not only about the cause of disease but also about its treatment. Critics of prevalent practice inveighed against "the tendency to self-delusion, which seems inseparable from the practice of the art of healing" and also against the tendency "to afflict" the patient "with unnecessary practice" while ignoring the fact that "some diseases are controlled by nature alone." An increasing number of doctors were aware that overmedication could cause harm, but the pressure upon physicians to prescribe active treatment was great, partially because the patient himself might insist "on being poisoned." As a result, in the pre-Civil War period, practice did not quickly reflect the discoveries of researchers and statisticians.3

2Charles Caldwell, Analysis of Fever (Lexington, Ky.: Privately printed, 1825), p. 15; see also William N. Bispham, Malaria: Its Treatment and Prophylaxis (Baltimore: Williams & Wilkins Co., 1944); and Mark Frederick Boyd, ed., Malariology: A Comprehensive Survey of All Aspects of This Group of Diseases From a Global Standpoint, 2 vols. (Philadelphia: W. B. Saunders Co., 1949).
Nevertheless, as the nineteenth century progressed, many of the most prominent physicians in both the United States and Europe turned hopefully to the careful, detailed statistical studies being undertaken in the new and highly respected clinics of the Paris Medical School. Since Army surgeons were stationed at posts scattered over the entire nation and were subject to an overall discipline, their experience as a group was readily available to those gathering data for statistical studies in the United States. With the support and encouragement of civilian physicians, the Army Medical Department collected information concerning the influence of meteorological factors on health from post surgeons, who recorded the details of the weather, climate, and geographical features at the various forts and of the nature of the diseases affecting the men under their care. The science of gathering and using such data in the study of disease, however, was in its infancy. More effort continued to be expended in collecting this information than in forming valid conclusions based on it, and speculations based on personal impressions still played an important role in ostensibly scientific discussions.
ADVERTISEMENT FOR ACHROMATIC MICROSCOPES, 1859. (Courtesy of National Library of Medicine.)
The break with the old medical theories and their one- or two-cause explanations for disease placed an increased emphasis upon finding ways to differentiate between various diseases. The new instruments and techniques that physicians began to use added to their understanding of disease as well as to the accuracy of their diagnoses. The laryngoscope and ophthalmoscope were of recent introduction and had not become popular tools by the time of the Civil War, but the microscope, while still available only to a few doctors, permitted the study of cells whose very existence was unknown at the start of the century. The stethoscope was growing in popularity, while the thermometer came into its own in the great medical centers of Europe with the realization that fever was a symptom, not a disease. The old procedure of examining the urine to detect disease received new life with the growth of interest in pathological anatomy and the discovery of the relationship of albumin found in urine to disease. Following the lead of Paris-trained physicians, doctors attempted to correlate symptoms and test findings with abnormal changes found in the body's organs and tissues after death. New medicines were also used in the treatment of disease. Most important among them was quinine, long used in the treatment of fevers but, before 1820, always in the form of the bark of the cinchona tree. Not long after quinine sulfate was first extracted from cinchona bark, those using it began to comment on the ways in which its operation differed from that of the bark itself; reports maintained that quinine was more consistently retained in the stomach than bark and that it could be used in larger amounts, thus increasing its effectiveness. In the South, where malaria posed a particularly formidable problem, some physicians began to realize that as many as twenty or thirty grains of quinine at a time, a dose much larger than had been customary, were exceptionally effective. They also learned that administering this drug while the patient was in the midst of a fever paroxysm did not trigger the dire consequences that had been expected.

3Quotes from Oliver Wendell Holmes, "Currents and Countercurrents in Medical Science," and Jacob Bigelow, "Self-limited Diseases," both in Medical Communications of the Massachusetts Medical Society, 2d ser., 9 (1860):318, 319 and 5 (1836):338, 343, respectively; John Harley Warner, "The Nature-Trusting Heresy: American Physicians and the Concept of the Healing Power of Nature in the 1850s and 1860s," Perspectives in American History 11 (1977-78):299-300; see also "Review of A Practical Essay on Typhous Fever by Nathan Smith," New England Journal of Medicine and Surgery 13 (1824):409-10.

ADVERTISEMENT FOR STETHOSCOPE, 1858. (Courtesy of National Library of Medicine.)
As a result, in 1843, shortly after the end of the Second Seminole War in Florida, the surgeon general of the Army Medical Department sent a questionnaire to the surgeons serving under him inquiring about their experiences with quinine sulfate. He asked about the exact nature of the drug they were using, the impurities present in their supply, the size and timing of the doses administered, and any adverse effects that might have resulted from its use. Although the fifty-seven replies that he received contained many subjective impressions and conclusions, they revealed that, in the case of malaria, the use of a few large doses of quinine instead of many small ones was a relatively well-accepted practice in the Army. Most Army surgeons did not blame the increase in bowel disorders among soldiers on heavy dosing with quinine. In spite of its relative scarcity and the resultant high cost, they were using the new drug almost indiscriminately for many diseases other than malaria. Its ability to reduce fever and relieve physical discomfort may well have given it an undeserved reputation for aiding the treatment of typhus and typhoid fever, cholera, pneumonia, and other illnesses.4
In the early years of the nineteenth century, opium was prescribed mainly for dysentery or extreme pain, many physicians deploring its constipating effects and regarding it as harmful if used alone for a fever. Nevertheless, a wider use of narcotics in general followed the refinement of morphine and codeine from opium in the first half of the century. By the Civil War era, tincture of opium, or laudanum, was a familiar household remedy. The addiction that sometimes resulted was not always condemned, even when the addict was an Army officer on active duty who spoke openly of his problem. Doctors experimented with the dosages of narcotics in an effort to obtain a maximum of benefit with a minimum of unpleasant side effects. Both new opiates were, like opium itself, given by mouth. Doctors also made attempts to administer morphine by scraping away the top layer of the skin and applying the drug to the surface thus revealed. By midcentury, a few also administered morphine by injection, initially by cutting open a vein and using a regular syringe and, after 1860, with the newly developed hypodermic. Physicians favored the new approach because it enabled them to use drugs that otherwise would be adversely affected by their passage through the digestive system. By the 1850s doctors were also resorting increasingly to alcohol, long also used to dull pain, as a stimulant for the weak and debilitated.5
4Charles McCormick, "On the Use and Action of the Sulphate of Quinine in Large Doses," New Orleans Medical and Surgical Journal 2 (1845-46):290; Richard H. Coolidge, "On the Medical Topography and Diseases of Fort Gibson, Arkansas," Southern Medical Reports 2 (1850):451; Surgeon General's Office (SGO), Statistical Report on the Sickness and Mortality in the Army From January, 1839, to January, 1855 (Washington: A. O. P. Nicholson, 1856), pp. 644-90, passim (hereafter cited as Statistical Report, 1839-55); see also Dale C. Smith, "Quinine and Fever: The Development of the Effective Dosage," Journal of the History of Medicine and Allied Sciences 31 (1976):343-67.
As far as drugs were concerned, purges and emetics were still traditional standbys. The mercurials remained the favorite purges. Calomel, in particular, had enthusiastic proponents, but as time passed, a growing number of physicians expressed reservations concerning its use. Impressed by the havoc that mercury could wreak upon the body, some began to seek ways to mitigate its horrible side effects by reducing the dosage or abandoning the drug altogether. Some physicians also recognized the need to avoid constant irritation to the digestive system, and although emetics remained popular because of the belief that they relieved inflammation and fever, the profession in general used greater restraint in prescribing both emetics and purgatives.
Although faith in emetics and purgatives was still strong in 1818, venesection, once the mainstay of any course of treatment, had already become somewhat controversial. Physicians used it with increasing caution as a generalized treatment for disease. By 1825, even those who favored its use against fevers and inflammations questioned the desirability of venesection in hot climates and for old, weak patients. A growing number of doctors preferred local bleeding to venesection, using for the purpose leeches or vacuum cups placed over cuts made in the skin. A controversial but influential book published in 1830, Researches Principally Relative to the Morbid and Curative Effects of Loss of Blood by Marshall Hall, emphasized the unfavorable short-term reactions that could follow excessive blood loss from any cause and pointed out that very little had been done to determine the long-range effects of bleeding. In 1836 Pierre Louis, the father of the use of statistical studies in medicine and a teacher of American students in Paris, pointed out that bleeding as therapy for inflammatory disease, although possibly helpful, had been overrated. The use of venesection declined slowly throughout the 1840s and 1850s, but by the start of the Civil War, many surgeons resorted to bleeding only when the desired relaxation of the body could not be achieved by cathartics or changes in the diet.6
The principal diseases against which Army surgeons used their remedies in the period 1818 to 1861 were those then classified as fevers. Of these, malaria represented perhaps the greatest continuing health problem facing the Army Medical Department. Although it was receding from the eastern seaboard, malaria remained to varying degrees a threat throughout the entire nation, posing a surprisingly severe problem in the Midwest. It was generally regarded as a disease of the countryside, where mosquitoes could abound in numbers almost inconceivable today, but it appeared occasionally in large towns and cities as well. The fact that a so-called animalcule-bearing mosquito (Anopheles) rather than evil vapors spread the disease was not definitely established, however, until near the end of the century.
In the mid-nineteenth century, malaria in its various forms had many names. It appeared in the records of the time as, among other things, remittent, intermittent, marsh, miasmatic, and autumnal fever, as well as the ague. Classified as intermittent fevers were those that followed a regular pattern of fever alternating with a return to normal temperatures. The remittent form, in which the fever fluctuated but never quite reached normal, was presumably either typhoid fever or today's falciparum malaria, always dangerous if not treated promptly, capable of mimicking many other diseases, and most prevalent in warmer climates. Patients who were infected with malaria on separate occasions might experience symptoms that followed different cycles. Once infected by malaria, a soldier might find his health permanently threatened. The enlarged spleen and liver could be accompanied by a slight jaundice resembling that of yellow fever. A scarlatina-type rash could be present, or the patient's flushed face, reddened eyes, hot, dry skin, agonizing headaches, and aching muscles might suggest typhoid. Bronchitis or even pneumonia could develop to further complicate the diagnosis. The presence of diarrhea might be misleading; diarrhea accompanied by nausea, vomiting, and stomach pain mimicked cholera. The difficulty of distinguishing malaria from other diseases made credible a theory that malarial fevers were antagonistic to typhus and typhoid fever and that the prevalence of malaria and the typhoid-typhus types of fever varied inversely. Some authors applied this theory of antagonism also to tuberculosis, but it has since been noted that tuberculosis as a complication of malaria leads to a quick death.7

6James Jackson, Preface to Researches on the Effects of Bloodletting ..., by P. C. A. Louis, trans. C. G. Putnam (Boston: Hilliard, Gray & Co., 1836), p. vi; B. M. Randolph, "The Blood Letting Controversy in the Nineteenth Century," Annals of Medical History, n.s., 7 (1935):180-81; Marshall Hall, Researches Principally Relative to the Morbid and Curative Effects of Loss of Blood (Philadelphia: E. L. Carey & A. Hart, 1830).
Relapses were frequent in all forms of malaria, even after treatment, as is typical even today in patients treated exclusively with quinine, and could be triggered by surgery. At least one Army surgeon blamed relapses on the effects of drafts when the victim was hot, overexposure to the sun, overeating, and constipation. In patients weakened by poor diet, overwork, stress, or disease, moreover, any form of malaria could be fatal. Although spontaneous recovery was possible, when chronic, malaria could lead to varying degrees of anemia, even to permanently impaired health.8
Before quinine sulfate became available, the treatment of choice for malaria involved various combinations of bleeding, purging, and vomiting, in addition to the administration of cinchona bark. Some physicians believed that causing the patient to perspire heavily was also helpful. Cathartics and emetics continued to be administered with the quinine, but as quinine sulfate gained in popularity, bleeding declined. Opiates were used for pain. Army Assistant Surgeon Jonathan Letterman recorded that he believed ten to thirty drops of chloroform taken with a little water could be helpful in soothing the irritable stomach of the malaria patient, although he somewhat paradoxically warned against the use of chloroform when the patient's digestive system was inflamed.9

7Dale C. Smith, "The Rise and Fall of Typhomalarial Fever. 1: Origins," Journal of the History of Medicine and Allied Sciences 37 (1982):191; R. La Roche, Pneumonia: ... Including an Inquiry into the Existence and Morbid Agency of Malaria (Philadelphia: Blanchard & Lea, 1854), pp. 310, 314-15; Bispham, Malaria, p. 85; Charles Franklin Craig and Ernest Carroll Faust, Clinical Parasitology, 2d ed., rev. (Philadelphia: Lea & Febiger, 1940), p. 199; Maxwell Wintrobe et al., Harrison's Principles of Internal Medicine, 7th ed. (New York: McGraw-Hill Book Co., 1974), pp. 1020-21.
Unlike malaria, true typhus was apparently rarely seen in the United States, but the diagnosis was not uncommon because in the earliest decades of the nineteenth century, when typhoid fever was very common, the term typhus fever was used almost interchangeably with the term typhoid. Occasionally typhus fever was actually used to denote any type of fever believed to originate in miasmas. By 1837, research by William Gerhard, an American physician and former student of Louis' in Paris, had made it possible to distinguish between the two diseases. Even after this point, the cause of typhoid remained unclear and its treatment debatable, but the disease was probably widespread. The use of large doses of quinine was urged in 1851 in the belief that this drug had not been given a fair trial against typhoid, but it was suggested that the quinine be accompanied by opium and possibly a mercurial as well.10
A fourth disease described several times in Army records from the decades before the Civil War was dengue, spread by two different mosquitoes, one of which, the Aedes aegypti, might also carry yellow fever. In its initial stages, dengue was characterized by fever and nausea. Contemporary descriptions noted that, in this disease, vomiting generally began several days after the fever broke for the first time and that a rash then followed the nausea. The rash suggested scarlet fever rather than measles and consisted of "minute papulae of a florid red, slightly elevated, and distributed in irregularly shaped patches," appearing first on the face and trunk. After the rash was fully developed, the fever generally reappeared. The swollen glands and painful joints that also characterized dengue could persist for weeks or months. The disease tended to cause great suffering but few, if any, deaths. In the late 1820s, the recommended treatment called for antimonials in the earliest stage, bleeding in the so-called inflammatory stage, diaphoretics such as Dover's powder, which contained opium and ipecac, and of course, cathartics.11
9Thomas Neville Bonner, The Kansas Doctor: A Century of Pioneering (Lawrence: University of Kansas Press, 1959), p. 27; John Esten Cooke, A Treatise of Pathology and Therapeutics, 2 vols. (Lexington, Ky., 1828), 2:127-28; Statistical Report, 1839-55, pp. 77, 331-32; Thomas Lawson, quoted in Percy M. Ashburn, "One Century Ago," Military Surgeon 59 (July 1926):38; Bernard M. Byrne, Proceedings of a Court Martial for the Trial of Surgeon B. M. Byrne ... (Charleston, W. Va.: Walker, Evans & Co., 1859), p. 82.
Yellow fever was yet another mosquito-spread disease, dreaded in northern ports as well as in the South, where it was a relatively regular visitor along the coast. During this period, through their study of pathological anatomy, scientists first began to suspect that yellow fever was a distinct disease. Although it was not found endemically where the temperature dropped below 71° F., it could be imported into areas of moderate temperatures during the summer by ships from tropical ports carrying infected Aedes aegypti mosquitoes, which were capable of spreading the disease for up to sixty days. The role of the mosquito was suggested as early as 1848, but authorities generally blamed yellow fever on the combined effects of heat, moisture, and decaying animal and vegetable matter. Although they recognized the fact that few ever had the disease twice, they debated its contagiousness and the value of quarantines against it. Writers urged that treatment be started early in the course of an attack of yellow fever. In addition to the usual purges, doctors might use quinine to treat it. Some also advocated the raising of blisters on the spine and upper abdomen.
A number of less devastating diseases also bore the name cholera in the nineteenth century, but only the disease known as Asiatic cholera inspired more terror than yellow fever. At their worst, the symptoms of this disease were dreadful to witness, although they could actually vary considerably in intensity. A patient with a slight case might suffer only a diarrhea so mild that, without modern techniques, the diagnosis was uncertain. The victim of severe cholera, however, often died in hideous agony. His sunken face might be "rendered peculiarly ghastly by the removal of all the soft solids," according to a surgeon stationed at Fort Armstrong during the 1832 epidemic. His hands and feet were "bluish white, wrinkled as when long macerated in cold water," and his eyes "fallen to the bottom of their orbs," with a "glaring vitality, but without mobility." His blood was reduced by dehydration to a sludge his heart could not pump, his viscera were congested, his skin and limbs chilled and bloodless. Such a case might begin with a profuse but painless diarrhea that became progressively more liquid until it reached a stage where the stools resembled rice water. Vomiting, often projectile, followed and continued even if the patient took nothing by mouth. Muscle cramps appeared, first in the calves of the legs, then in the toes, spreading to the thighs and the arms and hands. Bowel movements might become involuntary. As the dehydration continued, the heart began to falter and circulatory failure followed. The flow of urine decreased or stopped. Exhaustion deepened; the eyes sank into the head and the skin shriveled. The temperature dropped below normal, the respiration became ever more rapid, and, in the final moments before death, the victim's dehydration might be so complete that even his diarrhea ceased at last.12
12Ltr, Samuel B. Smith to Capt Wilson, Niles' Weekly Register 43 (24 Nov 1832):203. Unless otherwise indicated, the background information on cholera in this chapter is based on Oscar Felsenfeld, The Cholera Problem (St. Louis: Warren H. Green, 1967); Ahmed Mohamed Kamal, Epidemiology of Communicable Diseases (Cairo: Anglo-Egyptian Bookshop, 1958); R. Pollitzer, Cholera, World Health Organization (WHO) Monograph Series, no. 43 (Geneva: WHO, 1959); Charles Rosenberg, The Cholera Years: The United States in 1832, 1849, and 1866 (Chicago: University of Chicago Press, 1962); and G. F. Pyle, "The Diffusion of Cholera in the United States in the Nineteenth Century," Geographical Analysis 1 (1969):59-75.
Modern methods of restoring body fluids make the chance of surviving cholera today excellent for those fortunate enough to live where they are available. Because of the present understanding of the way in which it is transmitted and because modern sanitation methods limit a population's exposure to cholera, deaths from this disease now usually occur only in areas that are both extremely backward and crowded. A water-borne bacterium (Vibrio cholerae) causes cholera, entering the body through the mouth, usually in drinking water, in fish or shellfish, or on vegetables and similar foods. The organisms may also be spread by flies or by healthy human carriers and can live in moist earth one to two weeks, in shallow well water three to fifteen days, and as long as four months in ice. Heat, sunlight, or an acid medium kills the vibrio quickly. The incubation period varies, but rarely exceeds five to ten days and averages about three.
Endemic in parts of India, cholera first brought widespread fear in 1814 to 1815, when it struck Asia, the Near East, and Russia, bringing with it a death rate of from 40 to 75 percent. It began to inspire terror on a worldwide scale only after 1830, reappearing periodically in the United States from 1832 until the time of the Civil War, and striking with particular frequency in the years 1848 to 1852. The overall death rate of the first epidemic in the 1830s varied between 10 and 15 percent, although during its initial appearance in New York City in 1832, the death rate among those contracting the disease reportedly approached a frightening 50 percent. In the 1849 epidemic the national mortality rate dropped below 10 percent.13
Before the Civil War, Asiatic cholera was most commonly blamed on miasmas or meteorological conditions, but even in the 1830s a few scientists theorized that small living creatures played a role in this disease. Very few physicians thought that it was actually contagious, but some believed that its spread must in some way be related to its human victims rather than to the atmosphere. Others, however, doubted that Asiatic cholera existed as an entity separate from ordinary digestive upsets with similar symptoms or from fevers such as typhus or malaria. Excessive nervous strain as well as intemperance, debauchery, and filth were believed to predispose to the disease.14
Steps suggested to protect the population from cholera varied. Cleanliness and temperance and the avoidance of dampness, chills, and fear were often urged upon the populace, as were regular exercise and a proper diet. The most popular approaches to the treatment of the disease resembled those followed for other illnesses. Many authorities undertook bleeding at the slightest sign of a fever and administered calomel and opium early in the course of an illness. Less popular remedies included cupping the temples and abdomen, giving ice internally, and administering enemas composed of such ingredients as hot water and brandy, which aided in warming the patient, or a tobacco infusion, which stimulated the circulation, eased cramps, and stopped vomiting and diarrhea. The possibility of giving intravenous treatment was considered during the 1832 epidemic, and the injection of a saline solution was actually tried. Although the new technique was used several times on moribund patients, whose demise it failed to prevent, several alleged successes were also reported.15

13John Duffy, "The History of Asiatic Cholera in the United States," Bulletin of the New York Academy of Medicine 47 (1971):1153-54, 1158; A. Laveran, Traité des maladies et épidémies des armées (Paris: G. Masson, 1875), pp. 700-701; James Joseph Walsh, History of Medicine in New York: Three Centuries of Medical Progress, 5 vols. (New York: National Americana Society, 1919), 1:109.
By 1849 cholera was once again sweeping the United States. In St. Louis it caused almost 68 percent of the deaths recorded in the period from 23 April to 6 August 1849. In 1852 it struck with force once again, only to fade out two years later. During this epidemic a British scientist, noting the relationship of cases of cholera to water drawn from a particular well, began to suspect that cholera was a water-borne disease, but animalcules and fungi were still generally dismissed as possible causes. In the United States, treatment during the second and third nationwide epidemics varied, but did not differ appreciably from that favored in 1832. The injection of a saline solution into a victim's veins remained experimental.16
Milder cases of cholera were sometimes mistaken for dysentery or diarrhea, both common ailments in the period preceding the Civil War. As with other diseases, the question of whether either was contagious continued to be a topic for debate. Little distinction was made between the two; an Army surgeon stated in 1852 that the principal difference between diarrhea and dysentery lay in the fact that the former affected the small intestine and the latter the large. Theories attributed the cause of this form of illness to bad air, inappropriate food, too much purging with harsh medicines, meteorological factors, especially high temperatures, and alcohol, when applied internally and too liberally. Both diarrhea and dysentery had been known as camp diseases for centuries, but it was only in 1859 that the ameba causing one form of dysentery was actually identified.17
The treatment for dysentery and diarrhea in many ways resembled that for other diseases: bleeding, emetics, purgatives, quinine, opium, blisters, and occasionally less common remedies such as combinations of alum with either white vitriol (zinc sulfate) or sulfate of iron, or of acetate of lead with opium and ipecac. At least one physician stated that there was no proof that mercurials were helpful in cases of dysentery. Doctors cast doubt on the possibility that a patient with either dysentery or diarrhea could be cured while remaining in a tropical climate; they suggested sea voyages, visits to mineral springs, and travel to a cooler climate for chronic cases.18
15Bernard M. Byrne, An Essay to Prove the Contagious Nature of Malignant Cholera ... (Baltimore: Cary, Hart & Co., 1833), pp. 128-29, 131, 138, 140-41, 143; Niles' Weekly Register 42 (18 Aug 1832):439; Elam Stimson, The Cholera Beacon ... (Dundas, Ontario: Hackstaff, 1835), pp. 25-26; "Saline Injections in Cholera," Medical Magazine 1 (1833):192-93; J. Mauran, "Case of Cholera Treated With Saline Injections With Observations Thereon," Medical Magazine 1 (1833):254-56; Drake, Practical Treatise, pp. 123-24, 144-47; Floyd T. Ferris, A Treatise on Epidemic Cholera ... (New York: Harper & Bros., 1835), pp. 35-36.
Scurvy had by no means been eliminated in the decades immediately preceding the Civil War, although in the Army, the incidence of the disease and the number of deaths varied considerably from year to year. In 1819, for example, the Army recorded 7 cases with no deaths, but in 1820 there were 734 cases with 190 deaths. In most years during the period from 1819 to 1838, no deaths were recorded. Factors that contributed to scurvy's appearance were recognized, but the unique importance of fruits and vegetables as both preventives and cures only gradually came to be understood. Such factors as fatigue, cold, anxiety, dampness, overwork, and the excessive use of salt continued to be implicated as causes rather than merely conditions that diminished ascorbic acid reserves. The disease appeared at widely differing posts from Florida to Texas and New Mexico to Wyoming, wherever climate and isolation made an adequate diet difficult to achieve.19
Mercury had particularly destructive effects upon patients suffering from scurvy, but physicians of the period were beginning to recognize this fact and to seek better methods of prevention and treatment of the disease. They still placed much unwarranted faith in the antiscorbutic effects of vinegar, but more correctly inferred that potatoes relieved the symptoms of scurvy. Many posts were relatively isolated, especially during the winter, and the difficulty of bringing in antiscorbutics caused surgeons to look for native plants with such properties. In the Southwest, medical officers discovered that the plant called the maguey proved even more effective in treating scurvy than lime juice, while poke-weed, prickly pear, and wild onions were also effective antiscorbutics easily found near many forts.20
Among other health problems afflicting soldiers in the 1818 to 1861 period were rheumatic and respiratory conditions and venereal diseases. The number of cases of venereal disease reported in this period dropped considerably. As a result, Army records devote little space to this problem, although it appears that, without the benefit of blood tests to prove them wrong, physicians assumed that they could easily cure syphilis in its earliest stages. They still relied heavily on mercury as a treatment not merely for syphilis but for all forms of venereal disease, applying it both internally and externally. A new refinement of an almost abandoned technique was introduced in this period, however, to treat respiratory disease. When a patient's pleural cavity filled with fluid, relief was provided by means of thoracentesis, a surgical puncture of the chest, which eliminated the need
19Forry, Climate, pp. 332-33; John Russell
Bartlett, Personal Narrative of Explorations and Incidents in Texas,
2 vols. (New York: Rio Grande Press, 1854), 1:237; Leroy R. Hafen and Francis
Marion Young, Fort Laramie and the Pageant of the West, 1834-1890 (Glendale,
Calif.: Arthur H. Clark Co., 1938), p. 157; M. L. Crimmins, ed., "Colonel
J. K. F. Mansfield's Report of the Inspection of the Department of Texas
in 1856," Southwestern Historical Quarterly 42 (1938-39):138;
John Norris, "The 'Scurvy Disposition': Heavy Exertion as an Exacerbating
Influence on Scurvy in Modern Times," Bulletin of the History of
Medicine 57 (1983):325, 338.
for an incision to achieve drainage. The pneumonia patient would probably still be bled, as would the patient with rheumatism or similar problems. The victim of gout might also be afflicted with blisters or dosed with colchicum, guaiacum, iodide of potash, opium, or turpentine. Another problem that was far from rare was ophthalmia, which might be treated with silver nitrate in the form of a solution called "lunar caustic in distilled water."21
A more unexpected health problem that also appeared in the Army during this period was lead poisoning. On one post the source of the problem was traced to drinking water collected after it had run down the surface of a lead-covered roof. At another, sheet lead covering kitchen equipment was held responsible, while at a third, white lead used to clean soldiers' gloves and boots had caused illness. Lead poisoning was treated with bleedings, purgatives, including calomel, and blisters, but some patients remained partly palsied despite all that could be done for them.22
Preventive medicine in the pre-Civil War period was a relatively unsophisticated art, and few American physicians displayed an interest in the growing public health movement in Europe. Outside the Army, attempts to prevent disease in the United States were handled on a local or even an individual basis and were generally ineffective. In both civilian and military life, emphasis was placed on the need for increased cleanliness and improved ventilation and water supplies.
Medical officers, like their civilian counterparts, also recognized intemperance as an important cause of ill health. Surgeons often worked closely with temperance societies to reduce the consumption of alcohol by soldiers. Post commanders tried varying approaches to the problem. In Florida in 1837, for example, double rations of sugar and coffee were substituted for liquor and, although it was impossible to prevent the soldier from purchasing liquor on his own, drunkenness was severely punished.23
More successful than the temperance movement was the Army's requirement that all soldiers be vaccinated. The Medical Department usually shipped material for vaccination in the form of crusts, the scabs from a cowpox pustule, although in 1849 the department also tried shipping the virus in liquid form in glass tubes. A departmental regulation issued in September 1818 specified that every soldier not already immune to smallpox be vaccinated at once and that surgeons keep the necessary material for vaccination on hand and in good condition. The regulation was rigorously enforced; in 1847 an Army surgeon maintained that he had never seen a single case of smallpox among regular troops. Epidemics among the civilian population proved, however, that vaccination did not always protect completely against this disease. The apparent failures of the procedures led to speculation that some vaccine had been improperly obtained, perhaps from an immature vesicle, or that failure to become immunized might be related in some way to the temperament of the patient. Also considered was the possibility that lymph obtained from a human patient who had himself been recently vaccinated was weaker than that obtained directly from the cow or that the passage of time might reduce the effectiveness of vaccination. Suggestions were made that the process be repeated at intervals, although at least one physician insisted that the effect of the initial vaccination was as strong after twenty-five years as after one. In the 1840s the surgeon general began urging that his surgeons time their vaccinations so that a continuous supply of fresh vaccine was always available, but requests for crusts continued to come into Washington from the field.24
21Quote from George R. Melin, "Report of Ocular Diseases at the General Hospital at Fort Pitt, From 21st Dec 1822 to 20 Dec 1823," American Medical Recorder 8 (1825):193; Owen H. Wangensteen and Sarah D. Wangensteen, The Rise of Surgery: From Empiric Craft to Scientific Discipline (Minneapolis: University of Minnesota Press, 1978), pp. 188-89; Edgar Erskine Hume, Victories of Army Medicine ... (Philadelphia: J. B. Lippincott Co., 1943), p. 121; Byrne, Court Martial, pp. 7-14, 25; Elisha Bartlett, "An Inquiry Into the Degree of Certainty in Medicine; and Into the Nature and Extent of Its Power Over Disease," in Brieger, Medical America, pp. 123-25; Ltr, Lawson to W. V. Wheaton (9 Sep 1843), RG 112, entry 2, 14:417.
Diet as a factor in the maintenance of health was a problem of particular concern to the Army and a matter to which Surgeon General Joseph Lovell devoted considerable thought from the beginning of his tenure in 1818. He was concerned with the state of the culinary art as practiced by the average soldier who, unaccustomed to cooking for himself, would eat salt pork raw and, when confronted with fresh meat, broil it "to a cinder." Instead of tea, the soldier "warms his stomach with a gill of diluted, corroding whiskey; and, after living a few weeks in this way, is sent to the surgeon, worn down with dysentery, diarrhea, and other complaints of the stomach and bowels." Lovell urged that the American soldier be given less meat and more bread. Believing that man was naturally herbivorous, he thought a vegetable diet best for the maintenance of health and vigor. He commented unhappily upon the fact that, although the Army allowed each man, as part of his regular rations, twenty ounces of beef, twelve ounces of pork, and a gill of whiskey a day, it made no allowances for vegetables.25
Lovell's specific recommendations for a soldier's diet included a warm beverage with morning and evening meals; peas, beans, and rice as dietary staples; kiln-dried cornmeal instead of flour; and as high a proportion as possible of fresh in preference to salted meat. Beer or molasses and water should be substituted for hard liquor, and a plentiful supply of pickles should be available to aid the digestion and correct any superabundance of bile. Ideally, a large proportion of the soldier's rations should be served in the form of soup. The need for fresh vegetables and ripe fruit in the diet was being emphasized by 1861, and the use of fresh fish and meat was still encouraged. By 1847 the notion that those sick with fever should be starved to reduce it and the accompanying tension had apparently been abandoned.26
Lovell's successor as surgeon general, Thomas Lawson, believed that most people in the United States ate more than was good for them, and certainly more than they needed. When asked to give his opinion on decreasing the size of the Army's food allowance, he suggested that the
24Harvey E. Brown, The Medical Department
of the United States Army From 1775 to 1873 (Washington: SGO, 1873),
p. 121; Porter, "Notes," p. 323; Ltrs, Lawson to Charles McCormick
and to Samuel De Camp (both 10 Jun 1843) and to other surgeons, RG 112,
entry 2, 14:309, 325, 331, 337; Henry Heiskell to Benjamin King (20 Aug
1849), RG 112, entry 2, 20:216. Immunity from vaccination varies from two
to ten years or occasionally longer, and revaccination extends the immunity
amount of food as well as the amount of clothing issued could well be reduced. The advice on diet given by Lawson and Lovell has a very modern sound, but it does not appear to have been acted upon.27
The sick formed a majority of the Army surgeon's patients, but even in peacetime he occasionally cared for an injured man. In either situation he was handicapped. He did not understand the cause of the infections that so often frustrated his attempts to save lives, and in the first half of the century, he had no way to completely deaden the suffering of his patients on the operating table. As a result, for the tormented and struggling patient as well as for the surgeon, speed was of the essence and elaborate or time consuming procedures were impossible. With its horrors and uncertain results, surgery before the era of anesthesia tended to be a procedure of last resort and a specialty to which few doctors cared to restrict themselves. Specialization within the field was only in its infancy at the outbreak of the Civil War. Nevertheless, since major surgery required skills that the quack could not pretend to possess, the surgeon's prestige did not suffer from the competition of cults.28
When ether was first formally and publicly used as an anesthetic in 1846, it was pronounced a success. Not all surgeons, however, recognized its introduction as a critical milestone in the history of medicine. Enthusiasm spread with time, but some surgeons remained skeptical and complained of its poisonous side effects. Chloroform was introduced not long after ether and quickly became popular, especially in the South. Those who preferred it to ether maintained that less of it was necessary to achieve the same result, that it acted more quickly, that its stage of excitation was shorter, that it was easier and more pleasant to take, and that its odor disappeared more quickly.29
The device initially used for the administration of ether was a somewhat complex glass container with two openings, one for the insertion of an ether-soaked sponge, the other for the admission of air. Opposite the hole for air was a glass tube fitted with a mouthpiece containing a valve that permitted the patient to draw ether vapors into his lungs from the inhaler and then exhale into the room. Authorities initially suggested that no one inhale ether vapors for longer than ten minutes at a time and, if it were to be readministered, that it be only after a five- to ten-minute rest. Some surgeons discovered that it was possible to keep a patient anesthetized as long as forty-five minutes, but experts believed that only those experienced with the procedure should attempt to do so. When the inventor of this device attempted to patent his creation, many physicians substituted a bell- or cone-shaped sponge over the patient's mouth and nose. Surgeons soon discovered that anesthetics were best administered by enclosing the sponge in a cone made of a folded towel or cut from cardboard, leather, or a similar material. Experts recommended that the patient be carefully observed while under anesthesia so that if he began to snore or his pulse became weak or slow, the sponge could be removed until he returned to a more normal state.30
27Ltr, Lawson to Sec War (5 May 1841), RG 112, entry 2, 12:441-43.
CIRCULAR AMPUTATION as illustrated in Richard Upton Piper's Operative Surgery (Boston: Ticknor, Reed and Fields, 1852). (Courtesy of National Library of Medicine.)
The Army used anesthetic agents only very briefly during the Mexican War (1846-1848), trying ether but quickly abandoning it as too dangerous. One physician serving with U.S. troops in Mexico even commented that "anesthetics poison the blood and depress the nervous system, and, in consequence, hemorrhage is more apt to occur, and union by adhesion is prevented." Ether was first used in the Army with official sanction in 1849 to detect a malingerer whose supposedly immobile knee flexed readily when he was anesthetized. The standard supply table allowed two pounds of sulfuric ether for every 100 men by 1849, at which time chloroform was also added to the list of regularly used drugs, and by 1852 medical officers at many and perhaps all posts administered anesthetics during operations.31
Reports of deaths resulting from anesthetics, especially from chloroform, began to appear early in the history of their use, although a physician who had administered one or the other to 600 patients recorded no fatalities. Chloroform was blamed for occasional sudden convulsions and deaths. Although no way had been found to prevent these tragedies, some surgeons still used this anesthetic at the time of the Civil War, and it became popular during that conflict because, unlike ether, it was not flammable and it put the patient under quickly and peacefully. Chloroform remained the anesthetic of choice for eye surgery until the introduction of cocaine as a local anesthetic after the Civil War. Sulfuric ether, however, was regarded with less suspicion. Some saw its odor as its only significant drawback, and poorly trained doctors tended to prefer it because of its supposedly foolproof character. Statistics on deaths associated with the use of ether became available at midcentury. They revealed that, whatever the reason, death rates during and after surgery without anesthesia were higher than those for operations conducted upon an etherized patient.32
30Henry Jacob Bigelow, Surgical Anesthesia, Addresses and Other Papers (Boston: Little, Brown & Co., 1900), p.
In the period before 1861, sulfuric ether and chloroform (or a combination of the two) were the two most popular anesthetics, but other forms were tried. Most often mentioned was a product misleadingly called chloric ether, which was actually chloroform dissolved in alcohol in various proportions. Many surgeons at Boston's Massachusetts General Hospital preferred chloric ether to either sulfuric ether or chloroform in its regular form. Also mentioned in the literature of the period were amylene, ethylene, and a product called "the vapor of Benzid."33
FLAP AMPUTATION as illustrated in Richard Upton Piper's Operative Surgery (Boston: Ticknor, Reed and Fields, 1852). (Courtesy of National Library of Medicine.)
Yet as late as 1860, some surgeons still had reservations that extended beyond chloroform to include all anesthetic agents. One such physician wrote that "if my patients will have an anesthetic agent [I] will give them as much good whiskey as they will drink," adding that if he had his way, he would have all anesthetic agents abandoned except for alcohol and opium. Other surgeons also relied heavily upon opium, believing that it had an added advantage over other drugs intended for the relief of pain because it prevented inflammation by lowering irritability.34
32Edward Treacher Collins, The History and Traditions of the Moorfields Eye Hospital ... (London: H. K. Lewis & Co., 1929), p. 85; "Influence of Etherization on the Mortality of Surgical Operations," Charleston Medical Journal and Review 3 (1848):464; Morton, Anaesthetic Agents, p. 15.
WOUND CLOSURE: METHODS AND MATERIALS as illustrated in Richard Upton Piper's Operative Surgery (Boston: Ticknor, Reed and Fields, 1852). (Courtesy of National Library of Medicine.)
Because the danger of infection was not reduced by the introduction of anesthesia, advances in the field of surgery remained modest. Despite the willingness of younger surgeons to be more adventurous, as late as 1861 severe compound fractures or extensive damage to a joint, blood vessels, or nerves still dictated immediate amputation, particularly for a soldier wounded in the field. Each instance of compound fracture was usually considered individually, however, before a decision for or against amputation was made. The mortality rate appeared to vary with the amount of limb removed.35
Other types of surgery performed before the Civil War included the removal of large portions of bone from shattered limbs, a procedure used in an attempt to avoid amputation; ear and eye surgery; restorative surgery, which included the rebuilding of damaged lips and cheeks and the relief of the binding of burn scars; and abdominal surgery. Successful abdominal surgery to deal with intussusception (prolapse of a length of the intestine into itself) and obstructions, though rare, was reported as early as the 1830s. Doctors often dosed the victim of a penetrating abdominal wound with opium to quiet the bowel and then left him to nature's mercy, but at least one surgeon, Samuel Gross, had experimented
34Joseph N. McDowell, "Report on the Improvements
in the Art and Science of Surgery in the Last Fifty Years," Transactions
of the American Medical Association 13 (1860):436-37, quote from p.
436; J. N. McDowell, "The Effects of Opium in the Treatment of Wounds;
or, the Use of Narcotics in Surgical Operations," Missouri Medical
and Surgical Journal 1 (1845-46):11-12, 16.
with animals in an attempt to determine the best way in which to handle such injuries. Venesection was customary for the victim of an abdominal wound, although this practice was losing popularity by 1861.36
By the outbreak of the Civil War, the instruments regularly used by the military surgeon might well include at least three tourniquets, two saws of different sizes, several bone-cutters, silver catheters of varying diameters, a stomach pump, and small and large syringes. Artery forceps were used to clamp off arteries during surgery, and the tenaculum, a hooklike instrument, played a progressively more important role in surgery as greater care came to be taken in isolating blood vessels. Surgeons were beginning to use catgut for ligatures because it could be left in place and silver wire for sutures and ligatures because it seemed to promote healing and lessen the danger of hemorrhage. A new splint had been developed in 1858 that proved simpler to use and less expensive to make than its predecessors.37
Although the role played by germs in wound infection was not generally recognized before the Civil War, a few surgeons appear to have at least suspected the role of heat in preventing the spread of infection and took the precaution of boiling their instruments. Chlorine vapors were sometimes used to disinfect the sickroom, despite the inconvenience caused by the need to remove the patient while the room was treated. Physicians used such antiseptics as chlorine or creosote in alcohol, iodine, or even whiskey to prevent the spread of infection, without necessarily understanding how these substances worked. It was more probable, however, that wounds would be dressed with warm water, to which might be added a small amount of spirits of camphor. Regardless of the precautions taken, infection was always expected.38
The ignorance of physicians concerning the cause of infection and the ways in which it could spread led to complications not often seen today. With hospitals both few and small, hospital gangrene, an infection probably caused by streptococci, was not common in the United States before the Civil War. Septicemia was frequent and the outcome generally fatal despite bleedings, blisterings, and purgings. Erysipelas was also often seen during the entire period, especially in hospitals. In 1818 its treatment called for purging and the application of warm, weak mineral water to the inflamed area, but by 1861, dilute tincture of iodine and soothing lotions were favored as local applications, and quinine, tincture of iron, and nutritious food and drink were also recommended.39
Surgeons of the pre-Civil War period were familiar with tetanus, or lockjaw, although they did not understand its cause, and the various approaches they suggested
36Samuel D. Gross, An Experimental and Critical
Inquiry Into the Nature and Treatment of Wounds of the Intestines, Illustrated
by Engravings (Louisville, Ky.: Prentice & Weissinger, 1843), pp.
12, 34-35, 37-38, 42-44, 46-47, 50-51, 73, 156.
to its treatment were ineffective. Opium was popular throughout the period, applied externally or given internally. Tincture of Cannabis, a blister along the spine, and croton oil were also considered for this purpose. The inhaling of chloroform was suggested, as was the use of morphine with camphor and antimony.40
Even before the wounded could be treated, however, Army surgeons faced the problems of moving them from battlefield to hospital. In an era when the ill and injured were for the most part cared for at home, the problem of transporting these patients was essentially a military one, one that the Army did not face on a large scale until the war with Mexico. During the Seminole wars in Florida, there was no organized system for moving the wounded to safe and convenient places where they could be treated. During the conflict in the swamps and jungles, wagons or horse litters were used to carry those unable to walk, with the latter favored for particularly difficult terrain. Since ready-made litters were not issued to Army units, it was necessary to construct them on the battlefield. The litter was generally slung between two horses; fashioning litters from blankets and rawhide required both a large number of horses and a considerable period of time, and the desirability of having litters made up before the need for them arose soon became obvious. The two-horse litter was used to move the wounded after clashes with the Indians in the West and on at least one occasion in the Mexican War, but wheeled vehicles were more popular for this purpose. Until the Civil War, however, efforts to have the Army obtain the proper equipment and animals for the evacuation of the wounded were in vain.41
Growing uncertainty about the common assumptions of accepted medical practice and a legacy of contempt for intellectualism bequeathed by Andrew Jackson and his followers accompanied a rapid westward dispersal of the population, leading to a deterioration in the quality of the medical education available in the United States. The number of medical schools increased in response to the need for physicians to care for an increasingly scattered population. Most schools were designed to bring in a profit, and since licensing requirements were generally absent, the competition among them resulted in a general lowering of the demands made upon their students. Although a few institutions stood out above the others, most were, as medical historian Richard Harrison Shryock concluded, "mediocre schools ministering to mediocre personnel." Even the best schools fell short of the standard set by the best European teaching institutions, and the conscientious physician who did not go abroad for his training might be forced to attempt to complete his professional education on his own initiative after he had received his medical degree. If he wished to expand the bounds of medical knowledge through research, he would find himself almost alone.42
40Hand, House Surgeon, p. 33; Chisolm,
Manual, pp. 230-31; Eugene Hillhouse Pool and Frank J. McGowan,
Surgery at the New York Hospital One Hundred Years Ago (New York:
Paul B. Hoeber, 1930), pp. 19-20.
The relatively low caliber of many of those admitted to medical schools as well as that of the schools themselves tended to give a bad reputation to all who attended them. A prominent physician of the time, Daniel Drake, maintained that students "too stupid for the bar, or too immoral for the Pulpit chose to study medicine." Another commented that because of the deficiencies in his premedical education, such a student might be "so ignorant and illiterate as to be unable to write his own language." Although proprietary medical schools often required a three-year apprenticeship in addition to attendance at two sessions of lectures, the knowledge gained during the apprenticeship varied widely, according to the interest and competence of the preceptor. Work as an apprentice might, nevertheless, offer the future physician his only opportunity for training at the patient's bedside. Clinical instruction did not receive the emphasis in the United States that it did in Europe; as late as 1849 only nine medical schools in this country made hospital attendance a requirement, while only sixteen schools so much as offered their students an opportunity for clinical training. Few students were likely ever to have seen a stethoscope, clinical thermometer, ophthalmoscope, or microscope, since most schools did not own such instruments. Medical college libraries tended to be quite small. The training offered in anatomy was likely to be weak since cadavers were difficult to obtain, and rumors of grave robbing could still precipitate scandals.43
In the decades before the Civil War, increasing numbers of American medical students went abroad for some part of their training, and efforts were made throughout the period to upgrade the training available in the United States. The U.S. Army, which after 1832 accepted physicians only after they had passed an examination, found that, as late as 1860, 50 percent of the applicants were unqualified, many being defective in the areas of preparatory education, practical anatomy, pathology, and clinical medicine. The fact that these candidates had failed the Army's examination did not, of course, prevent them from practicing medicine as civilians wherever they chose, no matter how slight their skill, since very few states retained licensing requirements.
When a physician concerned by the weakness of his own professional training wished to remedy its deficiencies by private study, he often turned to medical societies and professional publications. Although their number grew with time, medical societies during the first half of the century were few and were usually located in large cities. The number of medical publications available also grew, although some journals were very short-lived. By 1851 eighteen were being published within the United States. These journals, as well as books, many from abroad, enabled the American physician to keep abreast of medical developments in the United States and in Europe.44
Many Army surgeons in the period contributed to the medical literature. Their articles covered such disparate topics as cures for scurvy, medical topography, anatomy, erysipelas of the head, amputation at joints, inflammation and fever, and vaccination. Among books by Army surgeons were Samuel Forry's The Climate of the United States and Its Epidemic Influences and several volumes replete with detailed statistics and other data concerning meteorology and disease published by the Medical Department. Army surgeons were also caught up in what has been called a "compulsive obsession" to learn more about natural science and anthropology and worked to catalogue or record their observations and collections made on the nation's frontiers.45
43Quotes from Daniel Drake, Practical Essays on Medical Education, and the Medical Profession, in the United States (Cincinnati: Roff & Young, 1832), pp. 6-7, and F. Campbell Stewart, "An Anniversary Address Delivered Before the New York Medical and Surgical Society," in Brieger, Medical America, p. 66; Andrew Boardman, "An Essay on the Means of Improving Medical Education and Elevating Medical Character," in Brieger, Medical America, pp. 25-26, 34.
WILLIAM BEAUMONT. (Courtesy of National Library of Medicine.)
American physicians of the period preceding the Civil War devoted little time or interest to learning through research. One of the few exceptions to this rule was an Army surgeon, William Beaumont. Beaumont took advantage of the opportunity presented him in the form of a permanent fistula in the abdomen of a private patient, Alexis St. Martin, who was suffering from the effects of a shotgun wound. From 1822 to 1834, whenever his reluctant patient could be persuaded to remain with him long enough, the Army surgeon performed experiments aimed at discovering the secrets of human digestion.46
The book Beaumont published in 1833 on his experiments described how he inserted various morsels of food by means of a silk string into St. Martin's stomach and observed their fate in order to study the process of digestion. In one series of experiments Beaumont recorded the temperature of that organ during the process. It is interesting to note that he was intrigued by the possible effects of weather and soil conditions upon the digestion. He enrolled St. Martin in the Army as an orderly in 1832 to ease the burden of supporting him, but early in 1834 the unenthusiastic patient crossed the Canadian border in the general direction of his former home. On this occasion, Beaumont was able to persuade St. Martin to return to him so that he could resume his research.
45Edward Lurie, "An Interpretation of
Science in the Nineteenth Century: A Study in History and Historiography,"
Cahiers d'histoire mondiale 8 (1964-65):684.
BEAUMONT'S ILLUSTRATION OF ST. MARTIN'S WOUND as ordinarily seen. (Courtesy of National Library of Medicine.)
BEAUMONT'S ILLUSTRATION OF ST. MARTIN'S WOUND with the stomach prolapsed. (Courtesy of National Library of Medicine.)
Beaumont's efforts to learn the exact composition of the gastric juice he obtained from St. Martin's stomach, however, met with repeated failure. None of the experts he consulted could add much to what he was able to ascertain for himself. Even the great Swedish biochemist Jacob Berzelius disappointed Beaumont, blaming his inability to determine its exact chemical composition on the age and small quantity of the fluid sent to him. The final blow fell in the spring of 1834 when, after accompanying him on a series of visits to medical societies, Beaumont's sole source of the vital fluid left again on what was to have been a short visit to his family in Canada, but this time never returned. Repeated attempts over a period of many years to get him to do so failed, and thus Beaumont's experiments came to an end.
Beaumont was among a very few American physicians conducting research of real significance in the first half of the nineteenth century, and his work received favorable international attention. The attitude toward such endeavors prevalent in medical circles in the United States, however, was perhaps accurately reflected in the opinion of Lawson, the second surgeon general of the Army Medical Department, that Lovell's efforts to encourage and aid Beaumont's research constituted unwarranted favoritism.
In the period from 1818 to 1861, medicine and surgery were slowly approaching a new era. These four decades, however, were for many doctors a time of increased confusion and uncertainty. The gradual death of the dogmatic, unscientific medicine of centuries past brought both physicians and their patients face to face with the profession's ignorance and helplessness in the face of infection and disease. The use of anesthesia brought some relief from suffering in the 1850s, but it was not accompanied by a significant saving of lives. Although the life- and health-saving properties of quinine sulfate were recognized, they were not understood. The drive for the reform of medical education and the search for fact rather than theory would gain
strength in the generation after 1870 and lead to rapid progress in medical science. Only in the next century, however, would the United States assume a position of leadership in the medical world. For the patient, military or civilian, who was concerned with his health and his life, the period from 1818 to 1861 was a bleak one.