If you look back on your own days as a student, you can easily recall teachers who made learning fun. You wanted to be in their classes. Best of all, you were certain that those teachers liked you and missed you on those days when you had to be absent. If you succeed at nothing else in your career, you should aim to create that same effect on your students. What makes certain teachers gifted? Charisma. According to a standard definition, charisma is "a unique personal power belonging to those individuals who secure the allegiance of large numbers of people." Fortunately, classroom charisma is a learned trait. It is something you should work on from the first day you teach to your last.
How do you rate your classroom charisma? How do you create an environment in your classroom in which your students are made to feel that they are accepted and necessary to the proper functioning of the class? How do you create a nurturing and positive climate in your classroom? Be specific. For example: I smile and greet students as they come into the classroom. I use questioning techniques that engage all students. My lessons are packed with a variety of interesting activities. I dress professionally. I use techniques that appeal to all of my students' learning styles. I establish procedures and routines and stick to them.
Egypt's Mubarak -- from presidency to prison
REUTERS - Here is a look at Hosni Mubarak from the start of his presidency to when he was sentenced to life imprisonment on Saturday for his role in the killing of protesters during the uprising that toppled him from power last year:
October 6, 1981 - Vice-President Mubarak is thrust into office when Islamist radicals gun down President Anwar Sadat at a military parade. He is approved as president in a referendum in November.
June 26, 1995 - Gunmen attack Mubarak's car as he arrives at an African summit in Ethiopia's capital Addis Ababa. He escapes unhurt and returns to Egypt.
November 17, 1997 - Islamist militant group al-Gama'a al-Islamiya (Islamic Group) kills 58 tourists and four Egyptians at an ancient temple near Luxor. It is the most dramatic act in a 1990s rebellion by Islamists seeking to establish an Islamic state. The revolt is eventually crushed by state security.
March 2005 - Street protests by the Kefaya (Enough) Movement draw hundreds across Egypt to oppose a fifth six-year term for Mubarak or any attempt to install his son Gamal in his place.
May 11, 2005 - Parliament votes to change the constitution to allow contested presidential elections, dismissing opposition complaints that strict rules would prevent genuine competition.
September 27, 2005 - Mubarak is sworn in for a fifth consecutive term after winning the first multi-candidate presidential vote on September 7. Rights groups say the vote was marred by abuses. His closest rival, Ayman Nour, comes a distant second and is later jailed on charges he says are politically motivated.
December 8, 2005 - The Muslim Brotherhood wins 20 percent of the seats in parliament, its best showing. Rights groups say the vote was vitiated by irregularities designed to ensure Mubarak's ruling party retained a big majority.
April 2008 - Riots erupt in a number of cities over wages, rising prices and shortages of subsidised bread.
March 27, 2010 - Mubarak reassumes presidential powers after three weeks recovering from gallbladder surgery in Germany.
November 29, 2010 - A parliamentary election virtually eliminates opposition to Mubarak's ruling party in the assembly before a 2011 presidential vote. The Brotherhood and several other opposition groups boycott the parliamentary election.
January 25, 2011 - Anti-government protests begin across Egypt, driven by discontent over poverty, repression and corruption.
January 28 - Mubarak orders troops and tanks into cities overnight to quell the demonstrations.
January 31 - Egypt swears in a new government. New Vice-President Omar Suleiman says Mubarak has asked him to start dialogue with all political forces.
February 10 - Mubarak says national dialogue under way, transfers powers to vice-president but refuses to leave office immediately. Protesters in Cairo's Tahrir Square are enraged.
February 11 - Mubarak steps down and a military council takes control.
April 12 - Mubarak is hospitalised after being questioned by prosecutors. The next day, Egypt orders Mubarak detained for questioning on accusations he abused his power, embezzled funds and had protesters killed.
August 3 - Mubarak, wheeled into a courtroom cage on a bed to face trial, denies the charges against him. His two sons, Gamal and Alaa, also deny the charges. In subsequent sessions, Mubarak always appears on a hospital stretcher.
June 2, 2012 - Mubarak is sentenced to life in prison for his role in the killing of protesters and is flown from the Cairo court to Tora prison on the outskirts of the capital, where he is admitted to a hospital facility.
(Reporting by David Cutler, London Editorial Reference Unit)
A group of researchers at DTU Space is developing an observatory to be mounted on the International Space Station. Called ASIM, the observatory will, among other things, photograph giant lightning discharges above the clouds. The objective is to determine whether giant lightning discharges affect the Earth's climate.
The question is whether giant lightning discharges, which shoot up from the clouds towards space, are simply a spectacular natural phenomenon, or whether they alter the chemical composition of the atmosphere, affecting the Earth’s climate and the ozone layer.
In recent years, scientists at DTU Space have studied giant lightning using high-altitude mountain cameras. From time to time, the cameras have succeeded in capturing low-altitude lightning flashes which have shot up from a thundercloud. The International Space Station provides a clear view of these giant lightning discharges, and the opportunity to study them will be significantly improved with the introduction of the observatory.
The researchers will also use ASIM to study how natural and man-made events on the ground – such as hurricanes, dust storms, forest fires and volcanic eruptions – influence the atmosphere and climate.
HISTORY OF SCULPTURE
Chronological summary of major movements, styles, periods and artists that have contributed to the evolution and development of visual art.
STONE AGE ART (c.2,500,000 - 3,000 BCE)
Prehistoric art comes from three epochs of prehistory: Paleolithic, Mesolithic and Neolithic. The earliest recorded art is the Bhimbetka petroglyphs (a set of 10 cupules and an engraving or groove) found in a quartzite rock shelter known as Auditorium cave at Bhimbetka in central India, dating from at least 290,000 BCE. However, it may turn out to be much older (c.700,000 BCE). This primitive rock art was followed, no later than 250,000 BCE, by simple figurines (eg. Venus of Berekhat Ram [Golan Heights] and Venus of Tan-Tan [Morocco]), and from 80,000 BCE by the Blombos cave stone engravings and the cupules at the Dordogne rock shelter at La Ferrassie. Prehistoric culture and creativity are closely associated with brain size and efficiency, which impact directly on "higher" functions such as language, creative expression and ultimately aesthetics. Thus with the advent of "modern" Homo sapiens painters and sculptors (50,000 BCE onwards) such as Cro-Magnon Man and Grimaldi Man, we see a huge outburst of magnificent late Paleolithic sculpture and painting in France and the Iberian peninsula. This comprises a range of miniature obese Venus figurines (eg. the Venuses of Willendorf, Kostenky, Monpazier, Dolni Vestonice, Moravany, Brassempouy and Garagino, to name but a few), as well as mammoth ivory carvings found in the caves of Vogelherd and Hohle Fels in the Swabian Jura. However, the greatest art of prehistory is the cave painting at Chauvet, Lascaux and Altamira.
These murals were painted in caves reserved as a sort of prehistoric art gallery, where artists began to paint animals and hunting scenes, as well as a variety of abstract or symbolic drawings. In France, they include the monochrome Chauvet Cave pictures of animals and abstract drawings, the hand stencil art at Cosquer Cave, and the polychrome charcoal and ochre images at Pech-Merle and Lascaux. In Spain, they include the polychrome images of bison and deer at Altamira Cave. Outside Europe, major examples of rock art include: Ubirr Aboriginal artworks (from 30,000 BCE), the animal figure paintings in charcoal and ochre at the Apollo 11 Cave (from 25,500 BCE) in Namibia, the Bradshaw paintings (from 17,000 BCE) in Western Australia, and the hand stencil images at the Cuevas de las Manos (Cave of the Hands) (from 9500 BCE) in Argentina, among many others.
Against a background of a new climate, improved living conditions and consequent behaviour patterns, Mesolithic art gives more space to human figures, shows keener observation, and introduces greater narrative into its paintings. Also, because of the warmer weather, it moves from caves to outdoor sites in numerous locations across Europe, Asia, Africa, Australasia and the Americas. Mesolithic artworks include the bushman rock paintings in the Waterberg area of South Africa, the paintings in the Rock Shelters of Bhimbetka in India, and Australian Aboriginal art from Arnhem Land. It also features more 3-D art, including bas-reliefs and free-standing sculpture. Examples of the latter include the anthropomorphic figurines uncovered at Nevali Cori and Göbekli Tepe near Urfa in eastern Asia Minor, and the statues of Lepenski Vir (eg. The Fish God) in Serbia. Other examples of Mesolithic portable art include bracelets, painted pebbles and decorative drawings on functional objects, as well as the ceramic pottery of the Japanese Jomon culture. The greatest Mesolithic work of art is the sculpture "Thinker From Cernavoda" from Romania.
The more "settled" and populous Neolithic era saw a growth in crafts like pottery and weaving. This originated in Mesolithic times from about 9,000 BCE in the villages of southern Asia, after which it flourished along the Yellow and Yangtze river valleys in China (c.7,500 BCE) - see Neolithic Art in China - then in the fertile crescent of the Tigris and Euphrates river valleys in the Middle East (c.7,000), before spreading to India (c.5,000), Europe (c.4,000), China (3,500) and the Americas (c.2,500). Although most art remained functional in nature, there was a greater focus on ornamentation and decoration. For example, calligraphy - one of the great examples of Chinese art - first appears during this period. Neolithic art also features free standing sculpture, bronze statuettes (notably by the Indus Valley Civilization), primitive jewellery and decorative designs on a variety of artifacts. The most spectacular form of Neolithic art was architecture: featuring large-stone structures known as megaliths, ranging from the Egyptian pyramids, to the passage tombs of Northern Europe - such as Newgrange and Knowth in Ireland - and the assemblages of large upright stones (menhirs) such as those at the Stonehenge Stone Circle and Avebury Circle in England. (For more, please see: megalithic art.) However, the major medium of Neolithic art was ceramic pottery, the finest examples of which were produced around the region of Mesopotamia (see Mesopotamian art) and the eastern Mediterranean. Towards the close of this era, hieroglyphic writing systems appear in Sumer, heralding the end of prehistory.
The most famous examples of Bronze Age art appeared in the 'cradle of civilization' around the Mediterranean in the Near East, during the rise of Mesopotamia (present-day Iraq), Greece, Crete (Minoan civilization) and Egypt. The emergence of cities, the use of written languages and the development of more sophisticated tools led to the creation of a far wider range of monumental and portable artworks.
Egypt, arguably the greatest civilization in the history of ancient art, was the first culture to adopt a recognizable style of art. Egyptian painters depicted the head, legs and feet of their human subjects in profile, while portraying the eye, shoulders, arms and torso from the front. Other artistic conventions laid down how Gods, Pharaohs and ordinary people should be depicted, regulating such elements as size, colour and figurative position. A series of wonderful Egyptian encaustic wax paintings, known as the Fayum portraits, offer a fascinating glimpse of Hellenistic culture in Ancient Egypt. In addition, the unique style of Egyptian architecture featured a range of massive stone burial chambers, called pyramids. Egyptian expertise in stone had a huge impact on later Greek architecture. Famous Egyptian pyramids include the Step Pyramid of Djoser (c.2630 BCE) and the Great Pyramid at Giza (c.2550 BCE), also called the Pyramid of Khufu or 'Pyramid of Cheops'.
In Mesopotamia and Ancient Persia, Sumerians were developing their own unique building - an alternative form of stepped pyramid called a ziggurat. These were not burial chambers but man-made mountains designed to bring rulers and people closer to their Gods who according to legend lived high up in mountains to the east. Ziggurats were built from clay bricks, typically decorated with coloured glazes.
For most of Antiquity, the art of ancient Persia was closely intertwined with that of its neighbours, especially Mesopotamia (present-day Iraq), and influenced - and was influenced by - Greek art. Early Persian works of portable art feature the intricate ceramics from Susa and Persepolis (c.3000 BCE), but the two important periods of Persian art were the Achaemenid Era (c.550-330 BCE) - exemplified by the monumental palaces at Persepolis and Susa, decorated with sculpture, stone reliefs, and the famous "Frieze of Archers" (Louvre, Paris) created out of enamelled brick - and the Sassanid Era (226-650 CE) - noted for its highly decorative stone mosaics, gold and silver dishes, frescoes and illuminated manuscripts, as well as crafts like carpet-making and silk-weaving. But the greatest relics of Sassanian art are the rock sculptures carved out of steep limestone cliffs at Taq-i-Bustan, Shahpur, Naqsh-e Rostam and Naqsh-e Rajab.
The first important strand of Aegean art, created on Crete by the Minoans, was rooted in its palace architecture at Knossos, Phaestus, Akrotiri, Kato Zakros and Mallia, which were constructed using a combination of stone, mud-brick and plaster, and decorated with colourful murals and fresco pictures, portraying mythological animal symbols (eg. the bull) as well as a range of mythological narratives. Minoan art also features stone carvings (notably seal stones), and precious metalwork. The Minoan Protopalatial period (c.1700 BCE), which ended in a major earthquake, was followed by an even more ornate Neopalatial period (c.1700-1425 BCE), which witnessed the highpoint of the culture before being terminated by a second set of earthquakes in 1425. Minoan craftsmen are also noted for their ceramics and vase-painting, which featured a host of marine and maritime motifs. This focus on nature and events - instead of rulers and deities - is also evident in Minoan palace murals and sculptures.
Named after the metal which made it prosperous, the Bronze Age period witnessed a host of wonderful metalworks made from many different materials. This form of metallurgy is exemplified by two extraordinary masterpieces: the "Ram in the Thicket" (c.2500 BCE, British Museum, London), a small Iraqi sculpture made from gold-leaf, copper, lapis lazuli, and red limestone; and the "Maikop Gold Bull" (c.2500 BCE, Hermitage, St Petersburg), a miniature gold sculpture of the Maikop Culture, North Caucasus, Russia. The period also saw the emergence of Chinese bronzeworks (from c.1750 BCE), in the form of bronze plaques and sculptures often decorated with jade, from the Yellow River Basin of Henan Province, Central China.
For Bronze Age civilizations in the Americas, see: Pre-Columbian art, which covers the art and crafts of Mesoamerican and South American cultures.
The Iron Age saw a huge growth in artistic activity, especially in Greece and around the eastern Mediterranean. It coincided with the rise of Hellenic (Greek-influenced) culture.
Although Mycenae was an independent Greek city in the Greek Peloponnese, the term "Mycenean" culture is sometimes used to describe early Greek art as a whole during the late Bronze Age. Initially very much under the influence of Minoan culture, Mycenean art gradually achieved its own balance between the lively naturalism of Crete and the more formal artistic idiom of the mainland, as exemplified in its numerous tempera frescoes, sculpture, pottery, carved gemstones, jewellery, glass, ornaments and precious metalwork. Also, in contrast to the Minoan "maritime trading" culture, Myceneans were warriors, so their art was designed primarily to glorify their secular rulers. It included a number of tholos tombs filled with gold work, ornamental weapons and precious jewellery.
Ancient Greek art is traditionally divided into the following periods: (1) the Dark Ages (c.1100-900 BCE); (2) the Geometric Period (c.900-700 BCE); (3) the Oriental-Style Period (c.700-625 BCE); (4) the Archaic Period (c.625-500 BCE); (5) the Classical Period (c.500-323 BCE); and (6) the Hellenistic Period (c.323-100 BCE). Unfortunately, nearly all Greek painting and a huge proportion of Greek sculpture have been lost, leaving us with a collection of ruins or Roman copies. Greek architecture, too, is largely known to us through its ruins. Despite this tiny legacy, Greek artists remain highly revered, which demonstrates how truly advanced they were.
Like all craftsmen of the Mediterranean area, the ancient Greeks borrowed a number of important artistic techniques from their neighbours and trading partners. Even so, by the death of the Macedonian Emperor Alexander the Great in 323 BCE, Greek art was regarded in general as the finest ever made. Even the Romans - despite their awesome engineering and military skills - never quite overcame their sense of inferiority in the face of Greek craftsmanship, and (fortunately for us) copied Greek artworks assiduously. Seventeen centuries later, Greek architecture, sculptural reliefs, statues, and pottery would be rediscovered during the Italian Renaissance, and made the cornerstone of Western art for over 400 years.
Greek pottery developed much earlier than other art forms: by 3000 BCE the Peloponnese was already the leading pottery centre. Later, following the take-over of the Greek mainland by Indo-European tribes around 2100 BCE, a new form of pottery was introduced, known as Minyan Ware. It was the first Greek type to be made on a potter's wheel. Despite this, it was Minoan pottery on Crete - with its new dark-on-light style - that predominated during the 2nd Millennium BCE. Thereafter, however, Greek potters regained the initiative, introducing a series of dazzling innovations including: beautifully proportioned Geometric Style pottery (900-725), as well as Oriental (725-600), Black-Figure (600-480) and Red-Figure (530-480) styles. Famous Greek ceramicists include Exekias, Kleitias, Ergotimos, Nearchos, Lydos, the Amasis Painter, Andokides, Euthymides, and Sophilos (all Black-Figure), plus Douris, Brygos and Onesimos (Red-Figure).
In Etruria, Italy, the older Villanovan Culture gave way to Etruscan Civilization around 700 BCE. This reached its peak during the sixth century BCE, as its city-states gained control of central Italy. Like the Egyptians but unlike the Greeks, Etruscans believed in an after-life, thus tomb or funerary art was a characteristic feature of Etruscan culture. Etruscan artists were also renowned for their figurative sculpture, in stone, terracotta and bronze. Above all, Etruscan art is famous for its "joie de vivre", exemplified by its lively fresco mural painting, especially in the villas of the rich. In addition, the skill of Etruscan goldsmiths was highly prized throughout Italy and beyond. Etruscan culture, itself strongly influenced by Greek styles, had a marked impact on other cultures, notably the Hallstatt and La Tene styles of Celtic art. Etruscan culture declined from 396 BCE onwards, as its city-states were absorbed into the Roman Empire.
From about 600 BCE, migrating pagan tribes known as Celts established themselves astride the Upper Danube in central Europe. Celtic culture, based on exceptional trading skills and an early mastery of iron, facilitated their gradual expansion throughout Europe, and led to two styles of Celtic art whose artifacts are known to us through several key archeological sites in Switzerland and Austria. The two styles are Hallstatt (600-450) and La Tene (450-100). Both were exemplified by beautiful metalwork and complex linear designwork. Although by the early 1st Millennium CE most pagan Celtic artists had been fully absorbed into the Roman Empire, their traditions of spiral, zoomorphic, knotwork and interlace designs later resurfaced and flourished (600-1100 CE) in many forms of Hiberno-Saxon art (see below), such as illuminated Gospel manuscripts, religious metalwork, and High Cross sculpture. Famous examples of Celtic metalwork art include the Gundestrup Cauldron, the Petrie Crown and the Broighter gold torc.
Unlike their intellectual Greek neighbours, the Romans were primarily practical people with a natural affinity for engineering, military matters, and Empire building. Roman architecture was designed to awe, entertain and cater for a growing population, both in Italy and throughout their Empire. Thus Roman architectural achievements are exemplified by new drainage systems, aqueducts, bridges, public baths, sports facilities and amphitheatres (eg. the Colosseum, 72-80 CE), characterized by major advances in materials (eg. the invention of concrete) and in the construction of arches and roof domes. The latter not only allowed the roofing of larger buildings, but also gave the exterior far greater grandeur and majesty. All this revolutionized the Greek-dominated field of architecture, at least in form and size, if not in creativity, and provided endless opportunity for embellishment in the way of sculptural reliefs, statues, fresco murals, and mosaics. The most famous examples of Roman architecture include: the massive Colosseum, the Arch of Titus, and Trajan's Column.
If Roman architecture was uniquely grandiose, its paintings and sculptures continued to imitate the Greek style, except that their main purpose was the glorification of Rome's power and majesty. Early Roman art (c.200-27 BCE) was detailed, unidealized and realistic, while later Imperial styles (c.27 BCE - 200 CE) were more heroic. Mediocre painting flourished in the form of interior-design-standard fresco murals, while higher-quality panel painting was executed in tempera or in encaustic pigments. Roman sculpture, too, varied in quality: as well as tens of thousands of average-quality portrait busts of Emperors and other dignitaries, Roman sculptors also produced some marvellous historical relief sculptures, such as the spiral bas-relief sculpture on Trajan's Column, celebrating the Emperor's victory in the Dacian war.
Early Art From Around the World
Although the history of art is commonly seen as being mainly concerned with civilizations that derived from European and Chinese cultures, a significant amount of arts and crafts appeared from the earliest times around the periphery of the known world. For more about the history and artifacts of these cultures, see: Oceanic art (from the South Pacific and Australasia), African art (from all parts of the continent) and Tribal art (from Africa, the Pacific Islands, Indonesia, Burma, Australasia, North America, and Alaska).
Constantinople, Christianity and Byzantine Art
With the death of the Emperor Theodosius in 395 CE, the Roman empire was divided into two halves: a Western half based initially in Rome, until it was sacked in the 5th century CE, then Ravenna; and an eastern half located in the more secure city of Constantinople. At the same time, Christianity was made the exclusive official religion of the empire. These two political developments had a huge impact on the history of Western art. First, relocation to Constantinople helped to prolong Greco-Roman civilization and culture; second, the growth of Christianity led to an entirely new category of Christian art, which provided architects, painters, sculptors and other craftsmen with what became the dominant theme in the visual arts for the next 1,200 years. As well as prototype forms of early Christian art, much of which came from the catacombs, it also led directly to the emergence of Byzantine art. See also: Christian Art, Byzantine Period.
Byzantine art was almost entirely religious art, and centred around its Christian architecture. Masterpieces include the awesome Hagia Sophia (532-37) in Istanbul; the Church of St Sophia in Sofia, Bulgaria (527-65); and the Church of Hagia Sophia in Thessaloniki. Byzantine art also influenced the Ravenna mosaics in the Basilicas of Sant'Apollinare Nuovo, San Vitale, and Sant' Apollinare in Classe. Secular examples include: the Great Palace of Constantinople, and the Basilica Cistern. As well as new architectural techniques, such as the use of pendentives to spread the weight of the ceiling dome, thus permitting larger interiors, new decorative methods were introduced, like mosaics made from glass rather than stone. But the Eastern Orthodox brand of Christianity (unlike its counterpart in Rome) did not allow 3-D artworks like statues or high reliefs, believing they glorified the human aspect of the flesh rather than the divine nature of the spirit. Thus Byzantine art (eg. painting, mosaic works) developed a particular style of meaningful imagery (iconography) designed to present complex theology in a very simple way. For example, colours were used to express different ideas: gold represented Heaven; blue, the colour of human life; and so on.
After 600 CE, Byzantine architecture progressed through several periods - such as the Middle Period (c.600-1100) and the Comnenian and Paleologan periods (c.1100-1450) - gradually becoming more and more influenced by eastern traditions of construction and decoration. In Western Europe, Byzantine architecture was superseded by Romanesque and Gothic styles, while in the Near East it continued to have a significant influence on early Islamic architecture, as illustrated by the Umayyad Great Mosque of Damascus and the Dome of the Rock in Jerusalem.
In the absence of sculpture, Byzantine artists specialized in 2-D painting, becoming masters of panel-painting, including miniatures - notably icons - and manuscript illumination. Their works had a huge influence on artists throughout western and central Europe, as well as the Islamic countries of the Middle East.
Located on the remote periphery of Western Europe, Ireland remained free of interference from either Rome or the barbarians that followed. As a result, Irish Celtic art was neither displaced by Greek or Roman idioms, nor buried in the pagan Dark Ages. Furthermore, the Church was able to establish a relatively secure network of Irish monasteries, which rapidly became important centres of religious learning and scholarship, and gradually spread to the islands off Britain and to parts of Northern England. This monastic network soon became a major patron of the arts, attracting numerous scribes and painters into its scriptoriums to create a series of increasingly ornate illuminated gospel manuscripts. Examples include the Cathach of Colmcille (c.560), the Book of Dimma (c.625), the Durham Gospels (c.650), the Book of Durrow (c.670), and the supreme Book of Kells (also called the Book of Columba), considered to be the apogee of Western calligraphy. These gospel illuminations employed a range of historiated letters, rhombuses, crosses, trumpet ornaments, and pictures of birds and animals, occasionally taking up whole pages (carpet pages) of geometric or interlace patterns. The creative success of these decorated manuscripts was greatly enhanced by the availability of Celtic designs from jewellery and metalwork - produced for the Irish secular elite - and by increased cultural contacts with Anglo-Saxon craftsmen in England.
Another early Christian art form developed in Ireland was religious metalwork, exemplified by such masterpieces as the Tara Brooch, the Ardagh Chalice, the Derrynaflan Chalice, and the Moylough Belt Shrine, as well as processional crosses like the 8th/9th century Tully Lough Cross and the great 12th century Cross of Cong, commissioned by Turlough O'Connor. Finally, from the late eighth century, the Church began commissioning a number of large religious crosses decorated both with scenes from the bible and abstract interlace, knotwork and other Celtic-style patterns. Examples include Muiredach's Cross at Monasterboice, County Louth, and the Ahenny High Cross in Tipperary. These scripture high crosses flourished between 900 and 1100, although construction continued as late as the 15th century.
Unfortunately, with the advent of the Vikings (c.800-1000), the unique Irish contribution to Western Civilization in general, and Christianity in particular, began to fade, despite the stimulus of Viking art. Thereafter, Roman culture - driven by the Church of Rome - began to reassert itself across Europe.
A Word About Asian Art
In contrast to Christianity which permits figurative representation of Prophets, Saints and the Holy family, Islam forbids all forms of human iconography. Thus Islamic art focused instead on the development of complex geometric patterns, illuminated texts and calligraphy.
In Asia, the visual arts of India and Tibet incorporated the use of highly coloured figures (due to their wide range of pigments) and strong outlines. Painting in India was extremely diverse, as were materials (textiles, being more durable, often replaced paper) and size (Indian miniatures were a specialty). Chinese art included bronze sculpture, jade carving, Chinese pottery, calligraphic and brush painting, among other forms. In Japan, Buddhist temple art, Zen ink-painting, Yamato-e and Ukiyo-e woodblock prints were four of the main types of Japanese art.
On the continent, the revival of medieval Christian art began with Charlemagne, King of the Franks, who was crowned Holy Roman Emperor by Pope Leo III in 800. Charlemagne's court scriptoriums at Aachen produced a number of magnificent illuminated Christian texts, such as the Godscalc Evangelistary, the Lorsch Gospels and the Gospels of St Medard of Soissons. Ironically, his major architectural work - the Palatine Chapel in Aachen (c.800) - was influenced not by St Peter's or other churches in Rome, but by the Byzantine-style Basilica of San Vitale in Ravenna. The Carolingian empire rapidly dissolved, but Carolingian Art marked an important first step in the revitalization of European culture. Furthermore, many of the Romanesque and Gothic churches were built on the foundations of Carolingian architecture. Charlemagne's early Romanesque architectural achievements were continued by the Holy Roman Emperors Otto I-III, in a style known as Ottonian Art, which morphed into the fully fledged "Romanesque". (In England and Ireland, the Romanesque style is usually called Norman architecture.)
The Church Invests in Art to Convey Its Message
The spread of Romanesque art in the 11th century coincided with the renewed assertiveness of Roman Christianity, whose influence on secular authorities led to the Christian re-conquest of Spain (c.1031) as well as the Crusade to free the Holy Land from the grip of Islam. The success of the Crusaders and their acquisition of Holy Relics triggered a wave of new cathedrals across Europe. In addition to its influence over international politics, Rome exercised growing power via its network of Bishops and its links with Monastic orders such as the Benedictines, the Cistercians, the Carthusians and the Augustinian Canons. From these monasteries, its officials exercised growing administrative power over the local population, notably the power to collect tax revenues, which it devoted to religious works, particularly the building of cathedrals (encompassing sculpture and metalwork, as well as architecture), illuminated gospel manuscripts, and cultural scholarship - a process exemplified by the powerful Benedictine monastery at Cluny in Burgundy.
Romanesque Architecture (c.1000-1200)
Although based on Greek and Roman Antiquity, Romanesque architecture displayed neither the creativity of the Greeks, nor the engineering skill of the Romans. Romanesque buildings employed thick walls, round arches, piers, columns, groin vaults, narrow slit-windows, large towers and decorative arcading. The basic load of the building was carried not by its arches or columns but by its massive walls. And its roofs, vaults and buttresses were relatively primitive in comparison with later styles. Above all, interiors were dim and comparatively hemmed in with heavy stone walls. Even so, Romanesque architecture did reintroduce two important forms of fine art: sculpture (which had been in abeyance since the fall of Rome), and stained glass, albeit on a minor scale. (For details of sculptors, painters, and architects from the Middle Ages, see: Medieval Artists.)
Largely financed by monastic orders and local bishops, Gothic architecture exploited a number of technical advances in pointed arches and other design factors, in order to awe, inspire and educate the masses. Thus, out went the massively thick walls, small windows and dim interiors; in came soaring ceilings ("reaching to heaven"), thin walls and stained glass windows. This transformed the interior of many cathedrals into inspirational sanctuaries, where illiterate congregations could see the story of the bible illustrated in the beautiful stained glass art of the huge windows. Indeed, the Gothic cathedral was seen by architects as representing the universe in miniature. Almost every feature was designed to convey a theological message: namely, the awesome glory of God, and the ordered nature of his universe. Religious Gothic art - that is, architecture, relief sculpture and statuary - is best exemplified by the cathedrals of Northern France, notably Notre Dame de Paris, Reims and Chartres, as well as Cologne Cathedral, St Stephen's Cathedral Vienna and, in England, Westminster Abbey and York Minster.
Strongly influenced by International Gothic, the European revival of fine art between roughly 1300 and 1600, popularly known as "the Renaissance", was a unique and (in many respects) inexplicable phenomenon, not least because of (1) the Black Death plague (1346), which wiped out one third of the European population; (2) the Hundred Years War between England and France (1337-1453); and (3) the Reformation (c.1520) - none of which was conducive to the development of the visual arts. Fortunately, certain factors in the Renaissance heartland of Florence and Rome - notably the energy and huge wealth of the Florentine Medici family, and the Papal ambitions of Pope Sixtus IV (1471-84), Pope Julius II (1503-13), Pope Leo X (1513-21) and Pope Paul III (1534-45) - succeeded in overcoming all natural obstacles, even if the Church was almost bankrupted in the process.
Renaissance art was founded on a new appreciation of the arts of Classical Antiquity, a belief in the nobility of Man, as well as artistic advances in both linear perspective and realism. It evolved in three main Italian cities: first Florence, then Rome, and lastly Venice. Renaissance chronology is usually divided into the Proto-Renaissance, the Early Renaissance, the High Renaissance, and Mannerism.
Renaissance architecture employed precepts derived from ancient Greece and Rome, but kept many modern features of Byzantine and Gothic invention, such as domes and towers. Important architects included: Donato Bramante (1444-1514), the greatest exponent of High Renaissance architecture; Baldassare Peruzzi (1481-1536), an important architect and interior designer; Michele Sanmicheli (1484-1559), the leading pupil of Bramante; Jacopo Sansovino (1486-1570), the most celebrated Venetian architect; Giulio Romano (1499-1546), the chief practitioner of Italian Late Renaissance-style building design; Andrea Palladio (1508-1580), an influential theorist; and of course Michelangelo himself, who helped to design the dome for St Peter's Basilica in Rome.
Among the greatest sculptors of the Northern Renaissance were: the German limewood sculptor Tilman Riemenschneider (1460-1531), noted for his reliefs and freestanding wood sculpture; and the wood-carver Veit Stoss (1450-1533) noted for his delicate altarpieces.
It was during this period that the Catholic Counter-Reformation got under way in an attempt to attract the masses away from Protestantism. Renewed patronage of the visual arts and architecture was a key feature of this propaganda campaign, and led to a grander, more theatrical style in both areas. This new style, known as Baroque art, was effectively the highpoint of dramatic Mannerism.
Baroque architecture took full advantage of the theatrical potential of the urban landscape, exemplified by Saint Peter's Square (1656-67) in Rome, in front of the domed St Peter's Basilica. Its architect, Gianlorenzo Bernini (1598-1680) employed a widening series of colonnades in the approach to the cathedral, conveying the impression to visitors that they are being embraced by the arms of the Catholic Church. The entire approach is constructed on a gigantic scale, to induce feelings of awe.
In painting, the greatest exponent of Catholic Counter-Reformation art was Peter Paul Rubens (1577-1640) - "the Prince of painters and the painter of Princes". Other leading Catholic artists included Diego Velazquez (1599-1660), Francisco Zurbaran (1598-1664) and Nicolas Poussin (1594-1665).
In Protestant Northern Europe, the Baroque era was marked by the flowering of Dutch Realist painting, a style uniquely suited to the new bourgeois patrons of small-scale interiors, genre-paintings, portraits, landscapes and still lifes. Several schools of Dutch Realism sprang up including those of Delft, Utrecht, and Leiden. Leading members included the two immortals Rembrandt (1606-1669) and Jan Vermeer (1632-1675), as well as Frans Snyders (1579-1657), Frans Hals (1581-1666), Adriaen Brouwer (1605-38), Jan Davidsz de Heem (1606-84), Adriaen van Ostade (1610-85), David Teniers the Younger (1610-90), Gerard Terborch (1617-81), Jan Steen (1626-79), Pieter de Hooch (1629-83), and the landscape painters Aelbert Cuyp (1620-91), Jacob van Ruisdael (1628-82) and Meyndert Hobbema (1638-1709), among others.
The new style of decorative art known as Rococo impacted most on interior design, although architecture, painting and sculpture were also affected. Essentially a reaction against the seriousness of the Baroque, Rococo was a light-hearted, almost whimsical style which grew up in the French court at the Palace of Versailles before spreading across Europe. Rococo designers employed the full gamut of plasterwork, murals, tapestries, furniture, mirrors, porcelain, silks and other embellishments to give the householder a complete aesthetic experience. In painting, the Rococo style was championed by the French artists Watteau (1684-1721), Fragonard (1732-1806), and Boucher (1703-70). But the greatest works were produced by the Venetian Giambattista Tiepolo (1696-1770), whose fantastic wall and ceiling fresco paintings took Rococo to new heights. See in particular the renaissance of French Decorative Art (1640-1792), created by French Designers especially in the form of French Furniture, at Versailles and other Royal Chateaux, in the styles of Louis Quatorze (XIV), Louis Quinze (XV) and Louis Seize (XVI). As it was, Rococo symbolized the decadent indolence and degeneracy of the French aristocracy. Because of this, it was swept away by the French Revolution, which ushered in the new, sterner Neoclassicism, more in keeping with the Age of Enlightenment and Reason.
In architecture, Neoclassicism derived from the more restrained "classical" forms of Baroque practised in England by Sir Christopher Wren (1632-1723), who designed St Paul's Cathedral. Yet another return to the Classical Orders of Greco-Roman Antiquity, the style was characterized by monumental structures, supported by rows of columns, and topped with classical Renaissance domes. Employing innovations like layered cupolas, it lent added grandeur to palaces, churches, and other public structures. Famous Neoclassical buildings include: the Pantheon (Paris) designed by Jacques Germain Soufflot (1756-97), the Arc de Triomphe (Paris) designed by Jean Chalgrin, the Brandenburg Gate (Berlin) designed by Carl Gotthard Langhans (1732-1808), and the United States Capitol Building, designed by English-born Benjamin Henry Latrobe (1764-1820), and later by Stephen Hallet and Charles Bulfinch. See also the era of American Colonial Art (c.1670-1800).
Neoclassicist painters also looked to Classical Antiquity for inspiration, and emphasized the virtues of heroism, duty and gravitas. Leading exponents included the French political artist Jacques-Louis David (1748-1825), the German portrait and history painter Anton Raphael Mengs (1728-79), and the French master of the Academic art style, Jean Auguste Dominique Ingres (1780-1867). Neoclassical sculptors included Antonio Canova (1757-1822).
In contrast to the universal values espoused by Neoclassicism, Romantic artists expressed a more personal response to life, relying more on their senses and emotions than on reason and intellect. This idealism, like Neoclassicism, was encouraged by the French Revolution; thus some artists were affected by both styles. Nature was an important subject for Romantics, and the style is exemplified by the English School of Landscape Painting, the plein-air painting of John Constable (1776-1837) and Corot (1796-1875), along with members of the French Barbizon School and the American Hudson River School of landscape painting, as well as the more expressionistic JMW Turner (1775-1851). The greatest Romantic landscape painter, however, is arguably Caspar David Friedrich (1774-1840). Narrative or history painting was another important genre in Romanticism; leading exponents include Francisco Goya (1746-1828), Henry Fuseli (1741-1825), James Barry (1741-1806), Theodore Gericault (1791-1824) and Eugene Delacroix (1798-1863), as well as later Orientalists, Pre-Raphaelites and Symbolists.
As the 19th century progressed, growing awareness of the rights of man, plus the social impact of the Industrial Revolution, caused some artists to move away from idealistic or romantic subjects in favour of more mundane subjects, depicted in a more true-to-life style of naturalism. This new focus (to some extent anticipated by William Hogarth in the 18th century - see English Figurative Painting) was exemplified by the Realism style which emerged in France during the 1840s, before spreading across Europe. The new style attracted painters from all the genres - notably Gustave Courbet (1819-77) (genre painting), Jean Francois Millet (1814-75) (landscape, rural life), Honore Daumier (1808-79) (urban life) and Ilya Repin (1844-1930) (landscape and portraits).
History of Modern Art
French Impressionism, championed above all by Claude Monet (1840-1926), was a spontaneous, colour-sensitive style of pleinairism whose origins derived from Jean-Baptiste Camille Corot and the techniques of the Barbizon school - whose quest was to depict the momentary effects of natural light. It encompassed rural landscapes [Alfred Sisley (1839-1899)], cityscapes [Camille Pissarro (1830-1903)], genre scenes [Pierre-Auguste Renoir (1841-1919), Edgar Degas (1834-1917), Paul Cezanne (1839-1906), and Berthe Morisot (1841-95)] and both figurative paintings and portraits [Edouard Manet (1832-83), John Singer Sargent (1856-1925)]. Other artists associated with Impressionism include James McNeill Whistler (1834-1903) and Walter Sickert (1860-1942).
Impressionists sought to faithfully reproduce fleeting moments outdoors. Thus if an object appeared dark purple - due perhaps to failing or reflected light - then the artist painted it purple. Naturalist "Academic-Style" colour schemes, being devised in theory or at least in the studio, did not allow for this. As a result Impressionism offered a whole new pictorial language - one that paved the way for more revolutionary art movements like Cubism - and is often regarded by historians and critics as the first modern school of painting.
In any event, the style had a massive impact on Parisian and world art, and was the gateway to a series of colour-related movements, including Post-Impressionism, Neo-Impressionism, Pointillism, Divisionism, Fauvism, Intimism, the American Luminism or Tonalism, as well as American Impressionism, the Newlyn School and Camden Town Group, the French Les Nabis and the general Expressionist movement.
Essentially an umbrella term encompassing a number of developments and reactions to Impressionism, Post-Impressionism involved artists who employed Impressionist-type colour schemes, but were dissatisfied with the limitations imposed by merely reproducing nature. Neo-Impressionism, with its technique of Pointillism (an offshoot of Divisionism), was pioneered by Georges Seurat (1859-91) and Paul Signac (1863-1935), while major Post-Impressionists include Paul Gauguin, Vincent Van Gogh and Paul Cezanne. Inspired by Gauguin's synthetism and Bernard's cloisonnism, the Post-Impressionist group Les Nabis promoted a wider form of decorative art; another style, known as Intimisme, concerned itself with genre scenes of domestic, intimate interiors. Exemplified by the work of Pierre Bonnard (1867-1947) and Edouard Vuillard (1868-1940), it parallels other tranquil interiors, such as those by James McNeill Whistler and the Dutch Realist-influenced Peter Vilhelm Ilsted (1861-1933). Another very important movement - anti-impressionist rather than post-impressionist - was Symbolism (flourished 1885-1900), which went on to influence Fauvism, Expressionism and Surrealism.
For more about art politics in France, see: the Paris Salon.
The term "Fauves" (wild beasts) was first used by the art critic Louis Vauxcelles at the 1905 Salon d'Automne exhibition in Paris when describing the vividly coloured paintings of Henri Matisse (1869-1954), Andre Derain (1880-1954), and Maurice de Vlaminck (1876-1958). Other Fauvists included the later Cubist Georges Braque (1882-1963), Raoul Dufy (1877-1953), Albert Marquet (1875-1947) and Georges Rouault (1871-1958). Most followers of Fauvism moved on to Expressionism or other movements associated with the Ecole de Paris.
Sculptural traditions, although never independent of those of painting, are concerned primarily with space and volume, while issues of scale and function also act as distinguishing factors. Thus on the whole, sculpture was slower to reflect the new trends of modern art during the 19th century, leaving sculptors like Auguste Rodin (1840-1917) free to pursue a monumentalism derived essentially from Neoclassicism, if not Renaissance ideology. The public dimension of sculpture also lent itself to the celebration of Victorian values and historical figures, which were likewise executed in the grand manner of earlier times. Thus it wasn't until the emergence of artists like Constantin Brancusi (1876-1957) and Umberto Boccioni (1882-1916), at the turn of the century, that sculpture really began to change.
Expressionism is a general style of painting that aims to express a personal interpretation of a scene or object, rather than depict its true-life features. It is often characterized by energetic brushwork, impastoed paint, intense colours and bold lines. Early Expressionists included Vincent Van Gogh (1853-90), Edvard Munch (1863-1944) and Wassily Kandinsky (1866-1944). A number of German Expressionist schools sprang up during the first three decades of the 20th century. These included: Die Brucke (1905-11), a group founded in Dresden in 1905, which mixed elements of traditional German art with Post-Impressionist and Fauvist styles, exemplified in works by Ernst Ludwig Kirchner, Karl Schmidt-Rottluff, Erich Heckel, and Emil Nolde; Der Blaue Reiter (1911-14), a loose association of artists based in Munich, including Wassily Kandinsky, Franz Marc, August Macke, and Paul Klee; and Die Neue Sachlichkeit (1920s), a post-war satirical-realist group whose members included Otto Dix, George Grosz, Christian Schad and, to a lesser extent, Max Beckmann. Expressionism duly spread worldwide, spawning numerous derivations in both figurative painting (eg. Francis Bacon) and abstract art (eg. Mark Rothko). See also: History of Expressionist Painting (c.1880-1930).
Art Nouveau (Late 19th Century - Early 20th Century)
Art Nouveau (known as Jugendstil in Germany, Sezessionstil in the Vienna Secession, Stile Liberty in Italy, and Modernista in Spain) derived from William Morris and the Arts and Crafts Movement in Britain, and was also influenced by both the Celtic Revival arts movement and Japonisme. Its popularity stemmed from the 1900 Exposition Universelle in Paris, from where it spread across Europe and the United States. It was noted for its intricate flowing patterns of sinuous asymmetrical lines, based on plant-forms (dating back to the Celtic Hallstatt and La Tene cultures), as well as female silhouettes and forms. Art Nouveau had a major influence on poster art, design and illustration, interior design, metalwork, glassware, jewellery, as well as painting and sculpture. Leading exponents included: Alphonse Mucha (1860-1939), Aubrey Beardsley (1872-98), Eugene Grasset (1845-1917) and Albert Guillaume (1873-1942). See also: History of Poster Art.
The Bauhaus School (Germany, 1919-1933)
Derived from the two German words "bau" for building and "haus" for house, the Bauhaus school of art and design was founded in 1919 by the architect Walter Gropius. Enormously influential in both architecture and design - and in how they are taught - its instructors included such artists as Josef Albers, Lyonel Feininger, Paul Klee, Wassily Kandinsky, Oskar Schlemmer, Laszlo Moholy-Nagy, Anni Albers and Johannes Itten. Its mission was to bring art into contact with everyday life, thus the design of everyday objects was given the same importance as fine art. Important Bauhaus precepts included the virtue of simple, clean design, mass production, and the practical advantages of a well-designed home and workplace. The Bauhaus was eventually closed by the Nazis in 1933, whereupon several of its teachers emigrated to America: Laszlo Moholy-Nagy settled in Chicago, where he founded the New Bauhaus in 1937, while Albers went to Black Mountain College in North Carolina.
Art Deco (1920s, 1930s)
The design style known as Art Deco was showcased in 1925 at the International Exhibition of Modern Decorative and Industrial Arts in Paris and became a highly popular style of decorative art, design and architecture during the inter-war years (much employed by cinema and hotel architects). Its influence was also seen in the design of furniture, textile fabrics, pottery, jewellery, and glass. A reaction against Art Nouveau, the new idiom of Art Deco eliminated the latter's flowing curvilinear forms and replaced them with Cubist and Precisionist-inspired geometric shapes. Famous examples of Art Deco architecture include the Empire State Building and the New York Chrysler Building. Art Deco was also influenced by the simple architectural designs of The Bauhaus.
Invented by Pablo Picasso (1881-1973) and Georges Braque (1882-1963), and considered to be "the" revolutionary movement of modern art, Cubism was a more intellectual style of painting that explored the full potential of the two-dimensional picture plane by offering different views of the same object, typically arranged in a series of overlapping fragments: rather like a photographer might take several photos of an object from different angles, before cutting them up with scissors and rearranging them in haphazard fashion on a flat surface. This "analytical Cubism" (which originated with Picasso's "Les Demoiselles d'Avignon") quickly gave way to "synthetic Cubism", when artists began to include "found objects" in their canvases, such as collages made from newspaper cuttings. Cubist painters included: Juan Gris (1887-1927), Fernand Leger (1881-1955), Robert Delaunay (1885-1941), Albert Gleizes (1881-1953), Roger de La Fresnaye (1885-1925), Jean Metzinger (1883-1956), and Francis Picabia (1879-1953), the avant-garde artist Marcel Duchamp (1887-1968), and the sculptors Jacques Lipchitz (1891-1973) and Alexander Archipenko (1887-1964). (See also Russian art.) Short-lived but highly influential, Cubism instigated a whole new style of abstract art and had a significant impact on the development of later styles such as: Orphism (1910-13), Collage (1912 onwards), Purism (1920s), Precisionism (1920s, 1930s), Futurism (1909-1914), Rayonism (c.1912-14), Suprematism (1913-1918), Constructivism (c.1919-32), Vorticism (c.1914-15), the De Stijl (1917-31) design movement, and the austere geometrical style of concrete art known as Neo-Plasticism.
Largely rooted in the anti-art traditions of the Dada movement (1916-24), as well as the psychoanalytical ideas of Sigmund Freud and Carl Jung, Surrealism was the most influential art style of the inter-war years. According to its chief theorist, Andre Breton, it sought to combine the unconscious with the conscious, in order to create a new "super-reality" - a "surrealisme". The movement spanned a huge range of styles, from abstraction to true-life realism, typically punctuated with "unreal" imagery. Important Surrealists included Salvador Dali (1904-89), Max Ernst (1891-1976), Rene Magritte (1898-1967), Andre Masson (1896-1987), Yves Tanguy (1900-55), Joan Miro (1893-1983), Giorgio de Chirico (1888-1978), Jean Arp (1886-1966), and Man Ray (1890-1976). The movement had a major impact across Europe during the 1930s, was the major precursor to Conceptualism, and continues to find adherents in fine art, literature and cinematography.
American painting during the period 1900-45 was realist in style and became increasingly focused on strictly American imagery. This was the result of the reaction against the Armory Show (1913) and European hypermodernism, as well as a response to changing social conditions across the country. Later it became a patriotic response to the Great Depression of the 1930s. See also the huge advances in Skyscraper architecture of the early 20th century. For more, see: American architecture (1600-present). Specific painting movements included the Ashcan School (c.1900-1915); Precisionism (1920s) which celebrated the new American industrial landscape; the more socially aware urban style of Social Realism (1930s); American Scene Painting (c.1925-45) which embraced the work of Edward Hopper and Charles Burchfield, as well as midwestern Regionalism (1930s) championed by Grant Wood, Thomas Hart Benton and John Steuart Curry.
The first international modern art movement to come out of America (it is sometimes referred to as The New York School - see also American art), it was a predominantly abstract style of painting which followed an expressionist, colour-driven direction, rather than a Cubist idiom, although it also includes a number of other styles, making it more of a general movement. Four variants stand out in Abstract Expressionism: first, the "automatic" style of "action painting" invented by Jackson Pollock (1912-56) and his wife Lee Krasner (1908-1984); second, the monumental planes of colour created by Mark Rothko (1903-70), Barnett Newman (1905-70) and Clyfford Still (1904-80) - a style known as Colour Field Painting; third, the gestural figurative works of Willem De Kooning (1904-1997); and fourth, the geometric "Homage to the Square" abstracts of Josef Albers (1888-1976).
Highly influential, Abstract Expressionist painting continued to influence later artists for over two decades. It was introduced to Paris during the 1950s by Jean-Paul Riopelle (1923-2002), assisted by Michel Tapie's book, Un Art Autre (1952). At the same time, a number of new sub-movements emerged in America, such as Hard-edge painting, exemplified by Frank Stella. In the late 1950s/early 1960s, a purely abstract form of Colour Field painting appeared in works by Helen Frankenthaler and others, while in 1964, the famous art critic Clement Greenberg helped to introduce a further stylistic development known as "Post-Painterly Abstraction". Abstract Expressionism went on to influence a variety of different schools, including Op Art, Fluxus, Pop Art, Minimalism, Neo-Expressionism, and others.
The bridge between modern art and postmodernism, Pop art employed popular imagery and modern forms of graphic art, to create a lively, high-impact idiom, which could be understood and appreciated by Joe Public. It appeared simultaneously in America and Britain, during the late 1950s, while a European form (Nouveau Realisme) emerged in 1960. Pioneered in America by Robert Rauschenberg (1925-2008) and Jasper Johns (b.1930), Pop had close links with early 20th century movements like Surrealism. It was a clear reaction against the closed intellectualism of Abstract Expressionism, from which Pop artists sought to distance themselves by adopting simple, easily recognized imagery (from TV, cartoons, comic strips and the like), as well as modern technology like screen printing. Famous US Pop artists include: Jim Dine (b.1935), Robert Indiana (b.1928), Alex Katz (b.1927), Roy Lichtenstein (1923-97), Claes Oldenburg (b.1929), and Andy Warhol (1928-87). Important Pop artists in Britain were: Peter Blake (b.1932), Patrick Caulfield (1936-2006), Richard Hamilton (b.1922), David Hockney (b.1937), Allen Jones (b.1937), RB Kitaj (b.1932), and Eduardo Paolozzi (1924-2005).
From the early works of Brancusi, 20th century sculpture broadened immeasurably to encompass new forms, styles and materials. Major innovations included the "sculptured walls" of Louise Nevelson (1899-1988), the existential forms of Giacometti (1901-66), the biomorphic abstraction of both Barbara Hepworth (1903-75) and Henry Moore (1898-1986), and the spiders of Louise Bourgeois (1911-2010). Other creative angles were pursued by Salvador Dali (1904-89) in his surrealist "Mae West Lips Sofa" and "Lobster Telephone" - by Meret Oppenheim (1913-85) in her "Furry Breakfast", by FE McWilliam (1909-1992) in his "Eyes, Nose and Cheek", by Sol LeWitt (b.1928) in his skeletal box-like constructions, and by Pop-artists like Claes Oldenburg (b.1929) and Jasper Johns (b.1930), as well as by the Italians Jonathan De Pas (1932-91), Donato D'Urbino (b.1935) and Paolo Lomazzi (b.1936) in their unique "Joe Sofa".
For more about the history of painting, sculpture, architecture and crafts during this period, see: Modern Art Movements.
History of Contemporary Art
The word "Postmodernist" is often used to describe contemporary art since about 1970. In simple terms, postmodernist art emphasizes style over substance (eg. not 'what' but 'how'; not 'art for art's sake', but 'style for stye's sake'), and stresses the importance of how the artist comunicates with his/her audience. This is exemplified by movements such as Conceptual art, where the idea being communicated is seen as more important than the artwork itself, which merely acts as the vehicle for the message. In addition, in order to increase the "impact" of visual art on spectators, postmodernists have turned to new art forms such as Assemblage, Installation, Video, Performance, Happenings and Graffiti - all of which are associated in some way or other with Conceptualism- and this idea of impact continues to inspire.
Painters since the 1970s have experimented with numerous styles across the spectrum from pure abstraction to figuration. These include: Minimalism, a purist form of abstraction which did little to promote painting as an attractive medium; Neo-Expressionism, which encompassed groups like the "Ugly Realists", the "Neue Wilden", "Figuration Libre", "Transavanguardia", the "New Image Painters" and the so-called "Bad Painters", signalled a return to depicting recognizable objects, like the human body (albeit often in a quasi-abstract style), using rough brushwork, vivid colours and colour harmonies; and the wholly figurative styles adopted by groups such as "New Subjectivity" and the "London School". At the other extreme from Minimalism is the ultra-representational art form of photorealism (superrealism, hyperrealism). Conspicuous among this rather bewildering range of activity are figure painters like Francis Bacon, the great Lucien Freud (b.1922), the innovative Fernando Botero (b.1932), the precise David Hockney (b.1937), the photorealists Chuck Close (b.1940) and Richard Estes (b.1936), and the contemporary Jenny Saville (b.1970). See also: Contemporary British Painting (1960-2000).
Sculpture since 1970 has appeared in a variety of guises, including: the large scale metal works of Mark Di Suvero (b.1933), the minimalist sculptures of Walter de Maria (b.1935), the monumental public forms of Richard Serra (b.1939), the hyper-realist nudes of John De Andrea (b.1941), the environmental structures of Anthony Gormley (b.1950), the site-specific figures of Rowan Gillespie (b.1953), the stainless steel works of Anish Kapoor (b.1954), the high-impact Neo-Pop works of Jeff Koons (b.1955), and the extraordinary 21st century works by Sudobh Gupta (b.1964) and Damian Ortega (b.1967). In addition, arresting public sculpture includes the "Chicago Picasso" - a series of metal figures produced for the Chicago Civic Centre and the architectural "Spire of Dublin" (the 'spike'), created by Ian Ritchie (b.1947), among many others.
The pluralistic "anything goes" view of contemporary art (which critics might characterize as exemplifying the fable of the "Emperor's New Clothes"), is aptly illustrated in the works of Damien Hirst, a leading member of the Young British Artists school. Renowned for "The Physical Impossibility of Death in the Mind of Someone Living", a dead Tiger shark pickled in formaldehyde, and lately for his diamond encrusted skull "For the Love of God", Hirst has managed to stimulate audiences and horrify critics around the world. And while he is unlikely ever to inherit the mantle of Michelangelo, his achievement of sales worth $100 million in a single Sotheby's auction (2008) is positively eye-popping.
On a more sobering note, in March 2009 the prestigious Georges Pompidou Centre of Contemporary Art in Paris staged an exhibition entitled "The Specialisation of Sensibility in the Raw Material State into Stabilised Pictorial Sensibility". This avant-garde event consisted of 9 completely empty rooms - in effect, a reincarnation of John Cage's completely silent piece of "musical" conceptual art entitled "4.33". If one of the great contemporary art venues like the Pompidou Centre regards nine completely empty spaces as a worthy art event, we are all in deep trouble.
For more about the history of postmodernist painting, sculpture, and avant-garde art forms, see: Contemporary Art Movements.
One might say that 19th century architecture aimed to beautify the new wave of civic structures, like railway stations, museums, government buildings and other public utilities. It did this by taking ideas from Neo-Classicism, Neo-Gothic, French Second Empire and exoticism, as well as the new forms and materials of so-called "industrial architecture", as exemplified in factories along with occasional landmark structures like the Eiffel Tower. In comparison, 20th century architecture has been characterized by vertical development (skyscrapers), flagship buildings, and post-war reconstruction. More than any other era, its design has been dominated by the invention of new materials and building methods. It began with the exploitation of late 19th century innovations developed by the Chicago School of architecture, such as the structural steel frame, in a style known as Early Modernism. In America, architects started incorporating Art Nouveau and Art Deco design styles into their work, while in Germany and Russia totalitarian architecture pursued a separate agenda during the 1930s. Famous architects of the first part of the century included: Louis Sullivan (1856-1924), Frank Lloyd Wright (1867-1959), Victor Horta (1861-1947), Antoni Gaudi (1852-1926), Peter Behrens (1868-1940), Walter Gropius (1883-1969) and Le Corbusier (1887-1965). After 1945, architects turned away from functionalism and began creating new forms facilitated by reinforced concrete, steel and glass. Thus Late Modernism gave way to Brutalism, Corporate Modernism and High Tech architecture, culminating in structures like the Georges Pompidou Centre in Paris, and the iconic Sydney Opera House - one of the first buildings to use industrial strength Araldite to glue together the precast structural elements. Since 1970, postmodernist architecture has taken several different approaches. Some designers have stripped buildings of all ornamentation to create a Minimalist style; others have used ideas of Deconstructivism to move away from traditional rectilinear shapes; while yet others have employed digital modeling software to create totally new organic shapes in a process called Blobitecture. Famous post-war architects include: Miers van der Rohe (1886-1969), Louis Kahn (1901-74), Jorn Utzon; Eero Saarinen (1910-61), Kenzo Tange (1913-2005), IM Pei (b.1917), Norman Foster (b.1935), Richard Rogers, James Stirling (1926-92), Aldo Rossi (1931-97), Frank O. Gehry (b.1929), Rem Koolhaas (b.1944), and Daniel Libeskind (b.1946). Famous architectural groups or firms, include: Skidmore, Owings & Merrill (est 1936); Venturi & Scott-Brown (est 1925); the New York Five - Peter Eisenman, Michael Graves, Charles Gwathmey, John Hejduk, Richard Meier; and Herzog & de Meuron (est 1950).
For our main index, see: Art Encyclopedia.
ENCYCLOPEDIA OF ART | <urn:uuid:7dbb42f1-28cf-4bd3-b19e-1cbdf4a7ab2f> | CC-MAIN-2013-20 | http://www.visual-arts-cork.com/history-of-art.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.941338 | 14,008 | 3.59375 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
Interface with computers using gestures of the human body, typically hand movements. In gesture recognition technology, a camera reads the movements of the human body and communicates the data to a computer that uses the gestures as input to control devices or applications. For example, a person clapping his hands together in front of a camera can produce the sound of cymbals being crashed together when the gesture is fed through a computer.
One way gesture recognition is being used is to help the physically impaired interact with computers, for example by interpreting sign language. The technology also has the potential to change the way users interact with computers by eliminating input devices such as joysticks, mice and keyboards and allowing the unencumbered body to give signals to the computer through gestures such as finger pointing.
Unlike haptic interfaces, gesture recognition does not require the user to wear any special equipment or attach any devices to the body. The gestures of the body are read by a camera instead of sensors attached to a device such as a data glove.
In addition to hand and body movement, gesture recognition technology can also be used to read facial expressions, speech (i.e., lip reading), and eye movements.
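As a toy sketch of this camera-driven input loop (an illustration only, assuming Python with the OpenCV package cv2 and a webcam at device index 0; a real recognizer would classify hand shapes rather than raw motion), the code below fires a callback whenever enough pixels change between frames:

```python
import cv2  # OpenCV (assumed installed); all calls below are standard OpenCV APIs


def watch_for_motion(on_gesture, threshold_pixels=5000):
    """Call on_gesture() whenever enough pixels change between frames."""
    cap = cv2.VideoCapture(0)          # default webcam, assumed present
    ok, prev = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                 # pixel-wise change from last frame
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > threshold_pixels:  # "a gesture happened"
            on_gesture()
        prev = gray
    cap.release()


# Crude stand-in for the hand-clap -> cymbal-crash idea:
# watch_for_motion(lambda: print("crash!"))
```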
March 01, 2012
Environmental Protection: New ocean radar treaty of 153 nations covers spilled oil, debris, tsunamis, bodies.
Here's a positive upshot of America's months-long Gulf of Mexico spill in 2010. At Environmental Protection, do see "153 Countries Sign Treaty on Ocean Radar Improvements". The International Telecommunication Union's World Radiocommunication Conference 2012 (WRC-12) took place from Jan. 23 to Feb. 17 in Geneva, Switzerland, and
concluded with agreement on a number of items, including improved ocean radar technology. This will yield better tracking of tsunamis, oil spills, ocean debris, and people lost at sea, according to the National Science Foundation (NSF).
Recent destructive tsunamis and the Gulf of Mexico oil spill have increased interest in ocean radars, which have operated informally and would be quickly shut down if they caused interference with other radio systems, according to NSF.
But action taken at the meeting provides specific radio frequency bands for ocean radars – small systems typically installed on beaches and using radio signals to map ocean currents to distances as far as 100 miles.
Posted by JD Hull at March 1, 2012 04:38 PM | <urn:uuid:09022258-2f63-4466-8c92-0fb3210f40f0> | CC-MAIN-2013-20 | http://www.whataboutclients.com/archives/2012/03/environmental_p_2.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920189 | 254 | 2.78125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
There's a Bob Dylan song named "Dignity," written in 1989 and released on his 1995 album "MTV Unplugged," which is a lyrical search for honor, worth, and self-respect: "Lookin' into the lost forgotten years… for dignity."
The question has been addressed often: why has respect for the office of president of the United States deteriorated? Several reasons stand out. Media coverage has become bold and intrusive, in the faces of political figures, since advanced technology has become nearly limitless, satiating the public’s hunger for scandal.
During Teddy Roosevelt’s presidency, in the early 1900s, came the expansion of White House power, a closer, more personal look by U.S. citizens at an activist “celebrity president,” the center of national attention. The human nature of an individual who travels and campaigns extensively soon seeps through their very public persona, revealing their faults, weaknesses, and indiscretions.
What was the fate of the most hated U.S. president, Richard M. Nixon? After a five-year, post-Watergate exile, he emerged from seclusion as a self-named “elder statesman.” In July of 1979 he and wife Pat had attempted to purchase a nine-room penthouse in New York City for $1 million. The other 34 residents in an uproar, he was turned down flat.
“Nixon was in constant danger from a multitude of would-be assassins who wanted the honor of taking him down,” Steven Gaines writes in his true chronicle “The Sky’s the Limit,” from which I found this account. Sacrificing a $92,500 deposit, Nixon was turned away from yet another N.Y. City condominium by would-be neighbors. The Nixons spent several unhappy years in a New York townhouse before Pat’s death. Nixon passed away, in exile from American citizens, in Yorba Linda, California, in April 1994.
Early presidents rarely spoke directly to the public. (President Clinton delivered 600 speeches in his first year in office.) From “Reason,” a libertarian journal, author Gene Healy states, “The modern vision of the presidency couldn’t be further from the view of the chief executive’s role held by the framers of the Constitution. In an age long before distrust of power was condemned as cynicism, the founding fathers designed a presidency of modest authority and limited responsibilities.”
The expansion of White House power was brought about by crisis situations, namely two World Wars and the Great Depression, when people panicked, consigning social power to one person. “By the end of his twelve-year reign, FDR had firmly established the president as a national protector and nurturer.”
In the 21st century, what or who has bestowed George W. Bush with so much power? Healy answers that question handily. In essence, members of the president’s legal team created an alternative version of the national charter, “in which the president has unlimited power to launch war, wiretap without judicial scrutiny,” and seize and hold American citizens on American soil for the duration of the war on terror “without having to answer to a judge.” Ouch!
Although few in the media noted it, the Bush administration was also granted enhanced authority for domestic use of the military. Healy notes, “No president should have the powers President Bush has sought and seized in the past seven years.”
Power and leadership are not one and the same. Was it flimsy leadership, for instance, that left 42% of U.S. adults below age 65 underinsured or uninsured for health care coverage in 2007? How about the average mortgage debt for a typical U.S. household now at $84,911; home equity loans of $10,062; and credit card debts averaging $8,565? (Figures from AARP Bulletin.)
Gene Healy’s article “Supreme Warlord of the Earth,” in October’s Utne magazine, is very cogent. “The Constitution’s architects never conceived of the president as the person in charge of national destiny,” he concludes.
How did we, citizens of the United States of America, get from humble grassroots dignity, devotion to God and country, and liberty and justice for all to extravagant and ruthless political campaigns and distrust in our leaders?
How much more debt can a nation endure before the walls come tumbling down?
(I wrote this before the Stock Market debacle.) That’s a circumstance beyond my capacity to comment on.
Janet Burns lives in Lewiston. She can be reached at [email protected]. | <urn:uuid:3c93a0a7-82d8-4c28-8a6b-dd24cd09de6b> | CC-MAIN-2013-20 | http://www.winonapost.com/stock/functions/VDG_Pub/detail.php?choice=27290&home_page=&archives=1 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962216 | 1,018 | 2.625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Chapter 5 takes place an estimated 9 years after Nebuchadnezzar’s death and about 36 years after the previous chapter. Belshazzar was Nebuchadnezzar’s grandson who took control of the kingdom as his father was on extended leave fighting the Persians. It appears that Daniel had retired from his high place in government. He would have been pretty old at this point, though he also could have lost his position when Nebuchadnezzar died.
The walls of Babylon were 87 feet thick and 100 feet high
It was fairly common for the kings to dine with such large numbers of people, as you can see in Esther 1. In this case though, the invading armies are right outside the city walls. This would seem to be incredible arrogance similar to his grandfather's, but Herodotus tells us that Babylon had two walls surrounding the city with a moat in between. The walls were 87 feet thick and 100 feet high, so conquering Babylon was not something that happened easily. At the end of the chapter we find out that this would be the night it was captured. Herodotus corroborates the Bible and mentions a festival was going on the night the city was conquered.
Regardless of the city's security, it was a bad decision to get drunk in front of your lords with an invading army outside. Even worse to taunt a god by desecrating sacred items collected from a temple. Maybe he was doing this to instill a sense of pride in his lords by reminding them of past victories, though Daniel seems to be very specific about his lack of sobriety.
The handwriting on the wall has always stood out to me as a bizarre miracle by God (bizarre by miracle standards that is). This seems like something you would see in a horror movie. The best interpretation I found of the Aramaic writing said it literally translated to “numbered, numbered, weighed, divided.”
Belshazzar was not given a message of repentance but rather a proclamation of impending judgement. It is evident to us that although the king is just informed of his doom, God had been moving the Medes and Persians in place to execute his plan for some time.
Extra-biblical writings tell us that the Persians blocked the flow of the Euphrates and walked on the riverbed to an unguarded portion of the wall where they climbed up without opposition. Since so many were gathered at the festival, the Babylonians were defeated with relative ease.
Numbered, numbered, weighed, divided
Darius the Mede is not found anywhere else in extra-biblical writings and is a serious point of contention for Bible critics. Cyrus was definitely the king of Persia, so Darius could either be a Babylonian nickname or title similar to Caesar or Pharoah. It also could be referring to the local ruler that Cyrus put in charge of that area. There is no evidence to identify Darius the Mede, but there is also no hard evidence contradicting it. | <urn:uuid:ef4017af-49a6-44b3-89ec-059a1f6c3964> | CC-MAIN-2013-20 | http://churchhopping.com/daniel-5 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.984932 | 614 | 3.046875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
First tropical depression of the season may form from 92L
An unusually large and well-developed African tropical wave for so early in the season has developed midway between the coast of Africa and South America. The storm was designated Invest 92L by the National Hurricane Center yesterday, and has a good chance of becoming the first tropical depression of the Atlantic hurricane season. Surface winds measured by the 8:23am EDT pass of the European ASCAT satellite revealed that 92L already has a closed surface circulation, though the circulation is large and elongated. Top winds seen by ASCAT were about 25 mph. METEOSAT visible satellite loops show a large and impressive circulation that is steadily consolidating, with spiral bands building inward towards center, and upper-level outflow beginning to be established to the northwest and north.
Figure 1. Morning satellite image of Invest 92L.
Climatology argues against development of 92L, since only one named storm has ever formed between Africa and the Lesser Antilles Islands in the month of June--Tropical Storm Ana of 1979 (Figure 2). However, sea surface temperatures (SSTs) underneath 92L are an extremely high 28 - 30°C, which is warmer than the temperatures reached during the peak of hurricane season last year, in August - September. In fact, with summer not even here, and three more months of heating remaining until we reach peak SSTs in the Atlantic, ocean temperatures across the entire Caribbean and waters between Africa and the Lesser Antilles are about the same as they were during the peak week for water temperatures in 2009 (mid-September.) While 92L will cross over a 1°C cooler patch of water on Monday, the storm will encounter very warm SSTs of 28-29°C again by Tuesday.
The disturbance doesn't have to worry about dry air--Total Precipitable Water (TPW) loops show a very moist plume of air accompanies 92L, and water vapor satellite loops show that the center of 92L is at least 300 - 400 miles from any substantial areas of dry air. The 60-day cycle of enhanced thunderstorm activity called the Madden-Julian Oscillation is currently favoring upward motion over the eastern tropical Atlantic, and this enhanced upward motion helps create stronger updrafts and higher chances of tropical cyclone development.
Figure 2. Tropical Storm Ana of 1979 was the only June named storm on record to form between Africa and the Lesser Antilles Islands.
The forecast for 92L
A major issue for 92L, as it is for most June disturbances, is wind shear. The subtropical jet stream has a branch flowing through the Caribbean and tropical Atlantic north of 10°N that is bringing 20 - 40 knots of wind shear to the region. Our disturbance is currently located at 7°N, well south of this band of high shear, and is only experiencing 5 - 15 knots of shear. This moderate amount of shear should allow for some steady development of 92L over the next few days as it tracks west-northwest at 10 - 15 mph. The National Hurricane Center is giving 92L a medium chance (30%) of developing into a tropical depression by Tuesday morning. Based on visible satellite imagery over the past few hours, I believe this forecast is not aggressive enough, and that 92L has a 50% chance of developing into a tropical depression by Tuesday morning. Another factor holding 92L back is its proximity to the Equator: the system is organizing at about 7°N latitude, which is so close to the Equator that it cannot leverage the Earth's spin much to help it get spinning. It is quite unusual for a tropical depression to form south of 8°N latitude.
The farther south 92L stays, the better chance it has at survival. With the system's steady west-northwest movement this week, 92L should begin encountering hostile wind shear in excess of 30 knots by Thursday, which should be able to greatly weaken or entirely destroy the storm before it gets to the Lesser Antilles Islands. However, residents of the islands--particularly the northern Lesser Antilles--should follow the progress of 92L closely, and anticipate heavy rains and high winds moving through the islands by Saturday or Sunday next weekend. The GFDL and HWRF models are predicting that 92L will develop into a moderate strength tropical storm that will then be weakened or destroyed by the end of the week, before it reaches the islands. This looks like a reasonable forecast.
Figure 3. The departure of sea surface temperature (SST) from average for June 10, 2010. Image credit: NOAA/NESDIS.
Oil spill wind forecast
There is little change to the oil spill wind forecast for the coming two weeks. Light winds of 5 - 10 knots mostly out of the south or southeast will blow in the northern Gulf of Mexico all week, according to the latest marine forecast from NOAA. These winds will keep oil near the coast of Louisiana, Alabama, Mississippi, and the extreme western Florida Panhandle, according to the latest trajectory forecasts from NOAA and the State of Louisiana. The long range 8 - 16 day forecast from the GFS model indicates a typical summertime light wind regime, with winds mostly blowing out of the south or southeast. This wind regime will likely keep oil close to the coastal areas that have already seen oil impacts over the past two weeks. | <urn:uuid:6de7ce61-5b96-4eb5-aa7f-7fe832fa5540> | CC-MAIN-2013-20 | http://dutch.wunderground.com/blog/JeffMasters/comment.html?entrynum=1505&page=63&theprefset=BLOGCOMMENTS&theprefvalue=0 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.956752 | 1,120 | 2.515625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
In computer science and logic, a dependent type is a type that depends on a value. Dependent types play a central role in intuitionistic type theory and in the design of functional programming languages like ATS, Agda and Epigram.
An example is the type of n-tuples of real numbers. This is a dependent type because the type depends on the value n.
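As a concrete sketch, here is what such a type looks like in Lean 4 (the choice of language is ours purely for illustration; the article favors no particular system, and the name `Vect` is a hypothetical stand-in for the usual length-indexed vector type):

```lean
-- n-tuples over α: the type Vect α n depends on the value n.
inductive Vect (α : Type) : Nat → Type where
  | nil  : Vect α 0
  | cons : {n : Nat} → α → Vect α n → Vect α (n + 1)

-- head accepts only non-empty vectors: Vect α (n + 1) rules out
-- nil at type-checking time, so no runtime emptiness check is needed.
def head {α : Type} {n : Nat} : Vect α (n + 1) → α
  | .cons x _ => x
```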
Deciding equality of dependent types in a program may require computations. If arbitrary values are allowed in dependent types, then deciding type equality may involve deciding whether two arbitrary programs produce the same result; hence type checking may become undecidable.
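Continuing the hypothetical Lean sketch above, the checker must actually compute before it can see that two dependent types agree; here 2 + 3 has to be reduced to 5:

```lean
-- Accepted only because the type checker evaluates 2 + 3 to 5,
-- making the two vector types definitionally equal.
example : Vect Float (2 + 3) = Vect Float 5 := rfl
```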
The Curry–Howard correspondence implies that types can be constructed that express arbitrarily complex mathematical properties. If the user can supply a constructive proof that a type is inhabited (i.e., that a value of that type exists) then a compiler can check the proof and convert it into executable computer code that computes the value by carrying out the construction. The proof checking feature makes dependently typed languages closely related to proof assistants. The code-generation aspect provides a powerful approach to formal program verification and proof-carrying code, since the code is derived directly from a mechanically verified mathematical proof.
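A toy illustration of that reading, again in Lean: the proposition below is a type, and the bracketed pair is a small program inhabiting it, namely the witness 2 together with a proof obligation the checker discharges by computation.

```lean
-- "Some n satisfies n + n = 4" is a type; ⟨2, rfl⟩ inhabits it:
-- 2 is the witness, and rfl is checked by computing 2 + 2.
example : ∃ n, n + n = 4 := ⟨2, rfl⟩
```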
Systems of the lambda cube
Henk Barendregt developed the lambda cube as a means of classifying type systems along three axes. The eight corners of the resulting cube-shaped diagram each correspond to a type system, with simply typed lambda calculus in the least expressive corner, and calculus of constructions in the most expressive. The three axes of the cube correspond to three different augmentations of the simply typed lambda calculus: the addition of dependent types, the addition of polymorphism, and the addition of higher kinded type constructors (functions from types to types, for example). The lambda cube is generalized further by pure type systems.
First order dependent type theory
The system λΠ of pure first-order dependent types, corresponding to the logical framework LF, is obtained by generalising the function space type of the simply typed lambda calculus to the dependent product type.
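In standard natural-deduction notation (a textbook presentation, not notation taken from this article), the formation rule for the dependent product can be written:

```latex
\frac{\Gamma \vdash A : \ast \qquad \Gamma,\; x{:}A \vdash B : \ast}
     {\Gamma \vdash (\Pi x{:}A.\; B) : \ast}
```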
Writing Vec(ℝ, n) for the type of n-tuples of real numbers, as above, Πn:ℕ. Vec(ℝ, n) stands for the type of functions which, given a natural number n, return a tuple of real numbers of size n. The usual function space arises as a special case when the range type does not actually depend on the input: e.g., Πn:ℕ. ℝ is the type of functions from natural numbers to the real numbers, written as ℕ → ℝ in the simply typed lambda calculus.
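A dependent function of exactly this shape, sketched with the hypothetical Vect type from above (Float standing in for the reals): the type of the result mentions the argument n.

```lean
-- An inhabitant of Πn:ℕ. Vec(ℝ, n): given n, return the n-tuple
-- (0, ..., 0); the type of the result varies with the input value.
def zeros : (n : Nat) → Vect Float n
  | 0     => .nil
  | n + 1 => .cons 0.0 (zeros n)
```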
Second order dependent type theory
The system λΠ2 of second-order dependent types is obtained from λΠ by allowing quantification over type constructors. In this theory the dependent product operator subsumes both the → operator of the simply typed lambda calculus and the ∀ binder of System F.
Higher order dependently typed polymorphic lambda calculus
The higher-order system λΠω extends λΠ2 to all four forms of abstraction from the lambda cube: functions from terms to terms, types to types, terms to types and types to terms. The system corresponds to the Calculus of Constructions, whose derivative, the calculus of inductive constructions, is the underlying system of the Coq proof assistant.
Comparison of languages with dependent types
|Language||Actively developed||Paradigm[fn 1]||Tactics||Proof terms||Termination checking||Types can depend on[fn 2]||Universes||Proof irrelevance||Program extraction||Extraction erases irrelevant terms|
|ATS||Yes||Functional / imperative||No||Yes||Yes||?||?||?||Yes||?|
|Cayenne||No||Purely functional||No||Yes||No||Any term||No||No||?||?|
|Coq||Yes||Purely functional||Yes||Yes||Yes||Any term||Yes[fn 5]||No||Haskell, Scheme and OCaml||Yes|
|Dependent ML||No[fn 6]||?||?||Yes||?||Natural numbers||?||?||?||?|
|Epigram 2||Yes||Purely functional||No||Coming soon||By construction||Any term||Coming soon||Coming soon||Coming soon||Coming soon|
|Guru||No||Purely functional||hypjoin||Yes||Yes||Any term||No||Yes||Carraway||Yes|
|Idris||Yes||Purely functional||Yes||Yes||Yes (optional)||Any term||No||No||Yes||Yes, aggressively|
|Matita||Yes||Purely functional||Yes||Yes||Yes||Any term||Yes||?||OCaml||?|
|NuPRL||No||Purely functional||Yes||Yes||Yes||Any term||Yes||?||Yes||?|
|Twelf||Yes||Logic programming||?||Yes||Yes (optional)||Any (LF) term||No||No||?||?|
- This refers to the core language, not to any tactic or code generation sublanguage.
- Subject to semantic constraints, such as universe constraints
- Ring solver
- Optional universes, optional universe polymorphism, and optional explicitly specified universes
- Universes, automatically inferred universe constraints (not the same as Agda's universe polymorphism) and optional explicit printing of universe constraints
- Has been superseded by ATS
- Anton Setzer (2007). "Object-oriented programming in dependent type theory". In Henrik Nilsson. Trends in Functional Programming, vol. 7. Intellect. pp. 91–108.
- "Agda download page".
- "Agda Ring Solver".
- "Announce: Agda 2.2.8".
- "ATS Changelog".
- "email from ATS inventor Hongwei Xi".
- "Coq CHANGES in Subversion repository".
- "Epigram homepage".
- "Guru SVN".
- Aaron Stump (6 April 2009). "Verified Programming in Guru". Retrieved 28 September 2010.
- Adam Petcher (1 April 2008). "Deciding Joinability Modulo Ground Equations in Operational Type Theory". Retrieved 14 October 2010.
- "Idris git repository".
- "Idris, a language with dependent types - extended abstract".
- Edwin Brady. "How does Idris compare to other dependently-typed programming languages?".
- "Matita SVN".
- "Xanadu home page".
Further reading
- Martin-Löf, Per (1984). Intuitionistic Type Theory. Bibliopolis.
- Nordström, Bengt; Petersson, Kent; Smith, Jan M. (1990). Programming in Martin-Löf's Type Theory: An Introduction. Oxford University Press.
- Barendregt, Henk (1992). "Lambda calculi with types". In S. Abramsky, D. Gabbay and T. Maibaum. Handbook of Logic in Computer Science. Oxford Science Publications.
- McBride, Conor; McKinna, James (January 2004). "The view from the left". Journal of Functional Programming 14 (1): 69–111.
- Altenkirch, Thorsten; McBride, Conor; McKinna, James (April 2005). Why dependent types matter.
- Norell, Ulf. Towards a practical programming language based on dependent type theory. PhD thesis, Department of Computer Science and Engineering, Chalmers University of Technology, SE-412 96 Göteborg, Sweden, September 2007.
- Oury, Nicolas and Swierstra, Wouter (2008). "The Power of Pi". Accepted for presentation at ICFP, 2008.
- Norell, Ulf (2008). Dependently Typed Programming in Agda. | <urn:uuid:49bd6b86-3a40-480a-8870-ee3bed0d0e0b> | CC-MAIN-2013-20 | http://en.wikipedia.org/wiki/Dependent_types | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.707123 | 1,635 | 3.09375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Interferon type I
Human type I interferons comprise a vast and growing group of IFN proteins.
Mammalian types
The IFN-α proteins are produced by leukocytes. They are mainly involved in innate immune response against viral infection. They come in 13 subtypes that are called IFNA1, IFNA2, IFNA4, IFNA5, IFNA6, IFNA7, IFNA8, IFNA10, IFNA13, IFNA14, IFNA16, IFNA17, IFNA21. The genes for these IFN-α molecules are found together in a cluster on chromosome 9.
IFN-α is also made synthetically as a medication; marketed types include interferon alfa-2a and interferon alfa-2b.
The IFN-β proteins are produced in large quantities by fibroblasts. They have antiviral activity which is mainly involved in innate immune response. Two types of IFN-β have been described, IFN-β1 (IFNB1) and IFN-β3 (IFNB3) (a gene designated IFN-β2 is actually IL-6). IFN-β1 is used as a treatment for multiple sclerosis as it reduces the relapse rate.
IFN-ε, -κ, -τ, -δ, and -ζ
IFN-ε, -κ, -τ, and -ζ appear, at this time, to come in a single isoform in humans, IFNK. Only ruminants encode IFN-τ, a variant of IFN-ω. So far, IFN-ζ is found only in mice, while a structural homolog, IFN-δ, is found in a diverse array of non-primate and non-rodent placental mammals. Most but not all placental mammals encode functional IFN-ε and IFN-κ genes.
IFN-ω, although having only one functional form described to date (IFNW1), has several pseudogenes: IFNWP2, IFNWP4, IFNWP5, IFNWP9, IFNWP15, IFNWP18, and IFNWP19 in humans. Many non-primate placental mammals express multiple IFN-ω subtypes.
This subtype of Type I IFN was recently described as a pseudogene in humans, but potentially functional in the domestic cat genome. In all other genomes of non-feline placental mammals, IFN-ν is a pseudogene; in some species, the pseudogene is well preserved, while in others, it is badly mutilated or is undetectable. Moreover, in the cat genome, the IFN-ν promoter is deleteriously mutated. It is likely that the IFN-ν gene family was rendered useless prior to mammalian diversification. Its presence on the edge of the Type I IFN locus in mammals may have shielded it from obliteration, allowing its detection.
Sources and functions
IFN-α and IFN-β are secreted by many cell types including lymphocytes (NK cells, B-cells and T-cells), macrophages, fibroblasts, endothelial cells, osteoblasts and others. They stimulate both macrophages and NK cells to elicit an anti-viral response, and are also active against tumors. Recently, plasmacytoid dendritic cells have been identified as being the most potent producers of type I IFNs in response to antigen, and have thus been dubbed natural IFN-producing cells.
IFN-ω is released by leukocytes at the site of viral infection or tumors.
IFN-α acts as a pyrogenic factor by altering the activity of thermosensitive neurons in the hypothalamus thus causing fever. It does this by binding to opioid receptors and eliciting the release of prostaglandin-E2 (PGE2).
Non-mammalian types
Avian Type I IFNs have been characterized and preliminarily assigned to subtypes (IFN I, IFN II, and IFN III), but their classification into subtypes should await a more extensive characterization of avian genomes.
Functional lizard Type I IFNs can be found in lizard genome databases.
Turtle Type I IFNs have been purified. They resemble mammalian homologs.
The existence of amphibian Type I IFNs has been inferred from the discovery of the genes encoding their receptor chains. They have not yet been purified, nor have their genes been cloned.
Piscine (bony fish) Type I IFN has been cloned in several teleost species. With few exceptions, and in stark contrast to avian and especially mammalian IFNs, they are present as single genes (multiple genes are however seen in polyploid fish genomes, possibly arising from whole-genome duplication). Unlike amniote IFN genes, piscine Type I IFN genes contain introns, in similar positions as do their orthologs, certain interleukins.
- Schultz et al., The interferon system of non-mammalian vertebrates. Developmental and Comparative Immunology, Volume 28, pages 499-508.
- Samarajiwa et al., Type I interferons: genetics and structure. The Interferons: Characterization and Application, 2006 Wiley-VCH, pages 3-34.
- Oritani and Tomiyama, Interferon-ζ/limitin: Novel type I Interferon that displays a narrow range of biological activity. International journal of hematology, 2004, Volume 80, pages 325-331 .
- Hardy et al., Characterization of the type I interferon locus and identification of novel genes. Genomics, 2004, Volume 84 pages 331-345.
- Todd and Naylor, New chromosomal mapping assignments for argininosuccinate synthetase pseudogene 1, interferon-beta 3 gene, and the diazepam binding inhibitor gene. Somat. Cell. Mol. Genet. 1992 Volume 18, pages 381-5.
- Wang et al., Fever of recombinant human interferon-alpha is mediated by opioid domain interaction with opioid receptor inducing prostaglandin E2. J Neuroimmunol. 2004 Nov;156(1-2):107-12. | <urn:uuid:4c5bcdff-1f9c-4574-9c72-31bc7ba5e853> | CC-MAIN-2013-20 | http://en.wikipedia.org/wiki/Interferon_type_1 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.907784 | 1,322 | 3.125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Sawmill process
A sawmill's basic operation is much the same as it was hundreds of years ago: a log enters on one end and dimensional lumber exits on the other.
- After trees are selected for harvest, the next step in logging is felling the trees, and bucking them to length.
- Branches are cut off the trunk. This is known as limbing.
- Logs are taken by logging truck, rail or a log drive to the sawmill.
- Logs are scaled either on the way to the mill or upon arrival at the mill.
- Debarking removes bark from the logs.
- Decking is the process for sorting the logs by species, size and end use (lumber, plywood, chips).
- The head saw, head rig, or primary saw breaks the log into cants (unfinished logs to be further processed) and flitches (unfinished planks) with a smooth edge.
- Depending upon the species and quality of the log, the cants will either be further broken down by a resaw or a gang edger into multiple flitches and/or boards.
- Edging will take the flitch and trim off all irregular edges leaving four-sided lumber.
- Trimming squares the ends at typical lumber lengths.
- Drying removes naturally occurring moisture from the lumber. This can be done in kilns or by air-drying.
- Planing smooths the surface of the lumber leaving a uniform width and thickness.
- Shipping transports the finished lumber to market.
Early history
The Hierapolis sawmill, a Roman water-powered stone saw mill at Hierapolis, Asia Minor (modern-day Turkey) dating to the second half of the 3rd century AD, is the earliest known sawmill. It is also the earliest known machine to incorporate a crank and connecting rod mechanism.
The earliest literary reference to a working sawmill comes from a Roman poet, Ausonius, who wrote an epic poem about the river Moselle in Germany in the late 4th century AD. At one point in the poem he describes the shrieking sound of a watermill cutting marble. Marble sawmills also seem to be indicated by the Christian saint Gregory of Nyssa from Anatolia around 370/390 AD, demonstrating a diversified use of water-power in many parts of the Roman Empire.
Sawmills became widespread in medieval Europe again, as one was sketched by Villard de Honnecourt in c. 1250. They are claimed to have been introduced to Madeira following its discovery in c. 1420 and spread widely in Europe in the 16th century.
Prior to the invention of the sawmill, boards were rived and planed, or more often sawn by two men with a whipsaw, using saddleblocks to hold the log, and a saw pit for the pitman who worked below. Sawing was slow, and required strong and hearty men. The topsawyer had to be the stronger of the two because the saw was pulled in turn by each man, and the lower man had the advantage of gravity. The topsawyer also had to guide the saw so that the board was of even thickness. This was often done by following a chalkline.
Early sawmills simply adapted the whipsaw to mechanical power, generally driven by a water wheel to speed up the process. The circular motion of the wheel was changed to back-and-forth motion of the saw blade by a connecting rod known as a pitman arm (thus introducing a term used in many mechanical applications).
Generally, only the saw was powered, and the logs had to be loaded and moved by hand. An early improvement was the development of a movable carriage, also water powered, to move the log steadily through the saw blade.
A type of sawmill without a crank, known from Germany, is called a "knock and drop" or "drop mill": "The oldest sawmills in the Black Forest are "drop sawmills", also referred to as "knock and drop sawmills". They have all disappeared in Europe except for three in the Black Forest, one of which is in the Open Air Museum in Gutach. In these drop sawmills, the frame carrying the saw blade is knocked upwards by cams as the shaft turns. These cams are let into the shaft on which the waterwheel sits. When the frame carrying the saw blade is in the topmost position it drops by its own weight, making a loud knocking noise, and in so doing it cuts the trunk. From 1800 onwards."
A small mill such as this would be the center of many rural communities in wood-exporting regions such as the Baltic countries and Canada. The output of such mills would be quite low, perhaps only 500 boards per day. They would also generally only operate during the winter, the peak logging season.
In the United States, the sawmill was introduced soon after the colonisation of Virginia by recruiting skilled men from Hamburg. Later the metal parts were obtained from the Netherlands, where the technology was far ahead of that in England, where the sawmill remained largely unknown until the late 18th century. The arrival of a sawmill was a large and stimulative step in the growth of a frontier community.
Industrial revolution
Early mills had been taken to the forest, where a temporary shelter was built, and the logs were skidded to the nearby mill by horse or ox teams, often when there was some snow to provide lubrication. As mills grew larger, they were usually established in more permanent facilities on a river, and the logs were floated down to them by log drivers. Sawmills built on navigable rivers, lakes, or estuaries were called cargo mills because of the availability of ships transporting cargoes of logs to the sawmill and cargoes of lumber from the sawmill.
The next improvement was the use of circular saw blades, and soon thereafter, the use of gangsaws, which added additional blades so that a log would be reduced to boards in one quick step. Circular saw blades were extremely expensive and highly subject to damage by overheating or dirty logs. A new kind of technician arose, the sawfiler. Sawfilers were highly skilled in metalworking. Their main job was to set and sharpen teeth. The craft also involved learning how to hammer a saw, whereby a saw is deformed with a hammer and anvil to counteract the forces of heat and cutting. The circular saw was a later introduction, perhaps invented in England in the late 18th century, but perhaps in 17th century Holland (Netherlands). Modern circular saw blades have replaceable teeth, but still need to be hammered.
The introduction of steam power in the 19th century created many new possibilities for mills. Availability of railroad transportation for logs and lumber encouraged building of rail mills away from navigable water. Steam powered sawmills could be far more mechanized. Scrap lumber from the mill provided a ready fuel source for firing the boiler. Efficiency was increased, but the capital cost of a new mill increased dramatically as well.
By 1900, the largest sawmill in the world was operated by the Atlantic Lumber Company in Georgetown, South Carolina, using logs floated down the Pee Dee River from as far as the edge of the Appalachian Mountains in North Carolina.
A restoration project for Sturgeon's Mill in Northern California is underway, restoring one of the last steam-powered lumber mills still using its original equipment.
Current trends
In the twentieth century the introduction of electricity and high technology furthered this process, and now most sawmills are massive and expensive facilities in which most aspects of the work are computerized. The cost of a new facility with 2 mmfbm/day capacity is up to CAN$120,000,000. A modern operation will produce between 100 mmfbm and 700 mmfbm annually.
Small gasoline-powered sawmills run by local entrepreneurs served many communities in the early twentieth century, and still serve specialty markets today.
A trend is the small portable sawmill for personal or even professional use. Many different models have emerged with different designs and functions. They are especially suitable for producing limited volumes of boards, or specialty milling such as oversized timber.
Technology has changed sawmill operations significantly in recent years, emphasizing increasing profits through waste minimization and increased energy efficiency as well as improving operator safety. The once-ubiquitous rusty, steel conical sawdust burners have for the most part vanished, as the sawdust and other mill waste is now processed into particleboard and related products, or used to heat wood-drying kilns. Co-generation facilities will produce power for the operation and may also feed superfluous energy onto the grid. While the bark may be ground for landscaping barkdust, it may also be burned for heat. Sawdust may make particle board or be pressed into wood pellets for pellet stoves. The larger pieces of wood that won't make lumber are chipped into wood chips and provide a source of supply for paper mills. Wood by-products of the mills will also make oriented strand board (OSB) paneling for building construction, a cheaper alternative to plywood for paneling.
Additional Images
Wood from Victorian mountain ash, Swifts Creek
A sawmill in Armata, on mount Smolikas, Epirus, Greece.
A preserved water powered sawmill, Norfolk, England.
- "Lumber Manufacturing". Lumber Basics. Western Wood Products Association. 2002. Retrieved 2008-02-12.
- Ritti, Grewe & Kessener 2007, p. 161
- Ritti, Grewe & Kessener 2007, pp. 149–153
- Wilson 2002, p. 16
- C. Singer et at., History of Technology II (Oxford 1956), 643-4.
- Charles E. Peterson, 'Sawdust Trail: Annals of Sawmilling and the Lumber Trade' Bulletin of the Association for Preservation Technology Vol. 5, No. 2. (1973), pp. 84-5.
- Adam Robert Lucas (2005), "Industrial Milling in the Ancient and Medieval Worlds: A Survey of the Evidence for an Industrial Revolution in Medieval Europe", Technology and Culture 46 (1): 1-30 [10-1]
- Peterson, 94-5.
- Oakleaf p.8
- Norman Ball, 'Circular Saws and the History of Technology' Bulletin of the Association for Preservation Technology 7(3) (1975), pp. 79-89.
- Edwardian Farm: Roy Hebdige's mobile sawmill
- Steam traction engines
- IN-TIME Timber Supply Chain Optimization http://www.mjc2.com/real-time-manufacturing-scheduling.htm
- Grewe, Klaus (2009), "Die Reliefdarstellung einer antiken Steinsägemaschine aus Hierapolis in Phrygien und ihre Bedeutung für die Technikgeschichte. Internationale Konferenz 13.−16. Juni 2007 in Istanbul", in Bachmann, Martin, Bautechnik im antiken und vorantiken Kleinasien, Byzas 9, Istanbul: Ege Yayınları/Zero Prod. Ltd., pp. 429–454, ISBN 978-975-8072-23-1
- Ritti, Tullia; Grewe, Klaus; Kessener, Paul (2007), "A Relief of a Water-powered Stone Saw Mill on a Sarcophagus at Hierapolis and its Implications", Journal of Roman Archaeology 20: 138–163
- Oakleaf, H.B. (1920), Lumber Manufacture in the Douglas Fir Region, Chicago: Commercial Journal Company
- Wilson, Andrew (2002), "Machines, Power and the Ancient Economy", The Journal of Roman Studies 92: 1–32
- Steam powered saw mills
- The basics of sawmill (German)
- Nineteenth century sawmill demonstration
- Database of worldwide sawmills
- Reynolds Bros Mill, northern foothills of Adirondack Mountains, New York State
- L. Cass Bowen Mill, Skerry, New York | <urn:uuid:f077b2f9-cedd-4343-ad36-cb14791a2c0c> | CC-MAIN-2013-20 | http://en.wikipedia.org/wiki/Sawmill | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945071 | 2,574 | 3.890625 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
Théodore Simon was born on July 10, 1872 in Dijon, Burgundy, France. During much of his early life, he was fascinated by Alfred Binet's work and constantly read his books. His interest in psychology continually increased, especially as the need for clinical experience in the field decreased.
In 1899, he became an intern at the asylum in Perray-Vaucluse, where he began his famous work on abnormal children. This drew the attention of Binet, who was at the time studying the correlation between physical growth and intellectual development. Binet came to the asylum and continued his work there with Simon. This research led to Simon's medical thesis on the topic in 1900.
From 1901-1905, Simon worked in various hospitals, from Sainte-Anne to Dury-les-Amiens. 1905 is the year during which Simon and Binet made public their famous Binet-Simon Intelligence Scale, the first intelligence measuring device ever devised. It premiered in L'Année psychologique, a journal founded by Binet in 1895.
Throughout his life after this point, Simon always remained critical of immoderate and improper use of the scale. He believed that its over-use and inappropriate use prevented other psychologists from achieving Binet's ultimate goal: understanding human beings, their nature, and their development.
The scale was revised in 1908 and again in 1911, but Simon kept it unchanged after Binet's death, out of respect for one of history's greatest psychologists and his true idol.
After 1905 until 1920, Simon worked as the head psychiatrist at St. Yon hospital. In 1920, he returned as medical director at Perray-Vaucluse until 1930. From there, he moved to act as medical director until late 1936, when he retired. Throughout his life (starting in 1912 until 1960) he was also an editor for Bulletin of Société Alfred Binet. He died of natural causes in 1961.
Wolf, T. H. (1961). American Psychologist, 16: 245-248.
|Graphing Random Walk|
I am trying to produce a random walk for a distance n and my task is to
find how many times along the walk the difference between the two values
produced is 0. The walk gives a binary output and is linked to two
selective groups. I feel I have successfully completed this bit, but am
having difficulty plotting a graph to show this. The plot should be the x
value against the natural numbers, thus a visual description of how many
times the graph goes through the x-axis, to prove the Y value produced.
What function should I use to plot this graph?
Does the function need to be done within the "For" function?
If anyone has any ideas or clues as to how I can overcome this problem,
please be in touch.
Attachment: RandomWalk.nb, URL: , | <urn:uuid:ba405fca-7a87-437c-bc01-cb88a605d0a3> | CC-MAIN-2013-20 | http://forums.wolfram.com/student-support/topics/21870 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920599 | 180 | 3.015625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Dossi, Dosso (Giovanni Luteri) (c.1490?-1542). The outstanding painter of the Ferrarese School in the 16th century.
His early life and training are obscure, but Vasari's assertion that he was born around 1474 is now thought unlikely. He is first recorded in 1512 at Mantua (the name `Dosso' probably comes from a place near Mantua--he is not called `Dosso Dossi' until the 18th century). By 1514 he was in Ferrara, where he spent most of the rest of his career, combining with the poet Ariosto in devising court entertainments, triumphs, tapestries, etc. Dosso painted various kinds of pictures--mythological and religious works, portraits, and decorative frescos--and is perhaps most important for the part played in his work by landscape, in which he continues the romantic pastoral vein of Giorgione and Titian. The influence from these two artists is indeed so strong that it is thought he must have been in Venice early in his career. Dosso's work, however, has a personal quality of fantasy and an opulent sense of color and texture that gives it an individual stamp (Melissa, Galleria Borghese, Rome, c.1523). His brother Battista Dossi (c.1497-1548) often collaborated with him, but there is insufficient evidence to know whether he made an individual contribution.
Photographs by Carol Gerten-Jackson. | <urn:uuid:edca70c3-130a-4403-bb60-6a8906c86782> | CC-MAIN-2013-20 | http://ibiblio.org/wm/paint/auth/dossi/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.980567 | 324 | 2.765625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
- Yes, this is a good time to plant native grass seed in the ground. You may have to supplement with irrigation if the rains stop before the seeds have germinated and made good root growth.
- Which grasses should I plant? The wonderful thing about California is that we have so many different ecosystems; the challenging thing about California is that we have so many different ecosystems. It’s impossible for us to know definitively which particular bunchgrasses used to grow or may still grow at your particular site, but to make the best guesses possible, we recommend the following:
- Best-case scenario is to have bunchgrasses already on the site that you can augment through proper mowing or grazing techniques.
- Next best is to have a nearby site with native bunchgrasses and similar elevation, aspect, and soils, that you can use as a model.
- After that, go to sources such as our pamphlet Distribution of Native Grasses of California, by Alan Beetle, $7.50.
- Also reference local floras of your area, available through the California Native Plant Society.
Container growing: We grow seedlings in pots throughout the season, but ideally you should sow six months before you want to put plants in the ground. Though restorationists frequently use plugs and liners (long, narrow containers), and they may be required for large areas, we prefer growing them the horticultural way: first in flats, then transplanting into 4" pots, and, when they are sturdy little plants, into the ground. Our thinking is that since they are not tap-rooted but fibrous-rooted (one of their main advantages as far as deep erosion control is concerned), square 4" pots suit them, and so far our experiences have borne this out.
In future newsletters, we will be reporting on the experiences and opinions of Marin ranchers Peggy Rathmann and John Wick, who are working with UC Berkeley researcher Wendy Silver on a study of carbon sequestration and bunchgrasses. So far, it’s very promising. But more on that later. For now, I’ll end with a quote from Peggy, who grows, eats, nurtures, lives, and sleeps bunchgrasses, for the health of their land and the benefit of their cows.
“It takes a while. But it’s so worth it.” | <urn:uuid:c183066d-32a9-42eb-91b6-191fdb0980c2> | CC-MAIN-2013-20 | http://judithlarnerlowry.blogspot.com/2009/02/simplifying-california-native.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.956731 | 495 | 2.515625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Radiation Levels Along the West Coast Not A Concern
The unprecedented earthquake that struck Japan on Friday, March 11, 2011, and the subsequent tsunami that devastated Japan's eastern seaboard also affected a nuclear power plant in Fukushima. Despite valiant ongoing containment efforts, radioactive materials have escaped into the air, elevating radiation levels in surrounding areas. As of March 16, an emergency evacuation has been ordered for people who live within 20 kilometers (12.4 miles) of the troubled nuclear power plant. While these events are occurring more than 4,500 miles from the West Coast of the United States, there is growing public concern regarding radiation. However, authorities from the Departments of Health in Washington, Oregon and Alaska (the three states in NN/LM PNR along the coast) state that there is no public health risk from the damaged nuclear reactor.
Visit the Washington State Department of Health website for more information about the nuclear reactor in Japan and any associated health risks.
Oregonians can visit the Oregon Health Authority’s web site.
Alaskans can go to the State of Alaska Health and Social Services site to read about radiological preparedness.
Lastly, the journal Disaster Medicine and Public Health Preparedness has published an open-access supplement on nuclear preparedness: http://www.dmphp.org/content/vol5/Supplement_1/index.dtl
Articles from this and other publications of the Nuclear Detonation Scarce Resources Project Working Group can be accessed through the Radiation Emergency Medical Management (REMM) tool at http://www.remm.nlm.gov/triagetool_intro.htm . REMM is a source of evidence-based, online and downloadable guidance about clinical diagnosis and treatment of radiation injury for health care providers.
And, for resources for disaster planning and response, remember to visit the NN/LM Emergency Preparedness Toolkit – http://nnlm.gov/ep/ | <urn:uuid:1baaf342-d11a-4dbc-ab67-c02c4388cf54> | CC-MAIN-2013-20 | http://nnlm.gov/pnr/dragonfly/2011/03/21/radiation/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.909363 | 402 | 3.0625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
The Digital Library is a database of articles about successful VoiceThread projects. Our hope is to create a resource that offers guidance and inspiration for people undertaking new projects. Please contribute a VoiceThread to help the Digital Library grow.
Using VoiceThread in an online course from Professor Russ Meade
VoiceThread "humanizes" the on-line classroom experience. As a college Professor, I teach all over the US exclusively asynchronously. One of the drawbacks of online learning has always been that the student feels isolated and unconnected with either his or her classmates...
Higher Ed from Della Curtis
An engaging discussion between graduate students looking to earn a master's degree in education. This example showcases the collaboration that can be captured in a VoiceThread between colleagues...
7th grade radio advertisements from Terry Casey
We explored the power of radio advertising and then students created their own advertisements. VoiceThread allowed us to host our ads and the other students are then able to listen and leave their opinions.
Book Review: A Single Shard by Linda Sue Park from C Vidor
This VoiceThread aims to make these elements a bit more familiar to students with brief explanations and interesting images. The VoiceThread also suggests ways in which some themes of the book are evident in the story's imagery.
2nd graders play I-Spy
The project incorporates many tech skills as well as many literacy skills into a fun project the children loved. The parents and teachers also loved sharing their students' work.
Higher Ed analysis of Tim O'Brien's "The Things They Carried"
This VoiceThread encouraged my students to critically examine the story and post their insights for the entire world to see. I saw them go from being reluctant and nervous students to enthusiastic and totally engaged teachers of one another.
8th grade Historical Fiction from Shirley Scamardella
This picture book was written, illustrated and told by students. It was entered into the Scholastic Books, Kids are Authors Contest and it won Honorable Mention. I feel this is a good VoiceThread because this is the finished product of a two month project.
Poetry and Illustration from Constance Vidor
The Poetry and Illustration VoiceThread shows how illustration can be used to interpret and illuminate poetry written for young readers. It begins by showing illustrations by two different illustrators of Edward Lear's The Owl and the Pussycat.
Comparing J.S. Bach and Paul McCartney, Constance Vidor
The J.S. Bach and Paul McCartney VoiceThread introduces young learners to the great baroque composer by way of a comparison/contrast with a musician with whom they are more familiar.
1st grade - Reading Analysis from Leanne Windsor
This VoiceThread was a culmination of a project we did in library to get the children thinking about the books they were choosing to read and why they liked them. They were also learning about story structure...
Kindergarten Storybook from Leanne Windsor
This VoiceThread displays the illustrations that the children drew with author Alison Lester when she visited our school. We followed the pattern of her picture book series about children and what they are doing day to day.
4th Grade - Where I'm From Poems from Tara McCartney
Students shared personally significant poetry against a backdrop of their own self portrait. This VoiceThread gave students a chance to share their work orally, as well as to explore the cultural differences between our students in a safe environment.
12th Grade - A Day in the Life from Joanie Batts
These High-School seniors used VoiceThread to create the final segment to a school wide literacy project, "A Day in the Life of Dunnellon High School."
4th Grade - I Am Poems from Jackie Gerstein
The "I am" feedback project demonstrates a method for providing qualitative feedback to students' poetry: students began with a hands-on activity: the magnetic poetry, and later put it into a VoiceThread.
4th Grade English from Ms. Naugle
VoiceThread enabled my students to put their poems out in an audio format to be shared with others. They eagerly practiced their speaking fluency to get it "just right" because they wanted to impress their "audience".
4th Grade book-reading discussion from Krystina Kelly
This VoiceThread shows how a wide number of students from different classes and grades can use VoiceThread to have an asynchronous conversation about books.
Kindergarten reading from Heather Taylor
I think that this is a good project because the children are able to share some of their experiences with reading and talking about books at home.
7th Grade from Amy Cobb
This is a great example of what a VoiceThread can do when embedded in a blog: foster global conversation.
7th Grade from Amy Cobb 2
In place of a written assessment the students were asked to take all the information they gathered from their study on Edgar Allan Poe and put it all into the "What do you know about Poe?" VoiceThread.
7th Grade from Amy Cobb 3
This is a powerful reflection of a young girl's seventh grade experience. It tells a story using six photos that defined her middle school year.
3rd Grade from Alice Mercer
This VoiceThread demonstrates the use of features that are unique to the application. The creator has used both voice and text to "teach" the lesson.
10th grade Chinese language practice from Lilia Hurteau
VoiceThread can be used to teach Chinese in a high school setting. Students have to repeat the words and then make sentences with the new vocabulary. They record themselves and they have to be creative.
9th grade Chinese language lesson from James Rolle
VoiceThread provides a medium for this character-by-character explanation of a commonly used phrase in Chinese that students can listen to and learn on their own time.
Language learners use VoiceThread to practice speaking
This is a great example of how an ESL student can practice her computer skills and her language skills to talk about everyday activities. Students can practice speaking as many times as they like before showing it to their teachers and classmates.
11th grade - French fluency and history from Hassina Taylor
Students found that responding to my questions orally via the Internet was a great way to improve on their fluency, and they found it very challenging to use the visual cues to find answers. They concentrated much better and retained information because it is a "hands-on"...
3rd grade language "Les Trois Petits Cochons" from Mme Smith
Using VoiceThread, students realized the power of voice as a tool of expression. All students in the class were able to contribute to this VoiceThread presentation of a play they had learned.
Higher-Ed, Studying Abroad in Ecuador, David Thompson
This VoiceThread is a good example of digital storytelling for the purpose of reflecting on a study abroad experience.
7th Grade Spanish from Eve Millard
Learning a second or foreign language, these students are introduced to vocabulary via images or text, and engage in oral practice of the language.
Higher Ed blogs in teaching from Kristen Kozloski, Ph.D.
We used blogging as a reflective practice in my course on Designing Multimedia for Learning. This is an example of our final reflection, as a class, of that blogging process.
Higher Ed online technologies from Jen Hegna
The goal of this project was to have each team member reflect upon tools we utilized to collaborate and complete our online project entitled - Disrupting TCS 702.
7th graders practice Math in Action from Ms Redd
VoiceThread showcases this great example of Math in Action. My students love the idea that they can comment on a video featuring one of their teachers and it feels like they aren't even doing math!!
7th Grade - Exploring Probability from Britt Gow
Students from two countries were able to comment on the slides which show images of probability problems. My students enjoyed this exercise and another teacher has used it in her class to extend each of the problems.
7th Grade - Measurement from Britt Gow
Year 7 students from Hawkesdale P12 college were able to share their knowledge with Grade 5/6 students from a Ballarat Primary School about measurement and ratio. Students were required to articulate their thinking...
6th Grade math from Jackie Ionno
This VoiceThread was created by the student and shows that he really got the intent of the assignment. It shows effort, creativity, organization, and a mathematical knowledge of the the real world.
4th Grade problem-solving from Krystina Kelly
This is a great VoiceThread example because it shows how an entire 4th grade class can work together to develop problem-solving strategies.
Higher Ed teaching with technology from Ellen Dobson
This VoiceThread, entitled "Surfometry," was created by a student-teacher as an assignment for a "Computers in Education" course. The creator incorporated images, video and graphics into an engaging geometry lesson.
Language from Carla Arena
This VoiceThread shows current and new educators how VoiceThread can be used in English/Language Arts courses by asking students to assemble a creative artifact that weaves in literacy benchmarks: poetry, personification...
Language from Carla Arena 2
This professional-development instructor used VoiceThread to introduce new technology for education to her groups of educators.
9th graders write Children's stories about astronomy, Mrs. Edenstrom
I had all of my 9th grade science students do a Children's story about astronomy. They had to have facts, but tell it in a creative way that could be read and understood by elementary aged students! I had great success with this.
4th graders study plants in collaboration with Pakistani students
Students from the US collaborate with Pakistani students to learn about a common interest! This project can be used by teachers of any grade level, can be shown to parents, can be a model for showing kids the possibilities of the medium.
7th Grade - The Water Cycle from Britt Gow
Year 7 Science students did a unit of work on water and the water cycle. They were asked to draw a picture containing mountains, clouds, the sea, a lake, a forest and an underground water reservoir (aquifer).
7th graders Go Green from Mrs. Beatrice Reiser
This VoiceThread is an excellent example because it is interactive and promotes environmental issues.
1st Grade Science from Michele Green
First grade students researched fish in the library, used Paint to draw pictures of them, and then recorded their voices.
The Silk Road - from Constance Vidor
This VoiceThread provides a scaffolding for research and active engagement with an important topic of world history.
Digital storytelling - Abraham Lincoln's dog, Fido, from Clare Caddell
VoiceThread brought a little-known story about Lincoln to life, with images as well as voice. It was created in response to my second and third graders, who wanted to know more about Lincoln's dog, Fido.
History Podcast with secondary-ed students from Laurie Cohen
This project gives students an opportunity to be creative while demonstrating their content knowledge and technological skills. After developing the Lesson Plan and creating this sample, our US History teacher loved it for his class. He is using it in the 10th grade class.
5th Grade - Ellis Is. Narratives from Barbara De Santis
I wanted this project to enable the students to truly feel the immigrant experience. While primary documents are always in their textbook, there is seldom time to closely examine the images looking for clues to foster understanding.
3rd Grade - School Community from Trish Harrison
The objective was for students to learn to express values while looking at different communities. They practiced discussion; brainstorming; writing down ideas; using ideas in small-group talk as conversations.
11th Grade - Reconstruction from Molly Lynde
This is a great VoiceThread foremost because my students were actively engaged and finished with a clearer understanding of what the post Civil War era was really about... they no longer thought of a "universal freed slave" dancing in the street.
4th Grade - Letters from the Internment Camps
In this VoiceThread, students explore an historical event that is relevant to their physical community, the removal of Japanese-Americans to internment camps after the bombing of Pearl Harbor.
8th Grade - Colors of the Night from Mrs. Brosnan
My goal was to further enhance their Art History knowledge that by using VoiceThread enabled me to extend my teaching "outside of the class room".
K-12 art, poetry, and music from Erin Berg
This VT is an example of the power of collaboration using technology. This encompasses art through words, visuals, and music.
5th grade music/video project from Elissa Reichstein
I believe (this VoiceThread) shows an interesting way to use VoiceThread to motivate learning and celebrate student interests and accomplishments.
2nd Grade from Donna Lubin
A great example of how a PowerPoint presentation can be uploaded easily and used as an archive of a PowerPoint-assisted lecture...
Higher Ed Online Learning from Michelle Pacansky-Brock
An engaging and dynamic lecture delivered within an interactive environment engages in a way no 'downloadable' lecture can.
Higher Ed Online Learning from Michelle Pacansky-Brock 2
By engaging in discussions, students explore and engage in course material more deeply while practicing critical-thinking and discourse.
5th and 6th-grade Digital literacy project from Julienne Hogarth
I gave the learners the images and they researched and posted information and comments about Dale Chihuly's life and art. The learners were thrilled to use VoiceThread as a tool in their learning and became very excited about the artwork.
5th Grade - Digital portfolios for student-led parent confs.
As we're an international school, I was looking for a seamless way that relatives in other countries (grandparents, parents working away, etc.) could view, comment on, and give feedback about the digital portfolio.
8th Grade - Refugee stories from Creative Technology
VoiceThread has enabled refugee communities to share stories about what it means to come to a new country to live.
6th Grade Class trip from Jennifer Bamsberger
We used VoiceThread to document our class trip to the Maker Faire 2008 in the San Francisco Bay Area as part of our study of the Spirit of Creativity. For some of our sixth graders, this was the first long trip away from home without family.
10th Grade Child Development from Andrea Holtry
This project was a collaboration of the high school students in four Child Development classes, illustrating many of the problems facing children today. | <urn:uuid:708a3238-d316-42b5-bb76-8a858c51acfa> | CC-MAIN-2013-20 | http://nypl.voicethread.com/about/library/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95233 | 3,016 | 2.671875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Image size is the size of your original digital photo file, measured in pixels and DPI (dots per inch, sometimes referred to as PPI, pixels per inch). What is a pixel? A pixel is a small square dot. DPI refers to the number of dots (pixels) per inch. Why is this important? If an image is too small, you might not be able to order a large print or other photo product. A general rule of thumb for image size versus print size: the image should be at least the size of the print you want multiplied by 300, at 300 DPI. For example, if you want to order a 4x6 print, the image size should be 1200 pixels (4 x 300) by 1800 pixels (6 x 300) at 300 DPI. If the image size were half of that (600 by 900), the 4x6 print would likely come out distorted or pixelated.
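As a rough illustration of that rule of thumb, here is a small Python sketch (using the Pillow library; the file name and helper function are hypothetical, and this is not a tool offered by any particular print service) that checks whether an image has enough pixels for a requested print size at 300 DPI:

```python
from PIL import Image  # Pillow

def print_ready(path, print_w_in, print_h_in, dpi=300):
    """Return True if the image has enough pixels for the print size."""
    with Image.open(path) as im:
        px_w, px_h = im.size          # actual pixel dimensions
    need_w, need_h = print_w_in * dpi, print_h_in * dpi
    # Accept either orientation (landscape or portrait).
    ok = (px_w >= need_w and px_h >= need_h) or \
         (px_w >= need_h and px_h >= need_w)
    print(f"{px_w}x{px_h} px vs {need_w}x{need_h} px needed: "
          f"{'OK' if ok else 'too small'}")
    return ok

# Example: a 4x6 print needs at least 1200x1800 pixels.
print_ready("photo.jpg", 4, 6)
```

A 1200x1800-pixel file passes for a 4x6 print, while a 600x900-pixel file fails, matching the rule described above.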
Camera settings: Decide in advance what is more important, image quality or room on your memory card. You can set your camera to take photos that are larger or smaller in size. If you know you will only be printing 4x6 photos, you can reduce the image quality, which allows you to store more photos on your memory card. If you will be printing enlargements or other photo products like photo books, keep the setting on "high" for higher-quality images. The image sizes will be larger, and you will not be able to store as many on your memory card at one time. Also, set the file type to "jpeg" if your camera allows you to control that detail. You might have a "tiff" option, but it is not necessary to save the photos as "tiff" files, and it will only take up more room on your memory card.
If you have a point-and-shoot camera, open your main menu and find the setting for "image quality" (or something similar). Usually the options are "low," "medium," and "high." Choose "high" for higher-quality (larger) photos. If you have an SLR camera, you probably have additional options. Just stick to high-quality jpeg images, unless you know you will be doing extensive image editing and post-production; in that case, you might want to shoot RAW files.
Resolution: The resolution of your photo is directly determined by the image size. The more pixels your photo has, the higher its resolution.
When you upload photos to your online account, you are given three upload options: "Regular," "Fast," and "Fastest." When you choose "Fast" or "Fastest," the photos are compressed, so the resolution will be less than the original photo file. So, if you are just uploading to order 4x6 prints, "Fastest" will be fine. But, if you wish to order enlargements, photo books, calendars, and other photo products, choose the "Regular" speed, which uploads the photos at their original resolution.
Once the photos are uploaded, you will notice three bars for each photo in your account. If all three bars are green, that means that the resolution of the photo that is in the account is sufficient enough to order just about anything on the site. If the bars are all red, you have uploaded a low resolution photo. Try to find the original photo file and check the size. If the size is sufficient enough to order prints (based on the rule we mentioned above about multiplying the desired print size by 300 and comparing to the actual image size), re-upload the photo at "Regular" upload speed. Photos with two or three red bars will generate poor quality prints, especially if you are trying to order anything larger than 4x6 prints. We also will double check the resolution on our end. If we catch a low res file when printing, we always stop and notify you. We want you to be happy with your prints.
Now that you understand image size and resolution a bit more, and understand why they are important when working in your online photo account, here are a few more extra tips about image size and resolution:
- Most computer screens display photos at 72 DPI. That means the printed photo will look different from how it appears on your computer screen.
- If you crop a photo too much (zoom in too much), it will always look pixelated and distorted, no matter how large the original image size is.
- Once you take the photo, you cannot truly increase its size or resolution by adding pixels in a photo-editing program; upsampling adds pixels but no new detail. If you wish to increase the resolution or file size, you must do so by adjusting your camera settings before you take any more photos. | <urn:uuid:1e09e4c7-ed1a-4864-90b8-22fc564cbb6d> | CC-MAIN-2013-20 | http://persnicketyprints.com/tip/resolution/resolution-part-2 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.942868 | 985 | 3.328125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT
Wan Ali, Wan Zah and Mohd Ayub, Ahmad Fauzi and Wong, Su Luan and King, Hasnah Yee Tang and Wan Jaafar, Wan Marzuki (2008) Student teacher attitudes towards computer and online learning: are they a factor in students' usage? The International Journal of Learning, 15 (6). pp. 35-41. ISSN 1447-9494
Full text not available from this repository.
Computers and online learning are rapidly becoming important components of the fundamental curriculum of the Malaysian educational system. In Malaysia, the computer and online learning curriculum has been incorporated into all levels of the educational system. However, the instructional effectiveness of computers and online learning is related to many factors, including students' attitudes towards these technologies. Hence, positive attitudes towards computers and online learning are important variables to be studied among pre-service teachers. The main purpose of this study is to examine the attitudes of pre-service teachers at Universiti Putra Malaysia towards computers and online learning, and the relationship between those attitudes. In addition, the relationship between these attitudes and the usage of computers and online learning is also studied. The findings indicate that pre-service teachers have positive attitudes towards computers and online learning. The correlation between students' attitudes towards computers and online learning was significant.
Keyword: Online Learning; Attitudes toward Computer; Attitudes towards Online Learning
Subject: Computer-assisted instruction - Malaysia
Subject: College teachers - In-service training
Faculty or Institute: Faculty of Educational Studies
Publisher: Common Ground Publishing
Deposited By: Emelda Mohd Hamid
Deposited On: 25 May 2012 15:08
Last Modified: 12 Nov 2012 14:41
| <urn:uuid:626c380d-582f-47ba-9350-b504f1f0963c> | CC-MAIN-2013-20 | http://psasir.upm.edu.my/16877/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.906219 | 382 | 2.515625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT
(Washington, DC • 11/6/06) – The second of five Special Sensor Ultraviolet Limb Imager (SSULI) remote sensing instruments, developed by the Naval Research Laboratory, was launched on November 4, 2006 on board the DMSP F-17 satellite. SSULI is the first operational instrument of its kind and provides a new technique for remote sensing of the ionosphere and thermosphere from space. SSULI's measurements will provide scientific data supporting military and civil systems and will assist in predicting atmospheric drag effects on satellites and reentry vehicles.
A Boeing Delta 4 vehicle launched the Air Force's Defense Meteorological Satellite Program (DMSP) F-17 satellite and the SSULI sensor into low earth orbit from Vandenberg Air Force Base, California. SSULI will be powered on and start initial sensor checkout 30 days after launch.
"Characterization of the Earth's upper atmosphere and ionosphere is a critical goal for Department of Defense (DoD) and civilian users," said Andrew Nicholas, the SSULI Principal Investigator at NRL. He discussed the significance of the planned SSULI observations, saying, "The upper atmosphere affects many systems from global to tactical scales. These systems include GPS positioning, HF radio communications, satellite drag and orbit determination, and over-the-horizon radar. Both the neutral atmosphere and the ionosphere are driven by solar and geomagnetic forcing that occur on many timescales ranging from short (minute, hours) to medium (days to months) to long (years). Real-time global observations that yield altitude profiles of the ionosphere and neutral atmosphere, over an extended period of time (DMSP through the year 2016) will fill a critical need."
SSULI measures vertical profiles of the natural airglow radiation from atoms, molecules, and ions in the upper atmosphere and ionosphere from low earth orbit aboard the DMSP satellite. It builds on the successes of the NRL High Resolution Airglow/Aurora Spectroscopy (HIRAAS) experiment recently flown aboard the Space Test Program (STP) Advanced Research and Global Observations Satellite (ARGOS). SSULI makes measurements from the extreme ultraviolet (EUV) to the far ultraviolet (FUV) over the wavelength range of 80 nm to 170 nm with 2.4 nm resolution. SSULI also measures the electron density and neutral density profiles of the emitting atmospheric constituents. SSULI uses a spectrograph with a mirror capable of scanning below the satellite horizon from 10 degrees to 27 degrees every 90 seconds. These observations represent a vertical slice of the Earth's atmosphere from 750 km to 50 km in depth. Use of these data enables the development of new techniques for global ionospheric remote sensing and new models of global electron density variation.
Commenting on the practical application of the instrument, Mr. Ken Weldy, the Program Manager at NRL said, "Since natural atmospheric phenomena can disrupt day-to-day operations in the military use of space, we look forward to providing SSULI operational products to feed into the Global Assimilation of Ionospheric Measurements (GAIM) model. This will provide an important piece of the characterization of the Earth's upper atmosphere and ionosphere."
An extensive data processing suite was developed to support on-orbit observations and flight operations. It includes data reduction software using unique science algorithms developed at NRL, comprehensive data validation techniques, and graphical interfaces for the user community. After launch, the SSULI sensor, software, and derived atmospheric specification will undergo an extensive validation. After validation, SSULI products will be distributed by the Air Force Weather Agency to support operational DoD systems.
Additional information about the SSULI instrument and its data processing software is available at http://www.nrl.navy.mil/tira/Projects/ssuli/.
The Defense Meteorological Satellite Program (DMSP) is a Department of Defense (DoD) program run by the Air Force Space and Missile Systems Center (SMC). The program designs, builds, launches, and maintains several near-polar orbiting, sun synchronous satellites monitoring the meteorological, oceanographic, and solar-terrestrial physics environments. Additional information is available at the DMSP web site (http://dmsp.ngdc.noaa.gov/dmsp.html).
NRL is the Department of the Navy's corporate laboratory. NRL conducts a broad program of scientific research, technology, and advanced development. The Laboratory, with a total complement of approximately 2,500 personnel, is located in southwest Washington, DC, with other major sites at the Stennis Space Center, MS; and Monterey, CA.
| <urn:uuid:a168bd55-6030-4ce8-bdae-e503fe05ec37> | CC-MAIN-2013-20 | http://psychcentral.com/news/archives/2006-11/nrl-nst110606.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.877139 | 990 | 2.90625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT
In neuroanatomy, a sulcus (Latin: "furrow", pl. sulci) is a depression or fissure in the surface of the brain.
It surrounds the gyri, creating the characteristic appearance of the brain in humans and other large mammals.
Large furrows (sulci) that divide the brain into lobes are often called fissures. The large furrow that divides the two hemispheres—the interhemispheric fissure—is very rarely called a "sulcus".
The sulcal pattern varies between human individuals, and the most elaborate overview of this variation is probably an atlas by Ono, Kubik and Abernathey: Atlas of the Cerebral Sulci.
Some of the larger sulci are, however, seen across individuals - and even species - so it is possible to establish a nomenclature.
The variation in the amount of fissures in the brain (gyrification) between species is related to the size of the animal and the size of the brain. Mammals that have smooth-surfaced or nonconvoluted brains are called lissencephalics, and those that have folded or convoluted brains gyrencephalics. The division between the two groups occurs when cortical surface area is about 10 cm2 and the brain has a volume of 3–4 cm3. Large rodents such as beavers and capybaras are gyrencephalic, and smaller rodents such as rats and mice lissencephalic.
In humans, cerebral convolutions appear at about 5 months and take at least into the first year after birth to fully develop. It has been found that the width of cortical sulci increases not only with age, but also with cognitive decline in the elderly.
Hofman MA (1985). Size and shape of the cerebral cortex in mammals. I. The cortical surface. Brain Behav Evol 27(1):28-40. PMID 3836731
Hofman MA (1989). On the evolution and geometry of the brain in mammals. Prog Neurobiol 32(2):137-58. PMID 2645619
Sereno MI, Tootell RBH (2005). "From monkeys to humans: what do we now know about brain homologies?" Current Opinion in Neurobiology 15:135-144.
Caviness VS Jr (1975). Mechanical model of brain convolutional development. Science 189(4196):18-21. PMID 1135626
Liu T, Wen W, Zhu W, Trollor J, Reppermund S, Crawford J, Jin JS, Luo S, Brodaty H, Sachdev P (2010). The effects of age and sex on cortical sulci in the elderly. NeuroImage 51(1):19-27. PMID 20156569
Liu T, Wen W, Zhu W, Kochan NA, Trollor JN, Reppermund S, Jin JS, Luo S, Brodaty H, Sachdev PS (2011). The relationship between cortical sulcal variability and cognitive performance in the elderly. NeuroImage 56(3):865-873. PMID 21397704
von Bonin G, Bailey P (1947). The Neocortex of Macaca Mulatta. The University of Illinois Press, Urbana, Illinois. | <urn:uuid:90017c93-fbd3-4e3b-bf56-08ddb285416b> | CC-MAIN-2013-20 | http://psychology.wikia.com/wiki/Sulcus_(neuroanatomy)?oldid=150425 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.811399 | 767 | 4.03125 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT
Chai Nat is located in the flat river plain of central Thailand's Chao Phraya River valley. In the south of the province the Chao Phraya (formerly Chai Nat) Dam impounds the Chao Phraya River, both for flood control and to divert water into the country's largest irrigation system for the irrigation of rice paddies in the lower river valley. The dam, part of the Greater Chao Phraya Project, was finished in 1957 and was the first dam constructed in Thailand.
Originally the city was located at Sankhaburi. In the reign of King Mongkut (Rama IV) the main settlement of the province was moved to its present-day location. During the wars with the Burmese it was an important military base for confronting the Burmese army. As all these confrontations were successful, the city gained the name Chai Nat, which means "place of victory".
The slogan of the province is "Venerable Luangpu Suk, Renowned Chao Phraya Dam, Famous Bird Park and Tasty Khao Taengkwa Pomelo".
Straw Bird Fair, Chai Nat's Product Fair and Red Cross Fair (งานมหกรรมหุ่นฟางนกนานาชาติ งานของดี และงานกาชาดจังหวัดชัยนาท) This annual fair makes good use of straw, a by-product of rice farming. Various species of huge straw birds perch on elaborately decorated floats during the straw bird procession, and a competition is held in front of Chai Nat City Hall. The event is held annually during Chinese New Year in February.
Chai Nat Pomelo Fair (งานส้มโอชัยนาท) Chai Nat is one of several provinces famous for producing exceptional pomelo. The best known are of the Khao Taengkwa variety, which has a well-rounded shape, smooth skin, a thin peel, and a sweet, crisp taste that is a little sour but not bitter. The fair is held during late August to early September in front of Chai Nat City Hall and features many activities, such as a pomelo contest, a variety of exhibitions by provincial authorities, and sales of young shoots and pomelos. | <urn:uuid:67520114-7420-40db-8b1d-23abdcbeb5e1> | CC-MAIN-2013-20 | http://sco.wikipedia.org/wiki/Chainat_Province | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.781758 | 640 | 2.53125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT
Locating thermophiles in other parts of the universe could very well aid in the search for extraterrestrial life. Most researchers agree that if life is found among the stars, it will be microbial (at least in the near-term future). Many have also suggested that intelligent life forms might very well be extinct in other parts of the universe. If scientists could locate thermophile microbes, they could piece together an archaeological picture of once-powerful civilizations.
Taiwan is well known for its hot springs. Most tourists that visit the island end up visiting at least one. Many people like to take relaxing baths in them. Hot springs can be great for people with arthritis. New research is proving that they can also be a great place to find astrobiological data.
Photosynthetic thermophiles that live in hot springs may potentially be removing significant amounts of industrially produced carbon dioxide from the atmosphere. They’ve thrived because of fundamental changes to the atmosphere caused by humanity. In fact, there are some scientists who feel that these microbes could play a vital role in regulating the planet’s climate. That role might become increasingly important in the future.
Planets that were once inhabited by industrially developed civilizations that have since vanished might be teeming with life similar to these microbes. If a planet was sufficiently changed by another race of beings, that change could have ultimately favored the development of these tiny organisms. They could indicate that intelligent lifeforms once inhabited a planet, and that the planet was different in the past than it is today.
While discovering a planet full of microbes would be initially interesting, in the future it could be a relatively common occurrence. Therefore, news services of the future might very well pass by such stories after a few weeks – much like they do today with the discovery of new exoplanets. Finding sufficient numbers of photosynthetic thermophiles would be telling about the history of a world, but it would also require a great deal of geological activity. Then again, there’s nothing to say that other civilizations wouldn’t also have the ability to increase the amount of geological activity on other planets. They might even do it on purpose, as a way of terraforming for instance.
For that matter, humans might want to give that a try. Venus is superheated because of a runaway greenhouse effect driven by excess carbon dioxide in the atmosphere. If water were transported to that very hot world, colonists could use the resulting geysers to grow bacteria that would absorb the atmospheric gas.
Leu, J., Lin, T., Selvamani, M., Chen, H., Liang, J., & Pan, K. (2012). Characterization of a novel thermophilic cyanobacterial strain from Taian hot springs in Taiwan for high CO2 mitigation and C-phycocyanin extraction Process Biochemistry DOI: 10.1016/j.procbio.2012.09.019 | <urn:uuid:fb936873-c4b3-4301-85c5-1bd5eb0d9a9c> | CC-MAIN-2013-20 | http://wiredcosmos.com/2012/10/18/searching-for-extraterrestrial-microbes/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.96284 | 601 | 3.8125 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
Burglary is a crime, the essence of which is illicit entry into a building for the purposes of committing an offense. Usually that offense will be theft, but most jurisdictions specify others which fall within the ambit of burglary. To engage in the act of burglary is to burgle (in British English) or to burglarize (in American English).
Common law definition
The common law burglary was defined by Sir Matthew Hale as the breaking and entering of the house of another, in the night time, with intent to commit a felony therein, whether the felony be actually committed or not. Its elements are:
- Breaking can be either actual, such as by forcing open a door, or constructive, such as by fraud or threats. Breaking does not require that anything be "broken" in terms of physical damage occurring. A person who has permission to enter part of a house, but not another part, commits a breaking and entering when they use any means to enter a room where they are not permitted, so long as the room was not open to enter.
- Entering can involve either physical entry by a person or the insertion of an instrument with which to remove property. Insertion of a tool to gain entry may not constitute entering by itself. Note that there must be a breaking and an entering for common law burglary. Breaking without entry or entry without breaking is not sufficient for common law burglary.
- Although rarely listed as an element, the common law required that entry occur as a consequence of the breaking. For example, if a wrongdoer partially opened a window by using a pry bar and then noticed an open door through which he entered the dwelling, there is no burglary at common law. The use of the pry bar would not constitute an entry even if a portion of it "entered" the residence. Under the instrumentality rule, the use of an instrument to effect a breaking does not constitute an entry. However, if any part of the perpetrator's body entered the residence in an attempt to gain entry, the instrumentality rule did not apply. Thus, if the perpetrator used the pry bar to pry open the window and then used his hands to lift the partially opened window, an "entry" would have taken place when he grasped the bottom of the window with his hands.
- House includes a temporarily unoccupied dwelling, but not a building used only occasionally as a habitation.
- Night time is defined as the hours between half an hour after sunset and half an hour before sunrise.
- Typically this element is expressed as the intent to commit a felony “therein”. The use of the word “therein” adds nothing and certainly does not limit the scope of burglary to those wrongdoers who break and enter a dwelling intending to commit a felony on the premises. The situs of the felony does not matter, and burglary occurs if the wrongdoer intended to commit a felony at the time he broke and entered.
The common law elements of burglary often vary between jurisdictions. The common law definition has been expanded in most jurisdictions, such that the building need not be a dwelling or even a building in the conventional sense, physical breaking is not necessary, the entry does not need to occur at night, and the intent may be to commit any felony or theft.
Etymology
The word originates from Anglo-Saxon, or Old English, one of the Germanic languages. According to one textbook, "The word burglar comes from the two German words burg, meaning 'house,' and laron, meaning 'thief' (literally 'house thief')." Another suggested etymology is from the later Latin word burgare, "to break open" or "to commit burglary", from burgus, meaning "fortress" or "castle", with the word then passing through French and Middle English, with influence from the Latin latro, "thief". The British verb "burgle" is a late back-formation.
Burglary is prosecuted as a felony or misdemeanor and involves trespassing and theft, entering a building or automobile, or remaining unlawfully with intent to commit theft or any crime, not necessarily a theft (for example, vandalism). Even if nothing is stolen in a burglary, the act is a statutory offense. Buildings can include sheds, barns, and coops; burglary of boats, aircraft, and railway cars is possible. Burglary may be an element in crimes involving rape, arson, kidnapping, identity theft, or violation of civil rights; indeed the "plumbers" of the Watergate scandal were technically burglars. As with all legal definitions in the U.S., the foregoing description may not be applicable in every jurisdiction, since there are 50 separate state criminal codes, plus Federal and territorial codes in force.
Technically, a burglary committed during the hours of daylight is not burglary, but housebreaking.
In many jurisdictions in the U.S., burglary is punished more severely than housebreaking. In California, for example, burglary was punished as burglary in the first degree, while housebreaking was punished as burglary in the second degree. California now distinguishes between entry into a residence and entry into a commercial building, with burglary of a residence drawing the heavier punishment.
In states that continue to punish burglary more severely than housebreaking, night is traditionally defined as the hours between 30 minutes after sunset and 30 minutes before sunrise.
Some academics consider burglary an inchoate crime. Others say that because the intrusion itself is harmful, this justifies punishment even when no further crime is committed.
Possession of burglar's tools, in jurisdictions that make this an offense, has also been viewed as an inchoate crime.
Under Florida State Statutes, "burglary" occurs when a person "enter[s] a dwelling, a structure, or a conveyance with the intent to commit an offense therein, unless the premises are at the time open to the public or the defendant is licensed or invited to enter." Depending on the circumstances of the crime, burglary can be classified as third-, second-, or first-degree felonies, with maximum sentences of five years, fifteen years, and life, respectively.
A person commits the offense of burglary when, without authority and with the intent to commit a felony or theft therein, he enters or remains within the dwelling house of another or any building, vehicle, railroad car, watercraft, or other such structure designed for use as the dwelling of another or enters or remains within any other building, railroad car, aircraft, or any room or any part thereof. A person convicted of the offense of burglary, for the first such offense, shall be punished by imprisonment for not less than one nor more than 20 years. For the purposes of this Code section, the term "railroad car" shall also include trailers on flatcars, containers on flatcars, trailers on railroad property, or containers on railroad property. O.C.G.A. § 16-7-1
In New York, burglary and the intended crime, if carried out, are treated as separate offenses. Burglary is a felony, even when the intended crime is a misdemeanor, and the intent to commit the crime can occur when one "enters or remains unlawfully" in the building, expanding the common law definition. It has three degrees. Third-degree burglary is the broadest, and applies to any building or other premises. Second-degree burglary retains the common-law element of a dwelling, and first-degree burglary requires one to be in a dwelling and to be armed with a weapon or to cause injury. A related offense, criminal trespass, covers unlawful entry to buildings or premises without the intent to commit a crime, and is a misdemeanor or, in the third degree, a violation. Possession of burglar's tools, with the intent to use them to commit burglary or theft, is a misdemeanor.
The Commonwealth of Massachusetts uses the term "burglary" to refer to a night-time breaking and entering of a dwelling with the intent to commit a felony. Burglary is a felony punishable by not more than twenty years; should the burglar enter with a dangerous weapon, they may be imprisoned for life. Unlawful entries of a structure other than a dwelling are labeled "breaking and entering" and punishments vary according to structure.
In Maryland, under title 6, subtitle 2 of the criminal law code, the crime of burglary is divided into four degrees. The first three degrees are felonies, while fourth-degree burglary is a misdemeanor. Breaking and entering into a dwelling with intent to commit theft or a crime of violence is first-degree burglary. Breaking and entering into a "storehouse" (a structure other than a dwelling, also including watercraft, aircraft, railroad cars, and vessels) with intent to commit theft, arson, or a crime of violence is second-degree burglary. Third-degree burglary is defined as breaking and entering into a dwelling with intent to commit a crime.
Simple breaking and entering into a dwelling or storehouse without specific intent to commit an additional crime is fourth-degree burglary. This degree also includes two other offenses that do not have breaking and entering as an element: Being in or on the yard, garden, or other property of a storehouse or dwelling with the intent to commit theft, or possession of burglar's tools with the intent to use them in a burglary offense.
In the criminal code of New Hampshire, "A person is guilty of burglary if he enters a building or occupied structure, or separately secured or occupied section thereof, with purpose to commit a crime therein, unless the premises are at the time open to the public or the actor is licensed or privileged to enter."
Under the penal law in New York, burglary is always a felony, even in the third degree. It is more serious if the perpetrator uses what appears to be a dangerous weapon, or if he or she enters a dwelling.
In Pennsylvania, it is a defense to prosecution if the building or structure in question has been abandoned.
In Virginia, there are degrees of burglary, described as "Common Law Burglary" and "Statutory Burglary."
Common Law Burglary is defined as: if any person breaks and enters the dwelling of another, in the nighttime, with intent to commit a felony or any larceny (theft of less than $200) therein, he shall be guilty of burglary, punishable as a Class 3 felony; provided, however, that if such person was armed with a deadly weapon at the time of such entry, he shall be guilty of a Class 2 felony.
Statutory Burglary is defined as: If any person in the nighttime enters without breaking, or in the daytime breaks and enters or enters and conceals himself in a dwelling house or an adjoining, occupied outhouse, or, in the nighttime enters without breaking or at any time breaks and enters or enters and conceals himself in any office, shop, manufactured home, storehouse, warehouse, banking house, church or other house, or any ship, vessel or river craft, or any railroad car, or any automobile, truck, or trailer, if such automobile, truck or trailer is used as a dwelling or place of human habitation, with intent to commit murder, rape, robbery or arson in violation of Virginia State code section 18.2-77, 18.2-79, or 18.2-80, shall be deemed guilty of statutory burglary, which offense shall be a class 3 felony. However, if such person was armed with a deadly weapon at the time of such entry, he shall be guilty of a class 2 felony.
Additionally, if any person commits any of the acts mentioned in VA state code section 18.2-90 with intent to commit larceny, or any felony other than murder, rape, robbery or arson in violation of VA state code section 18.2-77, 18.2-79, or 18.2-80, or if any person commits any acts mentioned in 18.2-89 or 18.2-90 with intent to commit assault and battery, he shall be guilty of statutory burglary, punishable by confinement in a state correctional facility for not less than one nor more than twenty years or, in the discretion of the jury or the court trying the case without a jury, confinement in jail for a period not exceeding twelve months or a fine of not more than $2,500, either or both. However, if the person was armed with a deadly weapon at the time of such entry, he shall be guilty of a Class 2 felony.
Finally, if any person break and enter a dwelling house while said dwelling is occupied, either in the day or nighttime, with intent to commit any misdemeanor except assault and battery or trespass (which falls under the previous paragraph), shall be guilty of a class 6 felony. However, if the person was armed with a deadly weapon at the time of such entry, he shall be guilty of a class 2 felony.
In Wisconsin, burglary is committed by one who enters a building without consent and with intent to steal or to commit another felony. Burglary may also be committed by entry to a locked truck or trailer or a ship. The crime of burglary is treated as being more serious if the burglar is armed with a dangerous weapon when the burglary is committed or arms himself/herself during the commission of the burglary.
England and Wales
Burglary is defined by section 9 of the Theft Act 1968
The Theft Act 1968 is an Act of the Parliament of the United Kingdom. It creates a number of offences against property in England and Wales.On 15 January 2007 the Fraud Act 2006 came into force, redefining most of the offences of deception.-History:...
which created two variants:
The offence is defined in similar terms to England and Wales by the
Theft Act (Northern Ireland) 1969.
Under Scots law
Scots law is the legal system of Scotland. It is considered a hybrid or mixed legal system as it traces its roots to a number of different historical sources. With English law and Northern Irish law it forms the legal system of the United Kingdom; it shares with the two other systems some...
, the crime of burglary does not exist. Instead theft by housebreaking
covers theft where the security of the building is overcome. It does not include any other aspect of burglary found in England and Wales. It is a crime usually prosecuted under solemn procedure
An indictment , in the common-law legal system, is a formal accusation that a person has committed a crime. In jurisdictions that maintain the concept of felonies, the serious criminal offence is a felony; jurisdictions that lack the concept of felonies often use that of an indictable offence—an...
in a superiour court. Another common law crime still used is Hamesukin which covers forced entry into a building where a serious assault on the occupant takes place. Common law
Common law is law developed by judges through decisions of courts and similar tribunals rather than through legislative statutes or executive branch action...
crimes in Scotland are gradually being replaced by statutes.
In Canada, burglary is labelled as "Breaking and Entering" under section 348 of the Criminal Code
A criminal code is a document which compiles all, or a significant amount of, a particular jurisdiction's criminal law...
and is a hybrid offence
A hybrid offence, dual offence, Crown option offence, dual procedure offence, or wobbler are the special class offences in the common law jurisdictions where the case may be prosecuted either summarily or as indictment...
. Breaking and entering is defined as trespassing with intent to commit an indictable offence
In many common law jurisdictions , an indictable offence is an offence which can only be tried on an indictment after a preliminary hearing to determine whether there is a prima facie case to answer or by a grand jury...
. The crime is commonly referred to in Canada as "break and enter" which in turn is often shortened to "B and E".
In Sweden, burglary does not exist as an offence in itself, instead there are two available offences. If a person simply breaks into any premise, he is technically guilty of either unlawful intrusion
or breach of domiciliary peace
), depending on the premise in question. Breach of domiciliary peace is only applicable when a person "unlawfully intrudes or remains where another has his living quarters"
The only punishment available for any of these offences is fines, unless the offence is considered gross. In that case, the maximum punishment is two years in prison.
However, if the person who has forced himself into a house, steals anything
(literally "takes what belongs to another with intent to acquire it"
), he is guilty of (ordinary) theft
). However, the section regarding gross theft
(Chapter 6, 4s of the Penal Code, grov stöld
) states "in assessing whether the crime is gross, special consideration shall be given to whether the unlawful appropriation took place after intrusion into a dwelling."
For theft, the punishment is imprisonment of at most two years, while gross theft carries a punishment of between six months and six years.
As in Sweden, there is no crime of burglary as such in Finland. In the case of breaking and entering, the Finnish penal code states that
A person who unlawfully
(1) enters domestic premises by force, stealth or deception, or hides or stays in
such premises [...]
shall be sentenced for invasion of domestic premises to a fine or to imprisonment for at most six months.
However, if theft is committed during unlawful entering, then a person is guilty of theft or aggravated theft depending on the circumstances of the felony.
(1) If in the theft the offender breaks into an occupied residence,
and the theft is aggravated also when assessed as a whole, the offender shall be
sentenced for aggravated theft to imprisonment for at least four months and at most
- R v Collins
R v Collins 1973 QB 100 is a case decided by the Court of Appeal of England and Wales which examined the meaning of "enters as a trespasser" in the definition of burglary...
Trespass is an area of tort law broadly divided into three groups: trespass to the person, trespass to chattels and trespass to land.Trespass to the person, historically involved six separate trespasses: threats, assault, battery, wounding, mayhem, and maiming...
- Home Invasion
Home Invasion is the fifth solo album by Ice-T. Released in 1993, the album Home Invasion is the fifth solo album by Ice-T. Released in 1993, the album Home Invasion is the fifth solo album by Ice-T. Released in 1993, the album (which was originally set to be released in 1992 under the deal with...
- Watergate burglaries
- "Cat burglar" at Wiktionary
Wiktionary is a multilingual, web-based project to create a free content dictionary, available in 158 languages... | <urn:uuid:57d7671f-998e-45cc-be3c-f9daa3b6982a> | CC-MAIN-2013-20 | http://www.absoluteastronomy.com/topics/Burglary | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948745 | 4,998 | 3.109375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Cleaner Water: North Carolina's Straight-Pipe Elimination Project
by Fred D. Baldwin
Some years ago, William and Elizabeth Thomas tried unsuccessfully to install a properly designed septic system that would replace a four-inch pipe draining household wastewater straight into a little creek a few yards behind their home.
"I scrounged up enough money to put one in," William Thomas says. Spreading his hands about two feet apart, he adds, "But I didn't get down this far until we hit water."
The Thomases live on a small hillside lot in a rural area of Madison County, North Carolina. Their situation is similar to that of many rural Appalachian families who for one reason or another-money, the lay of the land, or both-live in older homes with inadequate septic systems. By the end of the year, however, they and many other Madison County residents will have new septic systems in place, thanks to a county program backed by an impressive team of state, federal, and local partners ranging from area conservation groups to the Appalachian Regional Commission (ARC).
The genesis of the program goes back to 1995, when Governor James B. Hunt created the Year of the Mountains Commission to assess current and future issues affecting North Carolina's western mountain communities. To protect and improve water quality, the commission recommended that, in addition to reducing mine drainage and agricultural runoff, the state Department of Environment and Natural Resources (DENR) be directed to "aggressively pursue a program to eliminate the practice of 'straight-piping.' " For years, decades even, it had been politically easier to ignore this issue. The commission pointed out that the 1990 Census of housing showed that nearly 50,000 households in North Carolina did not have connections to either municipal sewage systems or adequate septic systems. This was true not only in mountainous areas, but also in low-income communities across the state. Some of these households were draining "black water," which includes raw sewage, into creeks or streams; others were piping toilet wastes to a septic tank but straight-piping soapy and bacteria-laden "gray water" from sinks, baths, and dishwashers. Still other households were relying on septic systems built before the installation of a dishwasher or a second bathroom; these older systems were now prone to backups or leaks.
As early as 1958, the state took the first of many steps to regulate or eliminate straight-piping. This and subsequent measures were loosely enforced. In 1996, Governor Hunt established a goal to eliminate straight-piping of untreated wastewater into western North Carolina's rivers and streams by the end of the decade. "Every child should grow up in a community with clean, safe water," Hunt says.
That same year, in response to the Year of the Mountains Commission report, the North Carolina General Assembly created the Wastewater Discharge Elimination (WaDE) program, which differed significantly from earlier, essentially punitive measures. The new law provided a temporary "amnesty" for households reporting conditions violating state environmental health codes and, more important, provided technical assistance to communities wishing to take advantage of the state's Clean Water Management Trust Fund (a fund established to finance projects that address water pollution problems). Terrell Jones, the WaDE team leader, praises Madison County for being the first county to conduct a wastewater discharge survey under the new law, and he emphasizes that straight-piping, especially of gray water, is a statewide problem.
Driving around Madison County, you see why wastewater problems are costly to correct. Roads wind up and down past rocky, fast-flowing streams and creeks that drain into the French Broad River, where white-water rafters come for excitement. Houses on back roads are far apart but near streams. If there's enough land suitable for a septic tank and drainage field downhill from one of those houses, a conventional septic system can be installed for about $2,000. But if wastewater has to be pumped uphill, the cost can easily reach $8,000 or more. This explains why punitive measures against straight-piping have been loosely enforced. Local officials know that even $2,000 is beyond the means of many families. Who would tell cash-strapped people-more often than not, elderly-that they had to sell or abandon their home or family farmstead because of a housing code violation?
A Growing List of Partners
Madison County officials decided to take the lead on a positive approach. They first turned to the Land-of-Sky Regional Council, an Asheville-based local development district that represents 19 governmental units in four Appalachian counties, including Madison. The Land-of-Sky staff took advantage of an infrastructure demonstration grant from the North Carolina Division of Community Assistance and funds from ARC to begin a wastewater survey and community-planning process. From that point, the list of partners grew rapidly. They included the DENR WaDE program, U.S. Department of Agriculture (USDA) Rural Development, the North Carolina Rural Communities Assistance Project, the state-funded Clean Water Management Trust Fund, the Pigeon River Fund, the Community Foundation of Western North Carolina, the Western North Carolina Housing Partnership, and ARC.
The Madison County Health Department and Land-of-Sky took the lead locally, working with a grassroots planning committee representing a broad base of organizations, outdoor sports enthusiasts, environmental groups, and private-property owners (some of them living in homes with straight-piping). Among the decisions: to test every building in Madison County not connected to a municipal system, not just the older units. That way, no one would feel singled out, and all faulty septic systems would be spotted. "It's made the process go slower," says Heather Bullock, the Land-of-Sky regional planner assisting the project, "but it's made it better."
Not all that much slower, either. By the end of September 1999, health department employees had surveyed 4,594 of an estimated 10,000 houses in Madison County. Where plumbing configurations weren't self-evident, the surveyors dropped dye tablets into sinks and toilets (different colors for each) to see if colored water emerged into a stream or septic tank area. The survey identified 945 noncompliant systems (20 percent of the total). Of these, 258 were straight-piping black water; 535, gray water. Another 116 had failing septic systems, and 36 had only outhouses. The incidence of problems closely tracked household income.
A welcome surprise, says Kenneth D. Ring, health director of the Madison County Health Department, was how well the inspectors were received. "The cooperation has been overwhelming," he says.
Although most people with poor systems knew they had problems and wanted to correct them, some knew little or nothing about the design of their systems. For example, Ronnie Ledford, the chief building inspector and environmental health supervisor on the health department staff, recalls a visit with a man living in a mobile home. "He thought he had a septic system," Ledford says. "He had a 55-gallon drum. We found a 'black' pipe draining into a ditch line. He was very shocked. It took him some time to get his money together, but he took care of it himself."
The problem all along, however, had been that too few people had been able to get the money together to take care of things for themselves. All the agencies involved chipped in to the extent their guidelines permitted. A few septic systems were renovated with Community Development Block Grant funds, but that program's rules require that any unit being renovated in any way be brought up to code in all respects-prohibitively expensive for people in housing with other problems. The USDA provided Section 504 loans and grants for eligible elderly, low-income home owners. The largest pool of money came from the Clean Water Management Trust Fund, which awarded Madison County $750,000 for a revolving loan and grant fund, plus funds for administration.
Even so, setting up a workable program wasn't easy. Many low-income area residents had poor credit ratings and little collateral with which to guarantee loans. If the program's loan requirements were too tight, applicants wouldn't qualify for loans, and pollutants would continue to drain into streams; too loose, and the loan fund itself would soon drain away.
Help with Funding
The Madison County Revolving Loan and Grant Program was established with these concerns in mind. The program includes both grants and loans, the ratios based on household income. In determining credit-worthiness, the program coordinator looks at whether difficulties were caused by circumstances beyond the family's control, such as a medical emergency. If a loan still looks too risky, the applicant is referred to an educational program run by the nonprofit Consumer Credit Counseling Service of Western North Carolina, in Asheville.
When it appeared that Madison County might lack the legal flexibility for making the needed loans, the partners turned for help to the Center for Community Self-Help, a statewide nonprofit that offers loans as a community development tool. Self-Help agreed to make the loans from its funds, using the county's fund as its collateral. This somewhat complicated arrangement gives everyone involved some freedom to maneuver. The default rate is likely to be substantially higher than a bank could tolerate, but Self-Help makes sure applicants take the loan seriously.
"The goal is to clean up the water," explains Tom Barefoot, the USDA Rural Development manager for the area. "We're trying to build on what it takes to get people in [the program], not on possible failure."
"This is a multi-year program," adds Marc Hunt, a loan officer with Self-Help's western North Carolina regional office. "We say, ' Work on your credit and get back on the list in a few months.' We don't want to enable consumers to develop bad habits."
Contracts were let this fall for installing the first batch of new septic systems (not counting a handful of early projects). By the end of the year 2000, Madison County hopes to have replaced 130 straight-pipes.
The benefits will be both tangible and intangible. First of all, the streams of Madison County, some of which flow into a river providing drinking water for towns downstream, will be cleaner. That has important health and economic benefits for an area increasingly attractive to both outdoor recreationalists and people planning to build homes away from cities. Ironically, in some jurisdictions, worries about "image" have been a factor in unwillingness to deal more aggressively with straight-piping. "Madison County recognized an opportunity," says Barefoot, "and they had the courage to act. It's not always a politically safe decision." Marc Hunt agrees: "Many rural counties have similar situations. Any one of them could have done it, but Madison County took the lead."
Governor Hunt also has praise for the county. "I am proud of everyone involved in Madison County's work to find and fix straight-piping problems in a cooperative effort. This will only help our economic development, our public health, and our environment. But most of all, we're helping to make sure our children can grow up in a community with clean, safe water."
The various public and private partners involved hope that Madison County's experience will become a model for other counties. There have been expressions of interest from county officials inside and outside the Appalachian areas of the state.
"It's really incredible to me," says Jody Lovelace, a community development specialist with USDA Rural Development, "how we've been able to pull this together. Everyone said, 'Let's not just clean up the water. Let's help these folks develop financial responsibility and financial pride.' " For the individual households involved, there are direct benefits. Some will have a chance to build or improve a credit history. Most will benefit at least somewhat from improved property values. All, of course, will be glad to be rid of septic systems that back up or of the unpleasant and potentially dangerous discharge of wastewater of any kind near their homes. "It's a health hazard," Elizabeth Thomas says.
The Thomases, who were defeated by waterlogged soil when they tried to replace their old system years earlier, this time received help from a neighbor. He agreed to let them install a septic tank on his vacant field, downhill and off to one side of their house.
"He's a good neighbor," William Thomas says.
That pretty much sums up what Madison County, Land-of-Sky, and their various partners have accomplished. The straight-pipe elimination project began with a blue-ribbon commission's straight talk about an old problem. It's grown into a program that gives everyone involved-from agency officials to rural people living in houses built by their grandparents many decades ago-a chance to prove that they can be good neighbors to each other.
Fred D. Baldwin is a freelance writer based in Carlisle, Pennsylvania. | <urn:uuid:fefff558-6fce-48fd-ba57-d053c7be5dc4> | CC-MAIN-2013-20 | http://www.arc.gov/magazine/articles.asp?ARTICLE_ID=94&F_ISSUE_ID=&F_CATEGORY_ID=16 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.969787 | 2,651 | 3.4375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
The Search for God Is Never a Search in the Dark
Although it is a dangerous business to select a passage from a Shakespeare play and hold it up as a mouthpiece of the poet, there are nevertheless a few key passages that seem to express the bard's own thoughts on the creative process. In Act V, Scene I of A Midsummer Night's Dream, for example, Theseus, King of Athens, offers a mini-dissertation on the surprising similarities between lunatics, lovers, and poets. Of the poet's art in particular, he says: "Such tricks hath strong imagination, / That, if it would but apprehend some joy, / It comprehends some bringer of that joy."
Dictionaries, even the Oxford English Dictionary, offer little help at understanding Shakespeare's distinction between apprehend and comprehend. If we read the lines, however, in the context of Theseus's full speech, the following distinction emerges: To apprehend is to perceive some force or feeling that transcends our ordinary human faculties. To comprehend, by contrast, is to create some rational or artistic framework for making sense of, and thus "containing," the very force or feeling which seems to defy description. Thus, in the poet's case, an apprehended feeling of unbounded, free-floating joy is comprehended, through the device of poem-writing, into a single, concrete bringer of that joy.
In The Mystery of God: Theology for Knowing the Unknowable (Baker Academic), Steven Boyer and Christopher Hall seem to translate (unconsciously) Theseus's distinction into the realm of theology. Too often, they argue, theology attempts to put into concrete words and images an experience that is finally too large for us to take in—and not just quantitatively (there is too much of God for us to grasp), but qualitatively as well (as Uncreated Creator, God is wholly other than his creatures and cannot be contained in logical categories). When we try to force the essence of the eternal, omnipresent Creator into our own structures of thought, we often find that we have not so much explained him as explained him away. At this juncture, one might expect Boyer and Hall either to treat theology as a branch of subjective poetry—beautiful, perhaps even awe inspiring, but incapable of expressing universal truth—or to give up on putting into words (comprehending) that glory, majesty, and holiness of God that we can only barely apprehend.
Thankfully, they resist both options, instead offering a different distinction that maintains both the otherness and mystery of God and our capacity, through theological exploration, to reliably know his nature. Too often, modern theologians, especially pluralists, think of God as an "investigative mystery." If we are to understand him, we must amass scattered clues and then figure out how they might fit together. The Bible and traditional Christianity, in contrast, present God as a "revelational mystery." God has revealed himself to us through the Law and Prophets, the Old and New Testament, and Christ himself. We haven't been left to search in the dark. Still, because the God who reveals himself is beyond our comprehension, the mystery remains and cannot be fully contained in doctrinal statements. | <urn:uuid:8fbe4641-1032-43fa-a4ff-04b3228c4b53> | CC-MAIN-2013-20 | http://www.christianitytoday.com/ct/2013/january-web-only/search-for-god-is-never-search-in-dark.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.959901 | 669 | 2.515625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Last Updated on Wednesday, 13 June 2012 09:33 Posted by Clash Wednesday, 13 June 2012 08:45
Professor Ian Juby explains science/origins better than anyone I have seen. He has produced a 12-hour exhaustive look at the Creation/Evolution debate that can be watched in 30-minute segments on his website, Creation Week.
The video series features tons of props, exhibits, photographs, artifacts, and experiments that explain science and origins and the age of the earth in an extremely compelling manner. The segment on dinosaur tracks with human tracks is jaw-dropping!
He also has The traveling Creation Museum, Creation Exhibition, where you can see the scientific evidence and judge for yourself whether the evidence supports the Biblical account of Creation, or evolution. Showing scientific evidence from the dinosaurs, biology, geology, and fossils, as well as a model of Noah’s ark and one of the largest collections on display in the world of fossil human and dinosaur footprints found together, and dinosaurs in ancient art. The footprints show that humans lived right alongside (and according to the evolutionary timescale, even before) the dinosaurs.
You can watch his videos here: Creation Week | <urn:uuid:21541c53-da5b-482a-9961-d6362c80de7d> | CC-MAIN-2013-20 | http://www.clashentertainment.com/word/42-the-word/6549-complete-creation | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.953932 | 241 | 2.53125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Saturday, May 18, 2013
Fun in the Sun: Summer Safety Tips to Avoid Injuries
Summertime outdoor activities can be fun, but it’s important to follow good summer safety habits to avoid injuries.
Swimming pools are a perfect way to cool off from the hot sun, but safety comes first when enjoying the water.
According to National SAFE Kids , drowning is:
Keep the following safety precautions in mind:
Adult supervision is key. An adult should always be able to see and hear children who are swimming and be close enough to intervene in case of an emergency.
The pool environment is important. Separate home pools from the rest of the property to prevent children from walking directly into the pool area. If possible, use a fence or other barrier at least 4 feet high with openings no more than 4 inches apart and extend it completely around the pool.
Teach children to swim. Parents should enroll children before the age of eight in swimming lessons with a certified instructor.
Outdoor cooking is popular during the summer months. Unfortunately, grilling can be dangerous.
According to the most recent statistics from the National Fire Protection Association, the improper use of grills causes:
To keep families and their homes out of harm’s way, the Home Safety Council recommends the following safe grilling techniques:
Be cautious of nearby tree branches or other items which could catch on fire.
Know how to use a fire extinguisher and keep one handy.
Keep your grill at least 10 feet from a home or building.
Leave a grill unattended, especially when small children and pets are present.
Attempt to restart a charcoal flame by adding additional lighter fluid.
Keep filled propane tanks in a hot car or truck.
Prolonged unprotected exposure to the sun’s ultraviolet (UV) rays damages skin and eyes.
According to the American Cancer Society:
A majority of the more than 2 million cases of non-melanoma skin cancer diagnosed annually are sun related.
An estimated 68,130 new melanomas are diagnosed each year.
To lower the risk of skin cancer practice the following sun safety tips recommended by the American Cancer Society:
Avoid intense sunlight for long periods of time
Seek shade between 10 a.m. and 4 p.m. when the sun’s rays are strongest.
Wear a shirt to guard against excessive sun exposure.
Apply sunscreen with a sun protection factor (SPF) of 30 or higher.
Wear a hat to shade your face, ears and neck.
Wear sunglasses to protect your eyes and surrounding skin from UV ray damage.
Relaxing in the pool, grilling outdoors and enjoying the sun are all great summer activities, but don’t let summer fun turn dangerous – Practice good summer safety. | <urn:uuid:d95c59ae-3f06-4251-8bea-4995535e319e> | CC-MAIN-2013-20 | http://www.countryfinancial.com/nick.roesch/articles/keepingYourFamilySafe/summerSafety | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.904261 | 577 | 2.71875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
A huge study of millions of kids revealed for the first time the true measure of type 2 diabetes in children in the United States. The results appeared in the June 27, 2007 Journal of the American Medical Association. In the recent past, type 2 diabetes was called adult-onset diabetes because this obesity-related condition was a problem of the middle-aged and the elderly. It usually takes years of unhealthy eating to tip someone into this type of diabetes. It was rarely seen before age 30 or even 40. Sadly, today we do see type 2 diabetes in children.
A family I saw yesterday had a 10-year-old who already had it. Pediatricians across the country are having similar experiences. But until this significant study none of us knew exactly how large the problem had become.
Stunningly, 22 percent of all diabetes diagnosed in US children was type 2. And in kids aged 10-19, type 2 diabetes was more common than the autoimmune type 1 (previously called juvenile diabetes) – even though type 1 has also been increasing over the last 2 decades around the world.
The consequences of overfed, undernourished, inactive lifestyles have reached from middle age into childhood. The message is clear: it’s time to feed our kids healthy amounts of healthy foods and to ensure that they get a liberal dose of active play every day.
The Writing Group for the SEARCH for Diabetes in Youth Study Group. “Incidence of Diabetes in Youth in the United States.” JAMA 2007, 297, pp. 2716-2724. | <urn:uuid:c8960477-a48c-4f93-b3e7-27b68bbb530b> | CC-MAIN-2013-20 | http://www.drgreene.com/adult-diabetes-kids/59/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.968843 | 322 | 3.1875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
of lakes dot the marshy Arctic tundra regions. Now, in the latest addition to
the growing body of evidence that global warming is significantly affecting
the Arctic, two recent studies suggest that thawing permafrost is the cause
of two seemingly contradictory observations both rapidly growing and
rapidly shrinking lakes.
Thawing permafrost is altering the lakes that dominate Arctic landscapes, such as this one in western Siberia. Courtesy of Laurence C. Smith.
The first study is a historical analysis of changes to 10,000 Siberian lakes over the past 30 years, a period of warming air and soil temperatures. Using satellite images, Laurence Smith, a geographer at the University of California, Los Angeles, and colleagues found that, since the early 1970s, 125 Siberian lakes vanished completely, and those that remain averaged a 6 percent loss in surface area, a total of 930 square kilometers.
They report in the June 3 Science that the spatial pattern of lake disappearance suggests that the lakes drained away when the permafrost below them thawed, allowing the lake water to seep down into the groundwater. However, the team also found that lakes in northwestern Siberia actually grew by 12 percent, and 50 new lakes formed. Both of the rapid changes are due to warming, they say, and if the warming trend continues, the northern lakes will eventually shrink as well.
These two processes are similar, in that were witnessing permafrost degradation in both regions, says co-author Larry Hinzman, a hydrologist at the University of Alaska in Fairbanks, who in previous studies documented shrinking lakes in southern Alaska. In the warmer, southern areas, we get groundwater infiltration, but in the northern areas, where the permafrost is thicker and colder, its going to take much, much longer for that to occur. So instead of seeing lakes shrinking there, were seeing lakes growing.
That finding is consistent with the second study, which focused on a set of unusually oriented, rapidly growing lakes in northern Alaska, an area of continuous permafrost. Jon Pelletier, a geomorphologist at the University of Arizona in Tucson, reports in the June 30 Journal of Geophysical Research Earth Surface that the odd alignment of the lakes is caused not by wind direction but by permafrost melting faster at the downhill end of the lake, which has shallower banks.
Since the 1950s, scientists have attributed the odd alignment of the egg-shaped lakes to winds blowing perpendicularly to the long axes of the lakes, which then set up currents that caused waves to break at the northwest and southeast ends, thus preferentially eroding them. The prevailing wind direction idea has been around so long that we dont even think about it, Smith says, but Jons [Pelletier] work is challenging that. Its a very interesting paper.
Wind-driven erosion occurs in the Great Lakes, but at rates of about a meter a year. The Alaskan oriented thaw lakes grow at rates of 5 meters or more per year. Pelletier says this rate difference suggests a different process is at work.
According to the model, the direction and speed of growth depend on where and how quickly the permafrost thaws, which is determined by two factors: how the water table intersects the slope of the landscape and how fast the summer temperature increases. If the permafrost thaws abruptly, the shorter, downhill bank is more likely to thaw first. The soggy soil slumps into the water, and the perimeter of the lake is enlarged. Its not just the [global] warming trend, but also how quickly the warming takes place in the summertime, Pelletier says.
Hinzman says that the lakes are just one part of the Arctic water cycle, which has seen an increasing number of perturbations in recent years. The whole hydrologic cycle is changing and this is just one component of that.
Understanding how the hydrologic cycle is changing is important, Hinzman says, because the amount of freshwater runoff into the Arctic Ocean impacts global ocean circulation and the amount of sea ice, thus affecting climate worldwide. If global warming continues to the point where permafrost goes away, there will be fewer lakes, Smith says. And a drier, less marshy Arctic could alter weather patterns and ecosystems, researchers say, affecting everything from the subsistence lifestyle of native people to the hazard of fire on the tundra.
Geotimes contributing writer
Back to top | <urn:uuid:5fdf99e1-ac10-4897-aae4-baeb9600a36e> | CC-MAIN-2013-20 | http://www.geotimes.org/sept05/NN_arcticlakes.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945934 | 921 | 3.59375 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
On this day in 1863, Union General Ulysses S. Grant breaks the siege of Chattanooga, Tennessee, in stunning fashion by routing the Confederates under General Braxton Bragg at Missionary Ridge.
For two months following the Battle of Chattanooga, the Confederates had kept the Union army bottled up inside a tight semicircle around Chattanooga. When Grant arrived in October, however, he immediately reversed the defensive posture of his army. After opening a supply line by driving the Confederates away from the Tennessee River in late October, Grant prepared for a major offensive in late November. It was launched on November 23 when he sent General George Thomas to probe the center of the Confederate line. This simple plan turned into a complete victory, and the Rebels retreated higher up Missionary Ridge. On November 24, the Yankees captured Lookout Mountain on the extreme right of the Union lines, and this set the stage for the Battle of Missionary Ridge.
The attack took place in three parts. On the Union left, General William T. Sherman attacked troops under Patrick Cleburne at Tunnel Hill, an extension of Missionary Ridge. In difficult fighting, Cleburne managed to hold the hill. On the other end of the Union lines, General Joseph Hooker was advancing slowly from Lookout Mountain, and his force had little impact on the battle. It was at the center that the Union achieved its greatest success. The soldiers on both sides received confusing orders. Some Union troops thought they were only supposed to take the rifle pits at the base of the ridge, while others understood that they were to advance to the top. Some of the Confederates heard that they were to hold the pits, while others thought they were to retreat to the top of Missionary Ridge. Furthermore, poor placement of Confederate trenches on the top of the ridge made it difficult to fire at the advancing Union troops without hitting their own men, who were retreating from the rifle pits. The result was that the attack on the Confederate center turned into a major Union victory. After the center collapsed, the Confederate troops retreated on November 26, and Bragg pulled his troops away from Chattanooga. He resigned shortly thereafter, having lost the confidence of his army.
The Confederates suffered some 6,600 men killed, wounded, and missing, and the Union lost around 5,800. Grant missed an opportunity to destroy the Confederate army when he chose not to pursue the retreating Rebels, but Chattanooga was secured. Sherman resumed the attack in the spring after Grant was promoted to general in chief of all Federal forces. | <urn:uuid:7b1a4a78-5b08-48b8-86b9-bcbde260344d> | CC-MAIN-2013-20 | http://www.history.com/this-day-in-history/-battle-of-missionary-ridge?catId=2 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.975761 | 513 | 4.03125 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
An excerpt from www.HouseOfNames.com archives copyright © 2000 - 2013
Where did the Italian Ciccaroni family come from? What is the Italian Ciccaroni family crest and coat of arms? When did the Ciccaroni family first arrive in the United States? Where did the various branches of the family go? What is the Ciccaroni family history?The surname Ciccaroni came from the personal name Cicco, which is found in southern Italy and the Venetian region as a popular and affectionate form of the name Francesco.
In comparison with other European surnames, Italian surnames have a surprising number of forms. They reflect the regional variations and the many dialects of the Italian language, each with its own distinctive features. For example, in Northern Italy the typical Italian surname suffix is "i", whereas in Southern Italy it is "o". Additionally, spelling changes frequently occurred because medieval scribes and church officials often spelled names as they sounded rather than according to any specific spelling rules. The spelling variations in the name Ciccaroni include Cicco, Cicchi, De Cicco, D'Accico, Daccico, Cicchello, Cicchelli, Cicchella, Ciccarello, Ciccarelli, Ciccarella, Ciccariello, Cicchetto, Cicchetti, Cicchitto, Cicchino, Cicchini, Ciccolo, Ciccolino, Ciccolini, Coccolone, Coccoloni, Ciccolella, Ciccotto, Ciccotti, Ciccotta, Cicconi, Ciccone, Ciccaglione, Ciccaglioni, Ciccalotti, Ciccarese, Ciccaresi, Ciccarino, Ciccarini, Ciccarone, Ciccaroni, Cichetti, Cicutto, Cicala, Cicconetti, Cicalotti, Ciceri, Cicero, Cicera, Cicinelli, Cicogna, Ciconi and many more.
First found in Piedmont. Earliest records date back to the year 1112, when Pompeo Cicala was a valiant soldier in the city of Genoa.
This web page shows only a small excerpt of our Ciccaroni research. Another 262 words(19 lines of text) covering the years 1493, 1623, 1673, 1686, 1751, 1780, and 1804 are included under the topic Early Ciccaroni History in all our PDF Extended History products.
Another 162 words(12 lines of text) are included under the topic Early Ciccaroni Notables in all our PDF Extended History products.
Early North American records indicate many people bearing the name Ciccaroni were among those contributors: Liberato Diciocco, age 27, who arrived at New York on Dec. 20, 1882, aboard the "Italia"; Bernardo Cichero, who arrived in Philadelphia, Pennsylvania, in 1855.
The Ciccaroni Family Crest was acquired from the Houseofnames.com archives. The Ciccaroni Family Crest was drawn according to heraldic standards based on published blazons. We generally include the oldest published family crest once associated with each surname.
This page was last modified on 14 January 2011 at 09:59.
houseofnames.com is an internet property owned by Swyrich Corporation. | <urn:uuid:96354a77-2136-4431-8cbb-92e896f424aa> | CC-MAIN-2013-20 | http://www.houseofnames.com/ciccaroni-family-crest?a=54323-224 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.902765 | 691 | 2.734375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
First-Hand:The Foundation of Digital Television: the origins of the 4:2:2 component digital standard
Contributed by Stanley Baron, IEEE Life Fellow
By the late 1970's, the application of digital technology in television production was widespread. A number of digital television products had become available for use in professional television production. These included graphics generators, recursive filters (noise reducers), time base correctors and synchronizers, standards converters, amongst others.
However, each manufacturer had adopted a unique digital interface, and this meant that these digital devices, when formed into a workable production system, had to be interfaced at the analog level, thereby forfeiting many of the advantages of digital processing.
Most broadcasters in Europe and Asia employed television systems based on 625/50 scanning (625 lines per frame, 50 fields per second), with the PAL color encoding system used in much of Western Europe, Australia, and Asia, while France, the Soviet Union, Eastern Europe, and China used variations of the SECAM color encoding system. There were differences in luminance bandwidth: 5.0 MHz for B/G PAL, 5.5 MHz for PAL in the UK, and nominally 6 MHz for SECAM. There were also legacy monochrome systems, such as 405/50 scanning in the UK and the 819/50 system in France. The color television system that was dominant in the Americas, Japan, and South Korea was based on 525/60 scanning, 4.2 MHz luminance bandwidth, and the NTSC color standard.
NTSC and PAL color coding are both linear processes. Therefore, analog signals in the NTSC format could be mixed and edited during studio processing, provided that color sub carrier phase relationships were maintained. The same was true for production facilities based on the PAL system. In analog NTSC and PAL studios it was normal to code video to composite form as early as possible in the signal chain so that each signal required only one wire for distribution rather than the three needed for RGB or YUV component signals. The poor stability of analog circuitry meant that matching separate channel RGB or YUV component signals was impractical except in very limited areas. SECAM employed frequency modulated coding of the color information, which did not allow any processing of composite signals, so the very robust SECAM composite signal was used only on videotape recorders and point to point links, with decoding to component signals for mixing and editing. Some SECAM broadcasters avoided the problem by operating their studios in PAL and recoding to SECAM for transmission.
The international community recognized that the world community would be best served if there could be an agreement on a single production or studio digital interface standard regardless of which color standard (525 line NTSC, 625 line PAL, or 625 line SECAM) was employed for transmission. The cost of implementation of digital technology was seen as directly connected to the production volume; the higher the volume, the lower the cost to the end user, in this case, the broadcasting community.
Work on determining a suitable standard was organized by the Society of Motion Picture and Television Engineers (SMPTE) on behalf of the 525/60 broadcasting community and the European Broadcasting Union (EBU) on behalf of the 625/50 broadcasting community.
In 1982, the international community reached agreement on a common 4:2:2 Component Digital Television Standard. This standard, as documented in SMPTE 125, several EBU Recommendations, and ITU-R Recommendation 601, was the first international standard adopted for interfacing equipment directly in the digital domain, avoiding the need to first restore the signal to an analog format.
The interface standard was designed so that the basic parameter values provided would work equally well in both 525 line/60 Hz and 625 line/50 Hz television production environments. The standard was developed in a remarkably short time, considering its pioneering scope, as the world wide television community recognized the urgent need for a solid basis for the development of an all digital television production system. A component-based (Y, R-Y, B-Y) system based on a luminance (Y) sampling frequency of 13.5 MHz was first proposed in February 1980; the world television community essentially agreed to proceed on a component based system in September 1980 at the IBC; a group of manufacturers supplied devices incorporating the proposed interface at a SMPTE sponsored test demonstration in San Francisco in February 1981; most parameter values were essentially agreed to by March 1981; and the ITU-R (then CCIR) Plenary Assembly adopted the standard in February 1982.
What follows is an overview of this historic achievement, providing a history of the standard's origins, explaining how the standard came into being, why various parameter values were chosen, the process that led the world community to an agreement, and how the 4:2:2 standard led to today's digital high definition production standards and digital broadcasting standards.
It is understood that digital processing of any signal requires that the sample locations be clearly defined in time and space and, for television, processing is simplified if the samples are aligned so that they are line, field, and frame position repetitive, yielding an orthogonal (rectangular grid) sampling pattern.
While the NTSC system color sub carrier frequency (fsc) was related to the horizontal line frequency (fH) by a simple ratio of integers [fsc = (m/n) x fH], lending itself to orthogonal sampling, the PAL system color sub carrier employed a field frequency offset and the SECAM color system employed frequency modulation of the color subcarrier, which made sampling the color information contained within those systems a more difficult challenge. Further, since some European nations had adopted various forms of the PAL 625 line/50Hz composite color television standard as their broadcast standard and other European nations had adopted various forms of the SECAM 625 line/50Hz composite color television standard, the European community's search for a common digital interface standard implied that a system that was independent of the color coding technique used for transmission would be required.
Developments within the European community
In September 1972, the European Broadcasting Union (EBU) formed Working Party C, chaired by Peter Rainger to investigate the subject of coding television systems. In 1977, based on the work of Working Party C, the EBU issued a document recommending that the European community consider a component television production standard, since a component signal could be encoded as either a PAL or SECAM composite signal just prior to transmission.
At a meeting in Montreux, Switzerland in the spring of 1979, the EBU reached agreement with production equipment manufacturers that the future of digital program production in Europe would be best served by component coding rather than composite coding, and the EBU established a research and development program among its members to determine appropriate parameter values. This launched an extensive program of work within the EBU on digital video coding for program production. The work was conducted within a handful of research laboratories across Europe and within a reorganized EBU committee structure including: Working Party V on New Systems and Services chaired by Peter Rainger; subgroup V1 chaired by Yves Guinet, which assumed the tasks originally assigned to Working Party C; and a specialist supporting committee V1 VID (Vision) chaired by Howard Jones. David Wood, representing the EBU Technical Center, served as the secretariat of all of the EBU committees concerned with digital video coding.
In 1979, EBU VI VID proposed a single three channel (Y, R-Y, B-Y) component standard. The system stipulated a 12.0 MHz luminance (Y) channel sampling frequency and provided for each of the color difference signals (R-Y and B-Y) to be sampled at 4.0 MHz. The relationship between the luminance and color difference signals was noted sometimes as (12:4:4) and sometimes as (3:1:1). The proposal, based on the results of subjective quality evaluations, suggested these values were adequate to transparently deliver 625/50i picture quality.
The EBU Technical Committee endorsed this conclusion at a meeting in April 1980, and instructed its technical groups: V, V1, and V1 VID to support this effort.
SMPTE organized for the task at hand
Three SMPTE committees were charged with addressing various aspects of world wide digital standards. The first group, organized in late 1974, was the Digital Study Group chaired by Charles Ginsburg. The Study Group was charged with investigating all issues concerning the application of digital technology to television. The second group was a Task Force on Component Digital Coding with Frank Davidoff as chairman. This Task Force, which began work in February 1980, was charged with developing a recommendation for a single worldwide digital interface standard. While membership in SMPTE committees is generally open to any interested and affected party, the membership of the Task Force had been limited to recognized experts in the field. The third group was the Working Group on Digital Video Standards. This Working Group was charged with documenting recommendations developed by the Study Group or the Task Force and generating appropriate standards, recommended practices, and engineering guidelines.
In March 1977, the Society of Motion Picture and Television Engineers (SMPTE) began development of a digital television interface standard. The work was assigned by SMPTE's Committee on New Technology chaired by Fred Remley to the Working Group on Digital Video Standards chaired by Dr. Robert Hopkins.
By 1979, the Working Group on Digital Video Standards was completing development of a digital interface standard for NTSC television production. Given the state of the art at the time and the desire to develop a standard based on the most efficient mechanism, the Working Group created a standard that allowed the NTSC television video signal to be sampled as a single composite color television signal. It was agreed after a long debate on the merits of three times sub carrier (3fsc) versus four times sub carrier (4fsc) sampling that the Composite Digital Television Standard would require the composite television signal with its luminance channel and color sub carrier to be sampled at four times the color sub carrier frequency (4fsc) or 14.31818... MHz.
During the last quarter of 1979, agreement was reached on a set of parameter values, and the drafting of the Composite Digital Television Standard was considered completed. It defined a signal sampled at 4fsc with 8 bit samples. This standard seemed to resolve the problem of providing a direct digital interface for production facilities utilizing the NTSC standard.
By 1980, the Committee on New Technology was being chaired by Hopkins and the Working Group on Digital Video Standards was being chaired by Ken Davies.
Responding to communications with the EBU and so as not to prejudice the efforts being made to reach agreement on a world wide component standard, in January 1980, Hopkins put the finished work on the NTSC Composite Digital Television Standard temporarily aside so that any minor modifications to the document that would serve to meet possible world wide applications could be incorporated before final approval. Since copies of the document were bound in red binders, the standard was referred to as the "Red Book".
Seeking a Common Reference
The agenda of the January 1980 meeting of SMPTE's Digital Study Group included a discussion on a world wide digital television interface standard. At that meeting, the Study Group considered the report of the European community, and members of the EBU working parties had been invited to attend. Although I was not a member of the Study Group, I was also invited to attend the meeting.
It was recognized that while a three color representation of the television signal using Red, Blue, and Green (R, G, B) was the simplest three component representation, a more efficient, though more complex, representation is to provide a luminance or gray scale channel (Y) and two color difference signals (R-Y and B-Y). The R-Y and B-Y components take advantage of the characteristics of the human visual system, which is less sensitive to high resolution information for color than for luminance. This allows for the use of a lower number of samples to represent the color difference signals without observable losses in the restored images. Color difference components (noted as I, Q or U, V or Dr, Db) were already in use in the NTSC, PAL, and SECAM systems to reduce the bandwidth required to support color information.
Members of the NTSC community present at the January 1980 Study Group meeting believed that the EBU V1 VID proposed 12.0 MHz (3:1:1) set of parameters would not meet the needs for NTSC television post production, particularly with respect to chroma keying, then becoming an important tool. In addition, it was argued that: (1) the sampling frequency was too low (too close to the Nyquist point) for use in a production environment where multiple generations of edits were required to accommodate special effects, chroma keying, etc., and (2) a 12.0 MHz sampling system would not produce an orthogonal array of samples in NTSC (at 12.0 MHz, there would be 762.666... pixels per line).
The NTSC community offered for consideration a single three channel component standard based on (Y, R-Y, B-Y). This system stipulated a 4fsc (14.318 MHz) luminance sampling frequency equal to 910 x fH525, where fH525 is the NTSC horizontal line frequency. The proposal further provided for each of the color difference components to be sampled at 2fsc or 7.159 MHz. This relationship between the luminance and color difference signals was noted as (4:2:2). Adopting 4fsc as the luminance sampling frequency would facilitate trans coding of video recorded using the “single wire” NTSC composite standard with studio mixers and editing equipment based on a component video standard.
Representatives of the European television community present at the January 1980 Study Group meeting pointed to some potential difficulties with this proposal. The objections included: (1) that the sampling frequency was too high for use in practical digital recording at the time, and (2) a 14.318 MHz sampling system would not produce an orthogonal array of samples in a 625 line system (at 14.318 MHz, there would be 916.36... pixels per line).
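Both objections can be checked with simple arithmetic. The sketch below is a modern illustration in Python (using exact rational arithmetic; it is not part of the 1980 record): orthogonal sampling requires an integer number of samples per total line, so each candidate frequency is divided by the exact line frequency of each system.

```python
from fractions import Fraction

# Exact horizontal line frequencies derived from the 4.5 MHz sound carrier
FH_525 = Fraction(4_500_000, 286)      # NTSC line rate, 15734.265... Hz
FH_625 = Fraction(15_625)              # 625-line rate, exactly 15625 Hz

CANDIDATES = [
    ("12.0 MHz (EBU V1 VID proposal)", Fraction(12_000_000)),
    ("4fsc = 14.318... MHz (NTSC proposal)", Fraction(910) * FH_525),
]

for label, f_sample in CANDIDATES:
    for system, fh in [("525-line", FH_525), ("625-line", FH_625)]:
        samples = f_sample / fh
        verdict = "orthogonal" if samples.denominator == 1 else "not orthogonal"
        print(f"{label}: {system}: {float(samples):.3f} samples/line ({verdict})")
```

The 12.0 MHz rate divides evenly only into the 625 line raster (768 samples per total line), while 4fsc divides evenly only into the 525 line raster (910 samples per total line), reproducing the 762.666... and 916.36... figures cited above.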
During the January 1980 Study Group meeting discussion, I asked why the parties involved had not considered a sampling frequency that was a multiple of the 4.5 MHz sound carrier, since the horizontal line frequencies of both the 525 line and 625 line systems had an integer relationship to 4.5 MHz.
The original definition of the NTSC color system established a relationship between the sound carrier frequency (fs) and the horizontal line frequency (fH525) as fH525 = fs/286 = 15734.265... Hz, had further defined the vertical field rate fV525 = (fH525 x 2)/525 = 59.94006 Hz, and defined the color sub carrier (fsc) = (fH525 x 455)/2 = 3.579545.... MHz. Therefore, all the frequency components of the NTSC system could be derived as integer sub multiples of the sound carrier.
The 625 line system defined the horizontal line frequency (fH625) = 15625 Hz and the vertical field rate fV625 = (fH625 x 2)/625 = 50 Hz. It was noted from the beginning that the relationship between fs and the horizontal line frequency (fH625) could be expressed as fH625 = fs/288. Therefore, any sampling frequency that was an integer multiple of 4.5 MHz (fs) would produce samples in either the 525 line or 625 line systems that were orthogonal.
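As an illustrative check of these relationships (not part of the original papers), exact rational arithmetic confirms that every frequency in both systems derives from the 4.5 MHz sound carrier:

```python
from fractions import Fraction

FS = Fraction(4_500_000)        # sound carrier fs, in Hz

# 525-line (NTSC) system: every frequency follows from fs
fh525 = FS / 286                # horizontal line rate, 15734.265... Hz
fv525 = fh525 * 2 / 525         # field rate, 59.94006... Hz
fsc   = fh525 * 455 / 2         # color subcarrier, 3.579545... MHz

# 625-line system: the same sound carrier divided by 288
fh625 = FS / 288                # exactly 15625 Hz
fv625 = fh625 * 2 / 625         # exactly 50 Hz

assert fh625 == 15625 and fv625 == 50
print(float(fh525), float(fv525), float(fsc), float(fh625), float(fv625))
```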
I was asked to submit a paper to the Study Group and the Task Force describing the relationship. The assignment was to cover two topics. The first topic was how the 625 line/50Hz community might arrive at a sampling frequency close to 14.318 MHz. The second topic was to explain the relationship between the horizontal frequencies of the 525 line and 625 line systems and 4.5 MHz.
This resulted in my authoring a series of papers written between February and April 1980 addressed to the SMPTE Task Force explaining why 13.5 MHz should be considered the choice for a common luminance sampling frequency. The series of papers was intended to serve as a tutorial with each of the papers expanding on the points previously raised. A few weeks after I submitted the first paper, I was invited to be a member of the SMPTE Task Force. During the next few months, I responded to questions about the proposal, and I was asked to draft a standards document.
Crunching the numbers
The first paper I addressed to the Task Force was dated 11 February 1980. This paper pointed to the fact that since the horizontal line frequency of the 525 line system (fH525 had been defined as 4.5 MHz/286 (or 2.25 MHz/143), and the horizontal line frequency of the 625 line system (fH625) was equal to 4.5 MHz/288 (or 2.25 MHz/144), any sampling frequency that was a multiple of 4.5 MHz/2 could be synchronized to both systems.
Since it would be desirable to sample color difference signals at less than the sampling rate of the luminance signal, then a sampling frequency that was a multiple of 2.25 MHz would be appropriate for use with the color difference components (R-Y, B-Y) while a sampling frequency that was a multiple of 4.5 MHz would be appropriate for use with the luminance component (Y).
Since the European community had argued that the (Y) sampling frequency must be lower than 14.318 MHz and the NTSC countries had argued that the (Y) sampling frequency must be higher than 12.00 MHz, my paper and cover letter dated 11 February 1980 suggested consideration of 3 x 4.5 MHz or 13.5 MHz as the common luminance (Y) channel sampling frequency (858 times the 525 line horizontal line frequency rate and 864 times the 625 line rate both equal 13.5 MHz).
My series of papers suggested adoption of a component color system based on (Y, R-Y, B-Y) and a luminance/color sampling relationship of (4:2:2), with the color signals sampled at 6.75 MHz. In order for the system to facilitate standards conversion and picture manipulation (such as that used in electronic special effects and graphics generators), both the luminance and color difference samples should be orthogonal. The desire to be able to trans code between component and composite digital systems implied a number of samples per active line that was divisible by four.
The February 1980 note further suggested that the number of samples per active line period should be greater than 715.5 to accommodate all of the world wide community standards active line periods. While the number of pixels per active line equal to 720 samples per line was not suggested until my next note, (720 is the number found in Rec. 601 and SMPTE 125), 720 is the first value that “works.” 716 is the first number greater than 715.5 that is divisible by 4 (716 = 4 x 179), but does not lend itself to standards conversion between 525 line component and composite color systems or provide sufficiently small pixel groupings to facilitate special effects or data compression algorithms. </p>
Additional arguments in support of 720 were provided in notes I generated prior to IBC'80 in September. Note that 720 equals 6! [6! (6 factorial) = 6x5x4x3x2x1] = 24 x 32 x 5. This allows for many small factors, important for finding an economical solution to conversion between the 525 line component and composite color standards and for image manipulation in special effects and analysis of blocks of pixels for data compression. The composite 525 line digital standard had provided for 768 samples per active line. 768 = 28 x 3. The relationship between 768 and 720 can be described as 768/720 = (28 x 3)/(24 x 32 x 5) = (24)/(3 x 5) = 16/15. A set of 16 samples in the NTSC composite standard could be used to calculate a set of 15 samples in the NTSC component standard.
Proof of Performance
At the September 1980 IBC conference, international consensus became focused on the 13.5 MHz, (4:2:2) system. However, both the 12.0 MHz and 14.318 MHz systems retained some support for a variety of practical considerations. Discussions within the Working Group on Digital Video Standards indicated that consensus could not be achieved without the introduction of convincing evidence.
SMPTE proposed to hold a “Component Coded Digital Video Demonstration” in San Francisco in February 1981 organized by and under the direction of the Working Group on Digital Video Standards to evaluate component coded systems. A series of practical tests/demonstrations were organized to examine the merits of various proposals with respect to picture quality, production effects, recording capability and practical interfacing, and to establish an informed basis for decision making.
The EBU had scheduled a series of demonstrations in January 1981 for the same purpose. SMPTE invited the EBU to hold its February meeting of the Bureau of the EBU Technical Committee in San Francisco to be followed by a joint meeting to discuss the results of the tests. It was agreed that demonstrations would be conducted at three different sampling frequencies (near 12.0 MHz, 13.5 MHz, and 14.318 MHz) and at various levels of performance.
From 2nd through the 6th of February 1981 (approximately, one year from the date of the original 13.5 MHz proposal), SMPTE conducted demonstrations at KPIX Television, Studio N facilities in San Francisco in which a number of companies participated. Each participating sponsor developed equipment with the digital interface built to the specifications provided. The demonstration was intended to provide proof of performance and to allow the international community to come to an agreement.
'The demonstration organizing committee had to improvise many special interfaces and interconnections, as well as create a range of test objects, test signals, critical observation criteria, and a scoring and analysis system and methodology.
The demonstrations were supported with equipment and personnel by many of the companies that were pioneers in the development of digital television and included: ABC Television, Ampex Corporation, Barco, Canadian Broadcasting Corporation, CBS Technology Center, Digital Video Systems, Dynair, Inc., KPIX Westinghouse Broadcasting, Leitch Video Ltd., Marconi Electronics, RCA Corporation and RCA Laboratories, Sony Corporation, Tektronix Inc., Thomson CSF, VG Electronics Ltd., and VGR Corporation. I participated in the demonstrations as a member of SMPTE's Working Group on Digital Video Standards, providing a Vidifont electronic graphics generator whose interface conformed to the new standard.
Developing an agreement
The San Francisco demonstrations proved the viability of the 13.5 MHz, (4:2:2) proposal. At a meeting in January 1981, the EBU had considered a set of parameters based on a 13.0 MHz (4:2:2) system. Additional research conducted by EBU members had shown that a (4:2:2) arrangement was needed in order to cope with picture processing requirements, such as chroma key, and the EBU members believed a 13.0 MHz system appeared to be the most economic system that provided adequate picture processing. Members of the EBU and SMPTE committees met at a joint meeting chaired by Peter Rainger in March 1981 and agreed to propose the 13.5 MHz, (4:2:2) standard as the world wide standard. By autumn 1981, NHK in Japan led by Mr. Tadokoro, had performed its own independent evaluations and concurred that the 13.5 MHz, (4:2:2) standard offered the optimum solution.
A number of points were generally agreed upon and formed the basis of a new world wide standard. They included:
- The existing colorimetry of EBU (for PAL and SECAM) and of NTSC would be retained for 625 line and 525 line signals respectively, as matrixing to a common colorimetry was considered overly burdensome;
- An 8 bit per sample representation would be defined initially, being within the state of the art, but a 10 bit per sample representation would also be specified since it was required for many production applications;
- The range of the signal to be included should include head room (above white level) and foot room (below black level) to allow for production overshoots;
- The line length to be sampled should be somewhat wider than those of the analog systems (NTSC, PAL, and SECAM) under consideration to faithfully preserve picture edges and to avoid picture cropping;
- A bit parallel, sample multiplexed interface (e.g. transmitting R-Y, Y, B-Y, Y, R-Y, ...) was practical, but in the long term, a fully bit and word serial system would be desirable;
- The gross data rate should be recordable within the capacity of digital tape recorders then in the development stages by Ampex, Bosch, RCA, and Sony.
The standard, as documented, provided for each digital sample to consist of at least 8 bits, with 10 allowed. The values for the black and white levels were defined, as was the range of the color signal. (R-Y) and (B-Y) became CR [=0.713 (R-Y)] and CB [=0.564 (B-Y)]. While the original note dated February 1980 addressed to the Task Force proposed a code of 252(base10) =(1111 1100) for ‘white’ at 100 IRE and a code of 72 (base10) =(0100 1000) for ‘black’ at 0 IRE to allow capture of the sync levels, agreement was reached to better utilize the range of codes to capture the grey scale values with more precision and provide more overhead. ‘White’ was to be represented by an eight bit code of 240(base10) =(1111 0000) and ‘black’ was to be represented by an eight bit code 16 (base10) =(0001 0000). The original codes for defining the beginning and the end of picture lines and picture area were discussed, modified, and agreed upon, as well as synchronizing coding for line, field, and frame, each coding sequence being unique and not occurring in the video signal.SMPTE and EBU organized an effort over the next few months to familiarize the remainder of the world wide television community with the advantages offered by the 13.5 MHz, (4:2:2) system and the reasoning behind its set of parameters. Members of the SMPTE Task Force traveled to Europe and to the Far East. Members of the EBU committees traveled to the, then, Eastern European block nations and to the members of the OTI, the organization of the South American broadcasters. The objective of these tours was to build a consensus prior to the upcoming discussion at the ITU in the autumn of 1981. The success of this effort could serve as a model to be followed in developing future agreements.
I was asked to draft a SMPTE standard document that listed the parameter values for a 13.5 MHz system for consideration by the SMPTE Working Group. Since copies of the document were bound in a green binder prior to final acceptance by SMPTE, the standard was referred to as the “Green Book”.
In April 1981, the draft of the standard titled “Coding Parameters for a Digital Video Interface between Studio Equipment for 525 line, 60 field Operation” was distributed to a wider audience for comment. This updated draft reflected the status of the standard after the tests in San Francisco and agreements reached at the joint EBU/SMPTE meeting in March 1981. The EBU community later requested a subtle change to the value of ‘white’ in the luminance channel, and it assumed the value of 235(base10). This change was approved in August 1981.
After review and some modification as noted above to accommodate European concerns, the “Green Book” was adopted as SMPTE Standard 125.
ITU/R Recommendation 601
The European Broadcasting Union (EBU) generated an EBU Standard containing a companion set of parameter values. The SMPTE 125 and EBU documents were then submitted to the International Telecommunications Union (ITU). The ITU, a treaty organization within the United Nations, is responsible for international agreements on communications. The ITU Radio Communications Bureau (ITU-R/CCIR) is concerned with wireless communications, including allocation and use of the radio frequency spectrum. The ITU also provides technical standards, which are called “Recommendations.”
Within the ITU, the development of the Recommendation defining the parameter values of the 13.5 MHz (4:2:2) system fell under the responsibility of ITU-R Study Group 11 on Television. The chair of Study Group 11, Prof. Mark I. Krivocheev, assigned the drafting of the document to a special committee established for that purpose and chaired by David Wood of the EBU. The document describing the digital parameters contained in the 13.5 MHz, (4:2:2) system was approved for adoption as document 11/1027 at ITU-R/CCIR meetings in Geneva in September and October 1981. A revised version, document 11/1027 Rev.1, dated 17 February 1982, and titled “Draft Rec. AA/11 (Mod F): Encoding parameters of digital television for studios” was adopted by the ITU-R/CCIR Plenary Assembly in February 1982. It described the digital interface standard for transfer of video information between equipment designed for use in either 525 line or 625 line conventional color television facilities. Upon approval by the Plenary Assembly, document 11/1027 Rev.1 became CCIR Recommendation 601.
The Foundation for HDTV and Digital Television Broadcasting Services
The 4:2:2 Component Digital Television Standard allowed for a scale of economy and reliability that was unprecedented by providing a standard that enabled the design and manufacture of equipment that could operate in both 525 line/60Hz and 625 line/50Hz production environments. The 4:2:2 Component Digital Television Standard permitted equipment supplied by different manufacturers to exchange video and embedded audio and data streams and/or to record and playback those streams directly in the digital domain without having to be restored to an analog signal. This meant that the number of different processes and/or generations of recordings could be increased without the noticeable degradation of the information experienced with equipment based on analog technology. A few years after the adoption of the 4:2:2 Component Digital Television Standard, all digital production facilities were shown to be practical.
A few years later when the ITU defined “HDTV,” the Recommendation stipulated: “the horizontal resolution for HDTV as being twice that of conventional television systems” described in Rec. 601and a picture aspect ratio of 16:9. A 16:9 aspect ratio picture requires one-third more pixels per active line than a 4:3 aspect ratio picture. Rec. 601 provided 720 samples per active line for the luminance channel and 360 samples for each of the color difference signals. Starting with 720, doubling the resolution to 1440, and adjusting the count for a 16:9 aspect ratio leads to the 1920 samples per active line defined as the basis for HDTV. Accommodating the Hollywood and computer communities' request for “square pixels” meant that the number of lines should be 1920 x (9/16) = 1080.
Progressive scan systems at 1280 pixels per line and 720 lines per frame are also a member of the “720 pixel” family. 720 pixels x 4/3 (resolution improvement) x 4/3 (16:9 aspect ratio adjustment) = 1280. Accommodating the Hollywood and computer communities' request for square pixels meant that the number of lines should be 1280 x (9/16) = 720.
The original 720 pixel per active line structure became the basis of a family of structures (the 720 pixel family) that was adopted for MPEG based systems including both conventional television and HDTV systems. Therefore, most digital television systems, including digital video tape systems and DVD recordings are derived from the format described in the original 4:2:2 standard.
The existence of a common digital component standard for both 50 Hz and 60 Hz environments as documented in SMPTE 125 and ITU Recommendation 601 provided a path for television production facilities to migrate to the digital domain. The appearance of high quality, fully digital production facilities providing digital video, audio, and metadata streams and the successful development of digital compression and modulation schemes allowed for the introduction of digital television broadcast services.
In its 1982-1983 award cycle, the National Academy of Television Arts and Sciences recognized the 4:2:2 Component Digital Standard based on 13.5 MHz (Y) sampling with 720 samples per line with three EMMY awards:
The European Broadcasting Union (EBU) was recognized: “For achieving a European agreement on a component digital video studio specification based on demonstrated quality studies and their willingness to subsequently compromise on a world wide standard.”
The International Telecommunications Union (ITU) was recognized: “For providing the international forum to achieve a compromise of national committee positions on a digital video standard and to achieve agreement within the 1978-1982 period.”
The Society of Motion Picture and Television Engineers (SMPTE) was recognized: “For their early recognition of the need for a digital video standard, their acceptance of the EBU proposed component requirement, and for the development of the hierarchy and line lock 13.5 MHz demonstrated specification, which provided the basis for a world standard.”
This narrative is intended to acknowledge the early work on digital component coded television carried out over several years by hundreds of individuals, organizations, and administrations throughout the world. It is not possible in a limited space to list all of the individuals or organizations involved, but by casting a spotlight on the results of their work since the 1960's and its significance, the intent is to honor them - all.
Individuals interested in the specific details of digital television standards and picture formats (i.e. 1080p, 720p, etc.) should inquire at www.smpte.org. SMPTE is the technical standards development organization (SDO) for motion picture film and television production.
- ↑ This article builds on a prior article by Stanley Baron and David Wood; simultaneously published in the SMPTE Motion Imaging Journal, September 2005, pp. 327 334 as “The Foundations of Digital Television: the origins of the 4:2:2 DTV standard" and in the EBU Technical Review, October 2005, as "Rec. 601 the origins of the 4:2:2 DTV standard.”
- ↑ Guinet, Yves; “Evolution of the EBU's position in respect of the digital coding of television”, EBU Review Technical, June 1981, pp.111 117.
- ↑ Davies, Kenneth; “SMPTE Demonstrations of Component Coded Digital Video, San Francisco, 1981”, SMPTE Journal, October 1981, pp.923 925.
- ↑ Fink, Donald; “Television Engineering Handbook”, McGraw Hill [New York, 1957], p.7 4.
- ↑ Baron, S.; “Sampling Frequency Compatibility”, SMPTE Digital Study Group, January 1980, revised and submitted to the SMPTE Task Force on Digital Video Standards, 11 February 1980. Later published in SMPTE Handbook, “4:2:2 Digital Video: Background and Implementation”, SMPTE, 1989, ISBN 0 940690 16, pp.20 23.
- ↑ Weiss, Merrill &amp;amp; Marconi, Ron; “Putting Together the SMPTE Demonstrations of Component Coded Digital Video, San Francisco, 1981”, SMPTE Journal, October 1981, pp.926 938.
- ↑ Davidoff, Frank; “Digital Television Coding Standards”, IEE Proceedings, 129, Pt.A., No.7, September 1982, pp.403 412.
- ↑ Nasse, D., Grimaldi, J.L., and Cayet, A; “An Experimental All Digital Television Center”, SMPTE Journal, January 1986, pp. 13 19.
- ↑ ITU Report 801, “The Present State of High Definition Television”, Part 3, “General Considerations of HDTV Systems”, Section 4.3, “Horizontal Sampling”. | <urn:uuid:9a916a8d-2c90-4824-b961-0b5932af2602> | CC-MAIN-2013-20 | http://www.ieeeghn.org/wiki/index.php?title=First-Hand:The_Foundation_of_Digital_Television:_the_origins_of_the_4:2:2_component_digital_standard&redirect=no | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.944655 | 7,615 | 3.375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
J.S. Bach was born in Eisenach, Germany, in 1685 and died in 1750. He came from a long family line of professional musicians, including church organists and composers. Like his father, Johann Ambrosius Bach, J.S. (Johann Sebastian) would learn this art of musical composition and ultimately surpass him.
Bach's childhood was marked by loss: his mother died when he was nine, and his father died less than a year later. Although he spent much time with his musically inclined uncles, he also spent time studying with and learning from his older brother, Johann Christoph Bach.
Growing up, Bach learned much about organ building. In those days, the church organ was a highly complex instrument with many mechanical moving parts, pedals, and pipes. His early experience repairing organs and talking with organ builders and performers would prove valuable as he mastered his musical craft.
I’m struggling a bit to teach my children to pack for themselves. I want them to learn how to be self-reliant, but I also want to make sure they have everything they need for the day. If I don’t triple check every detail, they’re likely to be fully prepared for snack time but missing important papers or sports equipment. What’s the right thing to do?
Your desire to raise self-reliant children is fantastic. But there's no doubt that passing the baton can be tough. The first question has to be: how old are your children? A good general rule of thumb is, if they're old enough to read, they're old enough to pack their own bags. Assuming your little ones are old enough, the most effective approach is to give them a time frame for taking complete responsibility for getting themselves ready, to ask questions that prompt them if you think they aren't paying attention to something crucial, and, most importantly, when things aren't crucial (e.g., whether they have the right uniform packed), to let them fail. Nothing teaches quite like experience. As you let go of the reins a bit, here are some more ideas to guide you.
• Planning Starts the Night Before. Mornings are not the right time to teach your children how to pack themselves. You’re rushed, and they’re often bleary-eyed and grumpy. The ideal time to sit down with them, explain what you are trying to accomplish, and get them to start preparing for the next day is after homework but before TV time. That way you have time to ask them questions and offer un-stressed help in the initial stages. This is a process that will take time and spending time in the evenings helping them learn how to become responsible for themselves is time well spent.
• Explain as You Go. You need to develop a checklist with them and then go through the items. Don't criticize or watch over the task being done. Accept that the task will not be done exactly the way you would do it, but recognize that as long as it is accomplished and done on time, that is okay. In the beginning, be prepared to patiently ask and answer a lot of questions! Why do emergency numbers need to be in the backpacks? Because you might need to call someone. Why does lunch have to be prepared? So that mom knows they are eating healthy and, besides, too much sugar will make them feel bad. Why do you keep asking about permission slips or projects that need to go with them? Because it's important they do not miss out on something the rest of the class is doing. This is just a primer, but you get the idea.
• Provide Feedback. Once the task has been completed, give constructive feedback to the person. As a guideline, tell your son or daughter five great things about the job for every one criticism. If after some time you notice they are consistently sloppy or forgetful, be patient but firm and make sure there are consequences for actions. | <urn:uuid:c2dcc337-4538-4bf9-8838-d2005b03e76d> | CC-MAIN-2013-20 | http://www.mommytracked.com/node/4457/print | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.964187 | 638 | 2.59375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Bluetooth is an industry specification for wireless data transfer. Bluetooth connectivity is often found in high-end keyboards and mice. Bluetooth generally provides an operating range of up to 30 feet and is less prone to interference than RF technology.
DPI and FPS
DPI (dots per inch) and FPS (frames per second) are the number of counts in an inch of movement and the number of times the sensor reads the surface in a second, respectively. These figures measure the amount of information recorded by the mouse's sensor. The greater the amount of information gathered, the more accurately and precisely the surface can be tracked. To detect movement, optical and laser mice use sensors to read beams of light as they are reflected from the tracking surface.
Currently 400 and 800 DPI optical mice as well as 800 DPI laser mice are very popular, but some high-end models are capable of 1000, 1600 or even 2000 DPI tracking speeds.
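To make these DPI figures concrete, here is a minimal Python sketch (illustrative only, not from any mouse vendor's documentation; the function names and the 800 DPI value are assumptions) showing how sensor counts translate into physical and on-screen movement:

```python
def counts_to_inches(counts: int, dpi: int) -> float:
    """Convert raw sensor counts into inches of physical mouse travel.

    A mouse rated at `dpi` counts per inch reports `dpi` counts
    for every inch it physically moves across the tracking surface.
    """
    return counts / dpi


def on_screen_pixels(inches_moved: float, dpi: int, sensitivity: float = 1.0) -> float:
    """Estimate cursor travel in pixels, assuming roughly one pixel per
    count at a 1.0 sensitivity multiplier and no OS pointer acceleration."""
    return inches_moved * dpi * sensitivity


# Example: moving an 800 DPI mouse 2 inches yields 1600 counts,
# i.e. about 1600 pixels of cursor travel at default sensitivity.
counts = 800 * 2
print(counts_to_inches(counts, dpi=800))   # -> 2.0 (inches)
print(on_screen_pixels(2.0, dpi=800))      # -> 1600.0 (pixels)
```

A higher-DPI sensor therefore reports finer-grained movement for the same physical distance, which is why high-DPI mice track more precisely.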
The Personal System/2 or PS/2 was the designation for IBM's second generation of personal computers. The PS/2 keyboard and mouse ports were introduced with it. PS/2 ports connect the keyboard and mouse to a computer and are usually color-coded on today's systems - purple for keyboards and green for mice. Most desktop motherboards still provide PS/2 ports, but an increasing number of keyboards and mice are using USB ports.
Radio Frequency (RF) is a wireless communication technology. RF technology allows keyboards and mice to connect to computers without annoying cables.
The USB (Universal Serial Bus) port is a popular I/O interface used for connecting peripherals and other devices to computers. It is capable of supporting up to 127 peripheral devices simultaneously, connected through tiers of hubs. The latest USB 2.0 specification can deliver 480 Mbps of data transfer bandwidth. In addition, USB provides plug-and-play capability, allowing devices to be added or removed while the computer is powered on. Today, many keyboards and mice use the USB interface.
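As a rough illustration of the quoted 480 Mbps figure, the short sketch below (a hypothetical example, not part of the original glossary) converts the theoretical USB 2.0 signaling rate into megabytes per second and estimates a best-case file transfer time; real-world throughput is noticeably lower because of protocol overhead:

```python
USB2_SIGNALING_RATE_MBPS = 480  # theoretical USB 2.0 rate, megabits per second


def best_case_transfer_seconds(file_size_mb: float) -> float:
    """Best-case transfer time at the raw signaling rate.

    480 megabits/s divided by 8 bits per byte = 60 megabytes/s peak.
    """
    megabytes_per_second = USB2_SIGNALING_RATE_MBPS / 8  # 60 MB/s
    return file_size_mb / megabytes_per_second


# Example: a 700 MB file needs at least ~11.7 seconds in theory.
print(round(best_case_transfer_seconds(700), 1))
```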
During the next two weeks, you can help build a map of global light pollution, assisting scientists and astronomers as they monitor the loss of virgin night skies. You just have to look at the stars and write down what you see — or, more likely, what you don’t see.
Imagine if every time you needed to officially identify yourself you had to be sedated and knocked out cold. This might sound only slightly less stressful than checking through security at the airport, but for animals being tracked by wildlife authorities and researchers it’s a regularity that is not only stressful, but potentially harmful.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:500e5554-819d-4c21-8fac-36b69903181e> | CC-MAIN-2013-20 | http://www.popsci.com/category/tags/sea-turtles | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.953186 | 177 | 2.53125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Henri Matisse, French (Le Cateau-Cambrésis, France, 1869 - Nice, France, 1954)
Along with Pablo Picasso, Henri Matisse was one of the pillars of the Parisian avant-garde, whose formal innovations in painting would dominate much of modern art. Matisse initially worked in law, but discovered a passion for art when he began painting as an amateur. He went on to study traditional academic painting. In the early years of the twentieth century, however, he rejected the idea that painting had to imitate the appearance of nature. His characteristic innovations were the use of vibrant, arbitrary colors; bold, autonomous brushstrokes; and a flattening of spatial depth. This anti-naturalistic style inspired the critical name "fauves," or "wild beasts," for the group of painters around Matisse.
Ironically, Matisse often applied his thoroughly modern style to traditional subjects such as still lifes, landscapes, and portraits. Such works express a sense of timeless joy and stillness that runs counter to the frenetic, technologically inspired compositions of many of his contemporaries. Although primarily dedicated to painting, Matisse was also active as a sculptor and printmaker. In the 1940s, in failing health, he embarked on a well-known group of cut-paper collages. | <urn:uuid:7e985f30-8f6b-4310-945c-71148b9fc48c> | CC-MAIN-2013-20 | http://www.sfmoma.org/explore/collection/artists/463?artwork=78 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.972218 | 271 | 3.203125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
It has been six years since the U.S. has had to worry about mad cow disease. With the recent confirmed case of the disease in a California bovine, the public is worried about food safety. Is there reason for concern?
Is our food safe?
Since 2006, the U.S. has had no positive tests for mad cow disease, or bovine spongiform encephalopathy (BSE). This new case, found in California's Central Valley, marks only the fourth occurrence ever in the U.S., out of 40,000 tests each year. The infected cow, however, never entered the human food chain, meaning that there is no risk to beef or dairy products, nor is there a risk for other countries that import U.S. beef.
No risk to humans
In an effort to quell the rising public concern, the USDA has issued a statement regarding the recent case of mad cow disease. In part, Tom Vilsack, U.S. Agriculture Secretary, said, "The beef and dairy in the American food supply is safe and USDA remains confident in the health of U.S. cattle… USDA has no reason to believe that any other U.S. animals are currently affected, but we will remain vigilant and committed to the safeguards in place."
To further ease our minds, John Clifford, the U.S. Department of Agriculture’s chief veterinarian, said that this particular cow died of an atypical form of mad cow disease which was caused by a random mutation and not from contaminated feed, meaning that it was a chance occurrence.
There was a time when mad cow disease was rampant, but in recent years, the numbers have dropped drastically. In 2011, only 29 cases were reported worldwide, compared to over 37,000 cases in 1992. Cattle ranchers are actually touting this recent discovery as proof that the system is working as it should.
Alice Walker usually puts herself into the characters she writes about in her stories. However, you don't understand this unless you know about her. Starting with this, let us find out who she is and where she came from. When recounting the life of Alice Walker, you find out that she was born to sharecroppers in Eatonton, Georgia, in 1944 and was the baby of eight children. She lost one of her eyes when her brother shot her with a BB gun by accident. She was valedictorian of her high school class, and with that and a scholarship, she went to Spelman, a college for black women, in Atlanta. She then transferred to Sarah Lawrence College in New York and during her time there went to Africa as an exchange student. She received her Bachelor of Arts degree from Sarah Lawrence in 1965. She was active in the Civil Rights Movement of the '60s and as of the '90s she is still an involved activist. She started her own publishing company, Wild Tree Press, in 1984. She is an acclaimed writer and even received a Pulitzer Prize for the novel The Color Purple. What is it about her that makes her works so meaningful and persuasive? What provoked her to write what she has?
One of her works, a short story called "Everyday Use," is a story in which she herself can be pictured. At the opening of this story you find a woman with her two daughters. She and one of her daughters, Maggie, have just cleaned and beautified the yard of their new house. It is very comforting sitting under the elm tree that blocks the wind from going through the house. It is a place where you feel enveloped in comfort and love. Maggie and Dee, the other daughter, are very different, and it is very apparent that the mother is not your "everyday" woman. She, the mother, is "a larger woman that can kill and clean a hog as mercilessly as a man" (American Lit, p. 2470). She has no problem doing what needs to be done in order to feed and protect her... [continues]
July 16, 2010
"Having HIV appears to be associated with a greater risk of death, even when the immune system is relatively robust and patients have not started treatment," according to a study published Friday in the Lancet, MedPage Today reports (Smith, 7/15).
Though the WHO recommends patients begin receiving antiretroviral therapy (ART) when their CD4 levels -- a measure of immune system response -- dip below 350, "[t]he researchers said their findings point to the need for continuing studies to examine the risks and benefits of starting antiretroviral therapy, or ART, for patients with high CD4 cell counts," HealthDay News reports.
"For this study, researchers examined data from 40,830 HIV patients, aged 20 to 59, in Europe and North America, who had at least one CD4 count greater than 350 cells per microliter while not taking ART. The patients were divided into four risk groups: men who have sex with men, heterosexuals, injection drug users, and those with other or unknown risk factors," the news service writes.
"The relatively low rate for men who have sex with men suggests that unmeasured confounders -- such as lifestyle and socioeconomic factors -- might play a role in the high rates for the other groups, the researchers said," MedPage Today continues (7/15).
However, when compared to patients with CD4 counts between 350 and 499, the death rate was 23 percent lower in patients with counts of 500-699 and 34 percent lower in patients with counts at or above 700, according to the Lancet study. "Because ART might reduce the risk of death in such patients, these findings support the need for continuing studies (such as the START trial and further exploration of existing observational databases) of the risks and benefits of starting ART at CD4 counts greater than 350 cells per µL," the authors conclude (Study Group on Death Rates at High CD4 Count in Antiretroviral Naive Patients, 7/16).
However, the study authors "cautioned that the findings may not apply outside Europe and North America, where all of the patients were under care. ... They also noted that all of the patients were diagnosed early in the course of the disease, and their attitudes to healthcare might differ from those diagnosed later," MedPage Today adds (7/15).
Top Issues at AIDS 2010; Fauci on HIV Vaccine, Prevention; Funding Global HIV/AIDS Programs; HIV in The Middle East, North Africa
IRIN/PlusNews examines some of the "issues likely to top the list" during the International AIDS Conference-AIDS 2010, which kicks off July 18 in Vienna, Austria, including universal access to treatment, recent scientific developments in the area of HIV/AIDS research and the topic of treatment as prevention (7/15).
The Kaiser Family Foundation will provide webcasts of select sessions from AIDS 2010 starting with the Opening Session LIVE at 19:30 CEST/17:30 GMT/1:30 p.m. ET on Sunday, July 18.
Agence France-Presse features a conversation with Anthony Fauci, head of the National Institute for Allergy and Infectious Diseases (NIAID), who speaks of recent advances that scientists hope will bring them closer to the development of an HIV vaccine.
In the article, Fauci reflects on the results of the Thai HIV vaccine trial, which found an investigational HIV vaccine provided slight protection against HIV, and the recent discovery of three antibodies that protect against HIV in one individual. Fauci noted that while the two studies "have left scientists feeling 'much more confident that ultimately we will have a vaccine' against HIV/AIDS, although it was still impossible to say exactly when that would be," AFP writes.
Fauci also spoke of the importance of a continued emphasis on HIV prevention programs, including such things as male circumcision and syringe exchange programs. "Ways have to be found, too, to improve access to these preventive measures, especially in developing countries where only 20 percent of "populations who would benefit" actually have access to them, he added," the news service writes (Santini, 7/15).
In other news, Medecins Sans Frontieres (MSF) on Thursday said international donors need to maintain their commitments for global HIV/AIDS programs ahead of AIDS 2010, during which they pointed to their recent report that estimated the potential consequences of "'delayed, deferred, or denied'" global HIV/AIDS funding on patient populations worldwide, Reuters reports.
"The report suggested that far from cutting back on treatment projects in high-risk developing regions such as sub-Saharan Africa, donors should recognise that investing now in earlier treatment for more patients would pay off later," the news service writes (Kelland, 7/15).
"MSF's study showed that early and sustained treatment of HIV patients had born fruit in several regions, including Malawi's Thyolo district where the overall death rate dropped by a stunning 37 percent between 2000 and 2007, thanks to universal access to ARVs," AFP reports. "Where patients get treatment, 'there is an overall reduction of mortality in the community, there is also less tuberculosis and we start to see, where there is a high coverage of ARV, also a reduction in the number of new cases (of HIV/AIDS),' said [Mit] Philips," who authored the MSF report (7/15).
"In light of the financial crisis, donors may be tempted to walk away from their commitments to provide universal access to AIDS treatment," the report said, according to Reuters. "But these policies are short-sighted and fail to take into account long-term payoffs, including savings in economic terms, as well as increased quality of life and quality outcomes," Reuters continues.
The report also said the U.S. is "'flatlining' funding for AIDS treatment," the news service adds (Kelland, 7/15).
In related HIV/AIDS coverage, the National reports on the U.N. Development Program and UNAIDS' decision to form a commission to examine "whether legal structures criminalise certain types of high-risk behaviour and drive the disease underground" and what that might mean for the Middle East.
"Last month's U.N. conference in Dubai found many countries in the Middle East and North Africa fall 'well short' of providing universal treatment, with sufferers often subject to ill-treatment, social stigma and discrimination," the newspaper writes (Reinl, 7/15).
Meanwhile, the Economist looks at the travel restrictions people living with HIV/AIDS face throughout the world, including the Middle East. "In the past year, both China and America have lifted 20-odd-year bans stopping individuals with HIV from entering, but 51 countries still restrict movement in some form (be it entry to the country or a stay therein) based on a person's HIV status," the magazine writes. The magazine features a graphic demonstrating the countries that apply "the severest restrictions to HIV sufferers, including the denial of entry visas and even deportation" (7/15).
Being referred for radiation treatment is an unfamiliar experience to most cancer patients. On these pages, we will explain radiation oncology to you and answer questions that most often exist for our patients. And we will also explain why you should feel remarkably confident in coming to URMC, the region’s leader in radiation oncology.
What is Radiation Oncology?
Radiation oncology is one of the three major cancer specialties in oncologic medicine. It uses energy from radiation beams, radioisotopes, or charged particles to target tumors and eradicate cancer cells.
Radiation beams are usually generated in treatment machines, such as linear accelerators or high-energy CT scanners. Another type of radiation treatment uses radioisotopes, or radioactive materials. These are utilized in radiation implants and radioisotope-labeled molecules in the treatment of various cancers.
In addition to getting rid of cancer, radiation treatment is highly effective in reducing symptoms such as cancer-related pain. Radiation has also been used in the treatment of many benign (non-cancerous) conditions in both adults and children.
What Makes URMC Different?
At the James P. Wilmot Cancer Center, the Department of Radiation Oncology is an essential part of multidisciplinary care. In other words, a team of experts from surgery, medical oncology, radiation oncology, and many other disciplines will come together to evaluate and manage your cancer treatment. This is a unique approach to care and is considered the ideal model of cancer care.
The Department of Radiation Oncology provides state-of-the-art treatment technology to increase the curability of cancer while reducing side effects. Our comprehensive cancer care team includes physician radiation oncologists, radiation physicists, radiation therapists, dosimetrists, nurses, social workers, and nutritionists.
What Should I Expect as a Patient?
Your treatment will involve a team of healthcare providers from the Department of Radiation Oncology. Typically, a radiation oncologist will direct the radiation treatment process and plans. Your team will also include a secretary, a nurse, a nurse practitioner, a resident physician in training, radiation therapists who operate the treatment machines, and a radiation dosimetrist or physicist specializing in radiation treatment physical plans.
The department also offers assistance from social workers and nutritionists. Support groups for cancer patients are also available. These include disease-specific groups, age-specific groups, and many others.
A typical radiation treatment process begins with an initial consultation with your radiation oncologist. The treatment recommendation, indication, rationale, benefits, side effects, and potential risks will be explained to you. This is followed by a radiation simulation session, which takes approximately one hour. This simulation process ensures the accuracy of your treatment plan.
Your actual treatment will begin 7-14 days later. However, patients with cancer-related emergencies can begin their treatments sooner. Daily treatment visits may take 15-30 minutes, and the full course of treatment generally lasts 1 to 8 weeks, depending on the diagnosis and the treatment plan. Stereotactic brain radiosurgery is generally completed in one session; stereotactic body radiosurgery is generally completed in fewer than 10 sessions.
Your radiation oncologist, therapists, and the team nurse will be there for you every step of the way. They will help you assess treatment-related side effects, your progress, and tolerance.
What Technology do you Offer?
We offer state-of-the-art equipment for external beam radiation at all four of our treatment sites: Strong Memorial Hospital (SMH), Highland Hospital (HH), Cancer Center at Park Ridge (PR), and Sands Cancer Center.
- CT simulators (SMH, PR)
- Megavoltage CT (SMH)
- Cone-beam CT units (SMH)
- Linear accelerators with IMRT and IGRT capabilities (SMH, HH, PR, Sands)
- Novalis Radiosurgery (SMH)
- TomoTherapy (SMH)
- Brachytherapy (SMH, HH, PR)
- Prostate seed implants (SMH, HH, PR)
- GYN implants for gynecologic cancer (HH)
- Nucletron High Dose Rate Brachytherapy (HH)
- Liver radiation using Theraspheres (SMH)
- I-131 treatment for thyroid cancer (HH, PR)
- Radioactive mesh tumor bed boost for lung cancer (SMH)
- Total Body Irradiation (SMH)
- Accelerated partial breast radiotherapy using MammoSite or external beam (SMH, HH, PR, Sands)
How can I Learn More About my Disease or Condition?
Your cancer treatment team in the Department of Radiation Oncology will be your very best resource for learning more.
A radiation oncologist will evaluate your treatment process during your treatment course at a one-on-one session with you at least once a week. The nurse, nurse practitioner, physician's assistant, and resident physician on your team will also be valuable resources regarding education about your disease or condition.
In addition, the Wilmot Cancer Center has a patient and family resource center for those seeking additional information about specific cancers as well as information concerning radiation therapy. The center is located on the 1st floor of the Cancer Center.
If you are a patient and you need to speak to us right away, please call our Clinic Coordinator at (585) 275-4958.
Make an Appointment
If you would like to make an appointment or consult our physicians for a second opinion, please contact us at one of the following locations:
- Wilmot Cancer Center
- Highland Hospital
- Cancer Center at Park Ridge
- Sands Cancer Center | <urn:uuid:9ffa6721-0668-47e5-82ac-2b0016662f9d> | CC-MAIN-2013-20 | http://www.urmc.rochester.edu/radiation-oncology/patient-care/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920473 | 1,189 | 2.5625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
A new study in the Journal of the National Cancer Institute shows death rates from cancer steadily declined from 1994 to 1998. But the rates have since leveled off.
Lung cancer still remains the number one cancer killer in America. The American Cancer Society estimates that eight out of 10 lung cancer deaths are due to smoking.
Data from the Centers for Disease Control and Prevention shows Kentucky not only leads the way in percentage of smokers in the population, but also has the highest lung cancer death rates in the country.
The state health department says it will spend $4.8 million on tobacco control programs this year--an amount that still falls far short of the $25 million recommended by the CDC for the state. This puts Kentucky in 40th place overall for tobacco control spending.
Health officials say they are doing the best they can with the funds available for tobacco control programs. | <urn:uuid:939ea2bc-7a49-40ca-aa55-45b2b851707c> | CC-MAIN-2013-20 | http://www.wbko.com/home/headlines/452382.html?site=full | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.94371 | 175 | 2.671875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
The Molymod® biochemistry set for students can be used to make any number of simple organic molecules, but in particular the set is designed to allow the user to make many important biological molecules, including amino acids, peptides, polysaccharides, purines, pyrimidines, glycerides, and phospholipids. The set includes some hydrogen atoms with two holes, allowing the depiction of hydrogen bonding. Atom parts are made of attractive solid plastic spheres. In this set they are available with 1-4 holes in the usual angular orientations.
Contents of the Molymod® biochemistry molecular model set (for students)
The contents are contained within a sturdy plastic storage box (235 x 170 x 35 mm) and includes a brief instruction leaflet.
The Molymod® system is the original, unique, dual-scale system of high-quality, low-cost molecular models. These enormously popular sets are ideal for students but are also used by scientists all over the world.
Important notice: Molymod® atomic and molecular model products are scientific educational and visualization aids, and consequently are not suitable for children less than 10 years old.
How does the law treat Mountaintop Removal? Laws at the state and federal level regulate mountaintop removal coal mining and its environmental impacts, require varying levels of public participation, and apply varying amounts of scrutiny in the permitting process. This section will explore the governmental institutions that exert influence over mountaintop removal coal mining, the laws that regulate it, and potential changes in the law. There is great dispute as to whether or not regulations and their enforcement sufficiently safe-guard the health, safety, and well-being of communities living near mountaintop removal sites and the surrounding environment, so both law as written and law as applied will be explored in this section.
National Environmental Policy Act
The National Environmental Policy Act (NEPA) was signed into law in 1970 to ensure that the environmental impacts of federal agency actions and decisions are taken into account. The act created the Council on Environmental Quality, required environmental impact statements, and established a process to solicit public input so that environmental concerns are included in federal agency decision making.
In the case of mountaintop removal and valley fill permitting, coal companies prepare and submit an Environmental Impact Statement (EIS) for each permit. In theory, these statements assess the potential impact of mining on the environment. The Army Corps of Engineers is empowered to issue Finding of No Significant Impact (FONSI) documents, which supersede any concerns that may be present in the EIS by explaining why the Corps has concluded that there are no significant environmental impacts resulting from the granting of a permit. In the past, this power has been used to streamline the permitting process, despite the obvious impacts of mountaintop removal mining.
Instances exist where permits were granted despite inadequate EIS statements and were then challenged in court. For example, as part of the settlement agreement in Bragg v. Robertson (Civ. No. 2:98-0636 (S.D. W.V.)), the EPA, the Corps, the U.S. Interior Department's Fish & Wildlife Service and Office of Surface Mining, and the West Virginia Department of Environmental Protection (DEP) prepared an environmental impact statement (final EIS) looking at the impacts of mountaintop mining and valley fills.
More information is available in the Citizen's Guide to NEPA.
Clean Water Act
Congress passed the Clean Water Act (CWA) in 1972 with the intention of resolving the crisis of America's polluted waterways and wetlands. The CWA combines regulatory and non-regulatory tools, aiming to rid existing water systems of pollutants while stemming the creation of newly polluted waterways and wetlands. In order to safeguard against the dumping of waste and pollutants into waterways, the Act forbids all dumping (except for specific agricultural uses) that is not approved by the Army Corps of Engineers.
Surface mines are required to obtain a National Pollutant Discharge Elimination System (NPDES) permit, which is regulated under the CWA. In West Virginia, the DEP has primacy of enforcement of the NPDES permits, with the EPA acting as the federal oversight body. These permits cover all pollutants discharged off the site and into the waters of the United States, imposing effluent limits and requiring the site operator to explain in the mining plan how it will meet those limits and treat what runs off the site, among other requirements.
Valley Fill, or 404, Permits
If a mining plan calls for valley fills, a 404 permit must be obtained under Section 404 of the CWA, which allows the Corps to issue variances permitting fill in an intermittent or perennial stream. The EPA follows the United States Geological Survey's definitions for streams: an intermittent stream holds water during wet portions of the year, and a perennial stream holds water throughout the year.
The Corps does not have authority over water quality; that is the jurisdiction of the EPA, which oversees this permit. However, since anything that interferes with the flow of the waters of the United States is regulated by the Corps, the Corps administers valley fills. This is the only part of surface mine permitting where the state does not have primacy of enforcement.
"The thing that's ironic here is that the fill rule was originally developed for developers seeking to build, and this was intended for very small projects, primarily for filling in wetlands for building things like subdivisions and shopping malls," Sludge Safety Project staffer Mathew Louis-Rosenberg said. "So, the division that handles these permits is the Division of Wetlands. Within that, the standards that are developed, that are currently used to determine the environmental impact of a fill, were developed for wetland ecosystems.
"We can't allow them to continue to keep issuing permits. Look at what they're using to evaluate them. It's an ecosystem that bears no resemblance to what we're evaluating." The people who authored the guidelines testified to Joe Lovett, a West Virginia lawyer who's only tried cases aimed at ending mountaintop removal, that the methodology they developed was not appropriate for this ecosystem. "The people who literally wrote the book they use to decide whether they should issue a permit, said it's inappropriate," Louis-Rosenberg said.
There are two types of valley fill permits: individual and Nationwide 21. Large sites are supposed to obtain an individual permit, which carries a more stringent set of regulations requiring more data and proof that the project will not have an adverse impact on the environment. Nationwide 21, by contrast, allows the Corps to issue a blanket permit for smaller projects and, unlike the individual permit, does not require a public hearing.
Coal companies saw the value in nationwide permits, and would often break up large valley fills into smaller pieces in their permit applications in order to avoid the regulatory oversight of an
individual permit. In this way, many valley fills were created when a nationwide permit was granted and residents did not notice until dumping began. Beginning with a court victory for advocates against mountaintop removal in 2007, Nationwide 21 permits were declared in violation of the Clean Water Act. This ruling remained in effect until the Fourth Circuit Court overturned it in early 2009.
Surface Mining Control and Reclamation Act (SMCRA)
On February 26, 1972, a coal slurry impoundment ruptured at Buffalo Creek, W.Va. The rushing tidal wave of sludge killed over 120 people and left thousands homeless, yet only days before the rupture a federal inspector had judged the dam "satisfactory." The dangers of strip mining had long been of concern to coalfield residents.
The increased mechanization that eliminated many union mining jobs, together with the encroaching growth of strip mines, sparked a powerful grassroots movement to abolish strip mining. The tragedy of the Buffalo Creek flood stoked these flames, and five years later, in response to growing political pressure from Southern Democrats and coalfield residents, Jimmy Carter signed the Surface Mining Control and Reclamation Act in 1977.
It did not abolish strip mining, instead stating that its primary goal was to "establish a nationwide program to protect society and the environment from the adverse effects of surface coal mining operations." To do so, it created the Office of Surface Mining in the Department of the Interior and the regulations for them to enforce. The surface mines constructed before SMCRA are often referred to as "pre-law," while those after are called "post-law."
The SMCRA permit is the whole mining plan, top to bottom, including the blasting plan. The DEP administers SMCRA permits but, unlike the NPDES permits, the Interior Department's Office of Surface Mining and Reclamation maintains oversight. The DEP's Office of Explosives and Blasting also approves the blasting plan within the SMCRA permit. A typical Environmental Impact Statement is also required here, "but they usually don't get filled out," Louis-Rosenberg said.
SMCRA requires that "all surface coal mining operations back-fill, compact... and grade in order to restore the approximate original contour of the land with all high-walls, spoil piles and depressions eliminated." However, the WVDEP regularly grants exceptions to the Approximate Original Contour (AOC) rule, despite one of the act's stated goals: to "assure that surface mining operations are not conducted where reclamation as required by this Act is not feasible."
Since 1977, the following language has been added to SMCRA, weakening this goal:
In cases where an industrial, commercial, agricultural, residential or public facility (including recreational facilities) use is proposed for the postmining use of the affected land, the regulatory authority may grant a permit for a surface mining operation of the nature described... after consultation with the appropriate land use planning agencies, if any, [if] the proposed postmining land use is deemed to constitute an equal or better economic use of the affected land as compared to premining use.
This clause allows coal companies to not restore a site's AOC, as long as it is put to a better economic or social use than it was before. Under this clause, prisons and a golf course have been constructed on mountaintop removal sites. One such prison has earned the name Sink-Sink because it does just that. The mountains, and the ecosystems they support, have an intrinsic cultural value to the residents of the Coal River Valley that cannot be measured in monetary terms, and yet under SMCRA the economic value of the land supersedes all others.
Mine Safety and Health Administration Permit
The federal Mine Safety and Health Administration (MSHA) oversees the regulatory structure of the mines. All mines have to pass muster on the safety of operating the mine, and this is all detailed in the MSHA plan the coal companies must file. Part of MSHA's responsibility is approval and inspection of coal slurry
dams. MSHA focuses on the safety aspect of the structures, not their environmental impacts. | <urn:uuid:af4c7880-f258-4680-af24-2539b741bf42> | CC-MAIN-2013-20 | http://auroralights.org/map_project/theme.php?theme=mtr&article=20 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95274 | 2,040 | 3.1875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
BIOL Subject Gateway
As an integrated part of SciVerse, SciVerse Scopus is the world's largest abstract and citation database of peer-reviewed literature and quality web sources with smart tools to track, analyze, and visualize research.
ScienceDirect covers many scientific disciplines including biology, chemistry, and environmental science.
PubMed includes over 15 million citations for biomedical articles back to the 1950s. These citations are from MEDLINE and additional life science journals. PubMed includes links to many sites providing full-text articles, medical and scientific textbooks and other related sources. PubMed also links to other services developed by the National Center for Biotechnology Information (NCBI).
BioMed Central publishes over 200 peer-reviewed open access journals.
Indexes journals in a variety of subjects, including biology, chemistry, and environmental science. Many full-text articles are available.
AGRICOLA (AGRICultural OnLine Access) serves as the catalog and index to the collections of the National Agricultural Library. The records describe publications and resources encompassing all aspects of agriculture and allied disciplines, including animal and veterinary sciences, entomology, plant sciences, forestry, aquaculture and fisheries, farming and farming systems, agricultural economics, extension and education, food and human nutrition, and earth and environmental sciences.
Provides full-text access to core scholarly journals in the Arts and Sciences.
This collection of electronic books covers topics related to environmental science, including environmental chemistry, ecology, environmental toxicology, forestry, sustainability, and more.
GREENR (Global Reference on the Environment, Energy, and Natural Resources) is a database that offers content on the development of emerging green technologies and discusses issues on the environment, sustainability and more.
Reference QH540.4 .E515 2008
This is a free, fully searchable collection of articles written by scholars, professionals, educators, and other experts. The articles are written in non-technical language and will be useful to students, educators, scholars, professionals, as well as to the general public.
Reference QH360.2.O83 2002
Reference QR9.E53 2000
Reference QL7.G7813 2003
Reference QR358 .E53 1999
This two volume encyclopedia describes the most famous scientific concepts, principles, laws, and theories in astronomy, biology, chemistry, geology, mathematics, medicine, meteorology, and physics.
This encyclopedia considers both the professional ethics of science and technology, and the social, ethical, and political issues raised by science and technology.
This encyclopedia covers a wealth of topics on the ethics of health professions, animal research, population control and the environment. | <urn:uuid:06b66e52-989e-4a8b-8494-792b3b8069b3> | CC-MAIN-2013-20 | http://cooklibrary.towson.edu/gateways/page.cfm?dept=BIOL&class=0 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.888246 | 539 | 2.828125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
When spraying pesticides, don't let others get your drift.
"It is bad enough when your drift damages your crops, your lawn or your garden. But when the damage is to your neighbor's field or flowerbeds, then you've got a real problem," says Erdal Ozkan, an Ohio State University Extension agricultural engineer.
Spray drift is one of the more serious problems pesticide applicators have to deal with. Three-fourths of the agriculture-related complaints investigated by the Ohio Department of Agriculture in 2003 involved drift.
"This shows the seriousness of the problem," Ozkan says. "Drift will be even a bigger problem in the future since there is an increase in acreage of genetically modified crops and use of non-selective herbicides for weed control. Even a small amount of these non-selective herbicides can cause serious damage on the crop nearby that is not genetically modified."
Drift is the movement of a pesticide through air, during or after application, to a site other than the intended site of application. It not only wastes expensive pesticides and damages non-target crops nearby, but also poses a serious health risk to people living in areas where drift is occurring.
"Eliminating drift completely is impossible," Ozkan says. "However, it can be reduced to a minimum if chemicals are applied with good judgment and proper selection and operation of application equipment."
Major factors influencing drift include spray characteristics, equipment/application techniques, weather conditions and operator skill and care.
"Conscientious sprayer operators rarely get into drift problems. They understand the factors that influence drift and do everything possible to avoid them," Ozkan said.
Spraying under excessive wind conditions is the most common factor affecting drift. "The best thing to do is not to spray under windy conditions. If you don't already have one, get yourself a reliable wind speed meter as soon as possible. Only then can you find out how high the wind speed is," Ozkan says.
After wind speed, spray droplet size is the most important factor affecting drift. Research has shown that there is a rapid decrease in the drift potential of droplets whose diameters are greater than approximately 200 microns – or about twice the thickness of a human hair.
"If operators of sprayers pay attention to wind direction and velocity, and have knowledge of droplet sizes produced by different nozzles, drift can be minimized," Ozkan says. "The ideal situation is to spray droplets that are all the same size, and larger than 200 microns. Unfortunately, with the nozzles we use today, this is not an option. They produce droplets varying from just a few microns to more than 1,000 microns. The goal is to choose and operate nozzles that produce relatively fewer of the drift-prone droplets."
Using low-drift nozzles is one of the many options available to growers to reduce drift.
Following are other drift-reduction strategies to keep drift under control:
- Use nozzles that produce coarser droplets when applying pesticides on targets that do not require small, uniformly distributed droplets, such as systemic products, preplant soil incorporated applications and fertilizer applications.
- Keep spray volume up and use nozzles with larger orifices.
- Follow recent changes in equipment and technology, such as shields and air-assisted and electrostatic sprayers that are developed with drift reduction in mind.
- Keep the boom closer to the spray target. Nozzles with a wider spray angle will allow you to do that.
- Keep spray pressure down and make sure pressure gauges are accurate.
- Follow label recommendations to avoid drift with highly volatile pesticides.
- If you are not using low-drift nozzles, try adding Drift Retardant Adjuvants into your spray mixture.
- Avoid spraying on extremely hot, dry and windy days, especially if sensitive vegetation is nearby. Try spraying during mornings and late afternoons. Although it may not be practical, from the drift reduction perspective, the best time to spray is at night.
- Avoid spraying near sensitive crops that are downwind. Leave a buffer strip of 50-100 feet, and spray the strip later when the wind shifts.
"Good judgment can mean the difference between an efficient, economical application, or one that results in drift, damaging non-target crops and creating environmental pollution," Ozkan says. "The goal of a conscientious pesticide applicator should be to eliminate off-target movement of pesticides, no matter how small it may be." | <urn:uuid:2baa416a-e6aa-4bde-bda2-7e39a26129d2> | CC-MAIN-2013-20 | http://cornandsoybeandigest.com/pesticide-spray-drift-isnt-good-last-droplet | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939131 | 938 | 2.90625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
"Seven, plus or minus two" is a psychological concept. The basic idea is that the human mind can keep track of about seven items at once, or can differentiate between seven or so different (but similar) things.
The phrase comes from the title of a 1956 paper by Harvard professor George A. Miller titled "The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information", which begins:
My problem is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. This number assumes a variety of disguises, being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable. The persistence with which this number plagues me is far more than a random accident. There is, to quote a famous senator, a design behind it, some pattern governing its appearances. Either there really is something unusual about the number or else I am suffering from delusions of persecution.
Miller goes on to present data from a number of experiments which support the idea (by arriving at the number seven). Topics of the experiments he reviewed included "span of immediate memory", "capacity for absolute judgements of the position of a dot on a square", and (my favorite) "capacity for absolute judgements of saltiness".
Tuesday, December 4, 2012
Today in History - Tuesday, Dec. 4, 2012
Today is Tuesday, Dec. 4, the 339th day of 2012. There are 27 days left in the year.
Today's Highlight in History:
On Dec. 4, 1619, a group of settlers from Bristol, England, arrived at Berkeley Hundred in present-day Charles City County, Va., where they held a service thanking God for their safe arrival. (Some suggest this was the true first Thanksgiving in America, ahead of the Pilgrims' arrival in Massachusetts.)
On this date:
In 1619, settlers from Bristol, England, arrived at Berkeley Hundred in present-day Charles City County, Va.
In 1783, Gen. George Washington bade farewell to his Continental Army officers at Fraunces Tavern in New York.
In 1816, James Monroe of Virginia was elected the fifth president of the United States.
In 1912, Medal of Honor recipient Gregory "Pappy" Boyington, the Marine Corps pilot who led the "Black Sheep Squadron" during World War II, was born in Coeur d'Alene, Idaho.
In 1918, President Woodrow Wilson left Washington on a trip to France to attend the Versailles (vehr-SY') Peace Conference.
In 1942, U.S. bombers struck the Italian mainland for the first time in World War II. President Franklin D. Roosevelt ordered the dismantling of the Works Progress Administration, which had been created to provide jobs during the Depression.
In 1965, the United States launched Gemini 7 with Air Force Lt. Col. Frank Borman and Navy Cmdr. James A. Lovell aboard.
In 1978, San Francisco got its first female mayor as City Supervisor Dianne Feinstein (FYN'-styn) was named to replace the assassinated George Moscone (mahs-KOH'-nee).
In 1984, a five-day hijack drama began as four armed men seized a Kuwaiti airliner en route to Pakistan and forced it to land in Tehran, where the hijackers killed American passenger Charles Hegna.
In 1991, Associated Press correspondent Terry Anderson, the longest held of the Western hostages in Lebanon, was released after nearly seven years in captivity. Pan American World Airways ceased operations.
In 1992, President George H.W. Bush ordered American troops to lead a mercy mission to Somalia, threatening military action against warlords and gangs who were blocking food for starving millions.
In 1996, the Mars Pathfinder lifted off from Cape Canaveral and began speeding toward Mars on a 310 million-mile odyssey. (It arrived on Mars in July 1997.)
Ten years ago: United Airlines lost its bid for $1.8 billion in federal loan guarantees, a major setback to the nation's second-largest air carrier in its efforts to avoid bankruptcy. Supreme Court justices heard arguments on whether federal laws intended to combat organized crime and corruption could be used against anti-abortion demonstrators. (The Court later ruled that such laws were improperly used to punish abortion opponents.)
Five years ago: Defending his credibility, President George W. Bush said Iran was dangerous and needed to be squeezed by international pressure despite a U.S. intelligence finding that Tehran had halted its nuclear weapons program four years earlier. The intelligence report on Iran figured in a Democratic debate on National Public Radio as rivals assailed front-runner Hillary Rodham Clinton for voting in favor of a Senate resolution designating Iran's Revolutionary Guard a terrorist organization. Pimp C (Chad Butler), a rapper with the Texas hip-hop group Underground Kingz, was found dead in a hotel room in Los Angeles; he was 33.
One year ago: Prime Minister Vladimir Putin's party hung onto its majority in Russia's parliamentary election, but faced accusations from opponents of rigging the vote. Rafael Nadal recovered from a terrible start and beat Juan Martin del Potro of Argentina 1-6, 6-4, 6-1, 7-6 (0) to give Spain its fifth Davis Cup title. After going more than two years and 26 tournaments without a victory, Tiger Woods won the Chevron World Challenge. Former Hewlett-Packard chairwoman Patricia Dunn, 58, died in Orinda, Calif.
Today's Birthdays: Actress-singer Deanna Durbin is 91. Game show host Wink Martindale is 79. Pop singer Freddy Cannon is 76. Actor-producer Max Baer Jr. is 75. Actress Gemma Jones is 70. Rock musician Bob Mosley (Moby Grape) is 70. Singer-musician Chris Hillman is 68. Musician Terry Woods (The Pogues) is 65. Rock singer Southside Johnny Lyon is 64. Actor Jeff Bridges is 63. Rock musician Gary Rossington (Lynyrd Skynyrd; the Rossington Collins Band) is 61. Actress Patricia Wettig is 61. Actor Tony Todd is 58. Jazz singer Cassandra Wilson is 57. Country musician Brian Prout (Diamond Rio) is 57. Rock musician Bob Griffin (The BoDeans) is 53. Rock singer Vinnie Dombroski (Sponge) is 50. Actress Marisa Tomei is 48. Actress Chelsea Noble is 48. Actor-comedian Fred Armisen is 46. Rapper Jay-Z is 43. Actor Kevin Sussman is 42. Actress-model Tyra Banks is 39. Country singer Lila McCann is 31. Actress Lindsay Felton is 28. Actor Orlando Brown is 25.
Thought for Today: "Many are called but few get up." — Oliver Herford, American author (1863-1935). | <urn:uuid:77e91f28-b327-4347-ae3d-f2934bf24075> | CC-MAIN-2013-20 | http://fosters.com/apps/pbcs.dll/article?AID=/20121204/NEWS17/121129470/-1/CITNEWS0803 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939074 | 1,152 | 2.90625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
- Land is essential for all types of economic activity - every business has a footprint.
- At the height of the 2006 real estate boom, land in the US is estimated to have been worth more than $17 trillion.
- Research presented by @AEI’s Stephen Oliner suggests that land is indeed a high-risk investment.
Land is essential for all types of economic activity. Every business — whether it’s General Motors or the corner grocery store — has a footprint. The same is true for the homes and apartments in which people live.
Land also constitutes a major part of wealth. At the height of the real estate boom in 2006, land in the United States (excluding farmland and land held by the government) is estimated to have been worth more than $17 trillion. This figure represents about 40 percent of the value of commercial real estate and housing in the United States. Of course, much of that wealth dissolved over the next few years as real estate markets crashed. The new research presented in this Letter documents the huge swing in land value over the recent cycle, showing that land is indeed a high-risk investment.
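Those two figures imply a rough total for the underlying real estate; the snippet below simply inverts the 40 percent share, treating the quoted numbers as exact for illustration.

# Back-of-envelope check on the quoted land figures.
land_value = 17.0   # trillions of dollars, 2006 peak estimate
land_share = 0.40   # land's approximate share of real estate value

total_real_estate = land_value / land_share
print(f"Implied total value of housing and commercial real estate: "
      f"${total_real_estate:.1f} trillion")
print(f"Implied value of structures alone: "
      f"${total_real_estate - land_value:.1f} trillion")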
At a Glance
Why Get Tested?
To determine lithium levels in the blood in order to maintain a therapeutic level or to detect lithium toxicity
When to Get Tested?
When beginning treatment with lithium as the dose is adjusted to achieve therapeutic blood levels; at regular intervals to monitor lithium levels; as needed to detect low or toxic concentrations
A blood sample drawn from a vein in your arm
Test Preparation Needed?
None, although the timing of the blood draw relative to your last dose matters (see below)
The Test Sample
What is being tested?
This test measures the amount of lithium in the blood. Lithium is one of the most well-established and widely-used drugs prescribed in the treatment of bipolar disorder. Bipolar disorder is a mental condition that is characterized by alternating periods of depression and mania. These periods may be as short as a few days or weeks or may be months or years long. During a depressive episode, those affected may feel sad, hopeless, worthless, and lose interest in daily activities. They may be fatigued but have trouble sleeping, experience weight loss or gain, have difficulty concentrating, and have thoughts of suicide. During a manic episode, those affected may be euphoric, irritable, have high energy and grandiose ideas, use poor judgment, and participate in risky behaviors. Sometimes affected people will have mixed episodes with aspects of both mania and depression. Bipolar disorder can affect both adults and children.
Lithium is prescribed to even out the moods of a person with bipolar disorder; it is often called a "mood stabilizer" and is sometimes prescribed for people with depression who are not responding well to other medications. It is a relatively slow-acting drug and it may take several weeks to months for lithium to affect a person's mood. Dosages of the drug are adjusted until a steady concentration in the blood that is within therapeutic range is reached. The actual amount of drug that it will take to reach this steady state will vary from person to person and may be affected by a person's age, general state of health, and other medications that they are taking.
Lithium levels are monitored on a regular basis because blood levels must be maintained within a narrow therapeutic range. Too little and the medication will not be effective; too much and symptoms associated with lithium toxicity may develop, such as nausea, vomiting, diarrhea, confusion, and tremors. Extremely high levels can lead to stupor, seizures, and can be fatal.
How is the sample collected for testing?
A blood sample is obtained by inserting a needle into a vein in the arm.
Is any test preparation needed to ensure the quality of the sample?
No test preparation is needed. However, timing of the sample collection may affect results. Generally, lithium blood levels are performed 12-18 hours after the last dose (also known as a "trough" level). Tell the laboratorian who is drawing your blood when you took your last dose so that the results can be interpreted correctly.
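Why the 12-to-18-hour window matters can be sketched with a one-compartment, first-order elimination model. The snippet below is illustrative only: it assumes a 24-hour half-life, a mid-range figure chosen for round numbers (actual lithium half-lives vary widely between patients), and it is not a dosing or interpretation tool.

# Illustrative first-order elimination: why the timing of a lithium
# draw changes the measured level. Assumes a 24 h half-life (a rough
# mid-range figure; real values vary widely between patients).
import math

HALF_LIFE_H = 24.0
K = math.log(2) / HALF_LIFE_H  # elimination rate constant, 1/h

def fraction_remaining(hours_after_dose: float) -> float:
    """Fraction of the post-absorption level left after a delay."""
    return math.exp(-K * hours_after_dose)

for t in (6, 12, 18, 24):
    print(f"{t:>2} h after dose: {fraction_remaining(t):.0%} remains")

On these assumptions, a sample drawn 6 hours after a dose reads noticeably higher than a true 12-to-18-hour trough, which is why the collection time must accompany the result.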
Since the human body has no mechanism to excrete excess iron, it is probably best to refrain from consuming blood-based (heme) iron and taking iron supplements unless prescribed (for example, for pregnant women who are demonstrably anemic). This is because iron pills have been linked to birth complications such as preterm birth and maternal hypertension. Presumably because of iron’s pro-oxidant qualities, it can be a double-edged sword; lowering the iron level of cancer patients has been associated with dropping death rates. The absorption of plant-based (non-heme) iron can be regulated by the body, though, making dark green leafy veggies and legumes such as lentils preferable sources, especially since food is a package deal.
What is OPEC?
OPEC is an acronym for Organization of the Petroleum Exporting Countries. OPEC was formed in 1960 in Baghdad, Iraq with five founding member countries. Currently OPEC is a cartel composed of 11
oil producing countries. Current member countries include: Algeria, Indonesia, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, United Arab Emirates, and Venezuela. OPEC's stated purpose is said to serve three main functions:
- Help stabilize world oil prices
- Ensure oil producers achieve a reasonable rate of return on production
- Ensure a stable supply of crude oil for consumer use.

OPEC has a current price goal of US$27 per barrel of oil.
How much crude oil do the OPEC countries produce?
Collectively these countries hold approximately 77% of known world crude oil reserves.
In terms of daily crude oil production OPEC countries currently produce about 41% (24.2 million barrels per day) of the world's crude oil. The oil exported by the OPEC countries accounts for 55% of all oil traded internationally. OPEC countries also represent about 15% of
total world natural gas production.
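Those production shares can be cross-checked with simple arithmetic; the snippet below treats the quoted percentages as exact, which they are only approximately.

# World totals implied by the quoted OPEC figures (percentages treated
# as exact round numbers for illustration).
opec_output = 24.2  # million barrels per day
opec_share = 0.41   # OPEC's share of world crude production

world_output = opec_output / opec_share
print(f"Implied world crude production: {world_output:.1f} million bbl/day")
print(f"Implied non-OPEC production:    {world_output - opec_output:.1f} million bbl/day")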
How does OPEC set oil prices?
OPEC does not "set" oil prices. OPEC manipulates the free market price of crude oil by setting caps on the oil production of its member countries. Twice each year, ministers from each OPEC country meet
in Vienna, Austria to review the status of the international oil market and to forecast the future oil demands in order to agree upon an appropriate crude oil production level. | <urn:uuid:59656bb5-8c00-49fc-a789-775c1d020d7c> | CC-MAIN-2013-20 | http://oklahomagasprices.com/OPEC_Info.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.952593 | 311 | 3.171875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Medications, Attention Deficit Disorder, and Hepatitis
Medication in School
School personnel may assist a student to manage prescription and non-prescription medication only under the directions of a physician. Prescription medication will be accepted only in the container properly labeled by the pharmacist. This label will serve as the physician’s written instructions. The parent must fill out medication consent forms, available in the front office. Students may carry a 1-day supply of non-prescription medication with them, as long as they also carry a note from the parent specifying the name of the medication and dose to be taken. All medication requested to be administered by school personnel must be checked in with school personnel and kept in a locked cupboard. The student may carry emergency medication/inhalers with parent and physician written instruction. School personnel will accept changes in medication dosages only with the new properly labeled pharmacy container reflecting the dosage and/or time changes. Parents are responsible for transportation of medication to and from school. Parents are responsible for refilling the school’s supply of medication and keeping track of that supply. Parents are responsible for the preparation of all tablets (e.g., halving tablets). Parents are responsible for picking up all unused medication at the end of the school year.
ADHD / ADD and Medications
An increasing number of students are being diagnosed as having an attention deficit disorder and are being placed on medication by their physicians. There is still much to learn about attention disorders, and controversy about the appropriateness of prescribing medication. Our role as educators is to provide instruction, make reasonable accommodations, observe behavior and provide feedback to parents and physicians when asked. We have the responsibility of cooperating with a physician and parent when a child is placed on medication, following the procedures outlined in School Board Policy 5665, Administering Medication in School (see Appendix A). We do not have the training or authority to prescribe medication. Consult with district nurses, psychologists, or social workers if you have questions about ADHD / ADD.
Due to the continuing high local incidence rate of Hepatitis B, we will again follow the recommendations of the Lane County Health Department in restricting the use of home prepared foods for use at school sponsored events. Food prepared by the cafeteria staff, pre-packaged or “store bought” items, and food cleaned and cooked at school under staff supervision are acceptable to share at school. This restriction applies specifically to all school functions that include students, parents, or other members of the community. While the restrictions do not apply to events that are organized by the staff for each other, it is advisable to consider applying the same rules. Please continue to insist that students wash their hands before eating or handling food, especially after using the restroom. | <urn:uuid:5b2892de-7757-4250-9f7d-974a2eb403c6> | CC-MAIN-2013-20 | http://schools.4j.lane.edu/kelly/pages-new/resources/handbookpages/meds-hep-add.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948732 | 565 | 2.546875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Better pollution control technology needed to cut VOC emissions
By Summit Voice
FRISCO — Ongoing studies of winter ozone formation in the Uinta Basin shows the need for better pollution control technology on oil and gas drilling rigs and other equipment used for fossil fuel development.
An emissions inventory developed for the study found that oil and gas operations are responsible for 98-99 percent of the volatile organic compounds (VOCs), and for 57-61 percent of the nitrogen oxide emissions. VOCs and nitrogen compounds are the key ingredients for ozone-laced smog, which has been clearly identified as a human health threat.
The collaborative study, led by University of Utah scientists, was aimed at better understanding winter ozone formation, and the researchers found that snow-covered ground and specific atmospheric conditions are the key factors for ozone formation.
Curtailing industrial operations during certain weather patterns could be one way to reduce the formation of ozone, but that might prove costly for companies working with leased drilling gear, officials said during a press conference this week.
Before developing a comprehensive mitigation strategy, researchers want to develop more accurate weather and photochemical models to accurately simulate winter ozone formation. Only then will they know which mitigation strategies are most effective.
But in general, the study team said VOC controls hold the most promise for effectively reducing ozone production and would have other health benefits, considering that cancer-causing substances like benzene and toluene are health threats in their own right.
The study was partly supported by the Western Energy Alliance with funding from several fossil fuel development companies adding up to $2.125 million.
“Ironically, after gathering a very impressive research team and deploying them into the basin with a vast array of scientific instruments, there were no high ozone occurrences in 2012,” said Kathleen Sgamma, Vice President of Government & Public Affairs. “The weather conditions necessary for ozone formation did not exist last winter, but as a result, the scientists were able to gather extensive baseline data,” she said.
“Industry remains committed to protecting air quality while continuing to develop domestic energy in the West, and proud to be a part of this scientific endeavor,” Sgamma said.
The final 2011-2012 Study report and the 2012/2013 plan are online at http://www.deq.utah.gov/locations/uintahbasin/index.htm. | <urn:uuid:38f15415-aaee-4594-a4bd-f8dc08776193> | CC-MAIN-2013-20 | http://summitcountyvoice.com/2013/02/20/fossil-fuel-drilling-fingered-in-uinta-basin-ozone-formation/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950677 | 494 | 2.671875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
exactly located (exactlyLocated)
The actual, minimal location of an Object. This is a subrelation of the more general Predicate partlyLocated.
SUMO / BASE-ONTOLOGY
Related WordNet synsets
- the precise location of something; a spatially limited location; "she walked to a point where she could survey the whole street"
If obj is partly located in region, then there is some subobj such that subobj is a part of obj and subobj is exactly located in region.

(=>
  (partlyLocated ?OBJ ?REGION)
  (exists (?SUBOBJ)
    (and
      (part ?SUBOBJ ?OBJ)
      (exactlyLocated ?SUBOBJ ?REGION))))
If obj is exactly located in region, then there is no otherobj such that otherobj is exactly located in region and otherobj is not equal to obj.

(=>
  (exactlyLocated ?OBJ ?REGION)
  (not
    (exists (?OTHEROBJ)
      (and
        (exactlyLocated ?OTHEROBJ ?REGION)
        (not
          (equal ?OTHEROBJ ?OBJ))))))
"thing ki jagah time tha" is equal to region agar hai thing is exactly located in region during time.
(WhereFn ?THING ?TIME)
(exactlyLocated ?THING ?REGION))) | <urn:uuid:c9bd6a3e-3426-45f4-8eec-ef2af2ae747f> | CC-MAIN-2013-20 | http://virtual.cvut.cz/kifb/hindi/concepts/exactly_located.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.895518 | 279 | 3.359375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
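To make the axioms concrete, here is a small Python sketch of my own (not part of SUMO or any KIF tool) that enforces the second axiom's uniqueness constraint, letting a region be the exact location of at most one object, and derives partlyLocated from a part being exactly located somewhere.

# Toy model of exactlyLocated (illustrative, not SUMO tooling): a region
# may be the exact location of at most one object, per the axiom above.
class World:
    def __init__(self):
        self.exact = {}  # region -> the single object exactly located there
        self.parts = {}  # object -> set of its parts

    def add_part(self, whole, part):
        self.parts.setdefault(whole, set()).add(part)

    def set_exactly_located(self, obj, region):
        occupant = self.exact.get(region)
        if occupant is not None and occupant != obj:
            raise ValueError(f"{region!r} is already the exact location of {occupant!r}")
        self.exact[region] = obj

    def partly_located(self, obj, region):
        # Per the first axiom: some part of obj (or obj itself) is
        # exactly located in region.
        candidates = {obj} | self.parts.get(obj, set())
        return any(self.exact.get(region) == c for c in candidates)

w = World()
w.add_part("car", "engine")
w.set_exactly_located("engine", "engine_bay")
print(w.partly_located("car", "engine_bay"))  # True
try:
    w.set_exactly_located("toolbox", "engine_bay")
except ValueError as err:
    print(err)  # the region is already exactly occupied by the engine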
Reishiki - the external expression of respect
In an Iaido dojo you will practice with a wooden sword (Bokken), a training sword (Iaito), or even a real Japanese sword with a cutting blade (Katana). There will be numerous people practicing, all in one room. Following the rules of etiquette ensures that no one gets injured. Following the rules of etiquette also enhances practice in other ways. The teacher can more quickly determine skill levels when students line up in the order of rank. The ceremonial bowing serves as a concentration and focusing point; when bowing, practitioners show respect for others. Maintaining observant silence allows students to focus their attention and practice reading body language. Cleaning the dojo after practice leaves it ready for the next group.
Always remember, reishiki comes from the heart; without sincere respect it will be only an empty gesture.
- Be on time.
- Do not make class wait.
- Finger and toe nails must be cut short and all jewelry removed.
- Remove shoes before entering.
- A sword should be untied and held in the right hand.
- Step directly into the dojo.
- Do not block doorway.
- Stop and bow to Shinzen.
- Avoid drawing or pointing a sword toward Shinzen.
- Before practice, be sure your sword is in proper shape.
- Check the Mekugi.
- Place it at Shimoza (opposite side of room from shinzen) with the Ha to the wall.
- Never touch a sword without the owner's permission.
- Do not knock or step over any sword.
- The floor must be cleared and swept.
- Leave the Dojo ready for those who practice after you.
- Eating, drinking, and smoking are not allowed on the Dojo floor.
- When on the practice floor do not have private conversations other than iaido related subjects.
- Tell the teacher of any injuries or problems, or of having to leave early.
- Do not leave without permission.
- Do not speak when teacher is speaking.
- Thank the teacher.
- Show respect to other iaidoka (students)
- Do not draw directly towards others.
- Do not do anything that may distract or injure a fellow practitioner or spectator. | <urn:uuid:566c0f69-f932-4d46-abcd-00f058076cf6> | CC-MAIN-2013-20 | http://www.artofiaido.com/etiquette | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.853047 | 483 | 2.796875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
A number of federal laws and ordinances protect U.S. employees from discrimination in the workplace. These laws are enforced by the U.S. Equal Employment Opportunity Commission (EEOC). The EEOC is the main entity responsible for upholding and designating all employment laws in the United States, including federal job discrimination.
Here's a look at federal job discrimination laws.
Civil Rights Act of 1964 (Title VII). This act protects employees from job discrimination on the basis of race, color, religion, sex, or national origin. All aspects of employment are covered, including hiring, firing, promotion, wages, recruitment, training, and any other terms of employment.
Equal Pay Act of 1963. This act ensures that employees receive the same pay, benefits, and opportunities as those employees of the opposite sex who perform the same work in the same establishment.
Age Discrimination in Employment Act of 1967. This act protects workers who are 40 years of age or older from job discrimination that favors younger workers.
Title I and Title V of the Americans with Disabilities Act of 1990.
This act protects qualified workers with disabilities from job discrimination in the private and state and municipal sectors.
Sections 501 and 505 of the Rehabilitation Act of 1973. This act protects qualified workers with disabilities who work for the federal government from job discrimination.
Civil Rights Act of 1991. This act clarifies some of the ambiguous sections of Title VII, and provides monetary compensation for victims of federal job discrimination.
If you think you are a victim of job discrimination
If you think you are a victim of job discrimination under one of these federal laws, you can file a discrimination charge with the EEOC. In addition, a charge may be filed on your behalf by another person to protect your identity. You can file a charge by mail or in person at the nearest EEOC office. Importantly, you generally must file a charge with the EEOC within 180 days of the alleged discrimination (extended to 300 days in states with their own anti-discrimination agencies) before a private lawsuit can be filed. You must provide the following information in order to file a charge with the EEOC:
- The complaining party's name, address, and telephone number
- The name, address, and telephone number of the claim's respondent
- The date and a short description of the alleged discrimination
The EEOC will then investigate the claim. It will either dismiss the case, attempt to settle the case, bring the case to federal court, or issue the charging party a "right to sue," which allows the party to seek private counsel and bring suit upon the employer directly.
Other job discrimination laws and agencies
In addition to federal laws, many states and municipalities have their own laws that protect employees against discrimination. Workers and applicants who feel they are being discriminated against in regard to sexual orientation, parental status, marital status, political affiliation, and any other personal choice that does not affect their ability to do their job can research local and state ordinances to see whether they have legislative protection.
On March 12, 2002 the first results from the 2001 Census - population and dwelling counts - were released by Statistics Canada. Information on other characteristics of the B.C. population such as age, ethnicity, education, income, etc. will be released over the next two years.
British Columbia was the third fastest growing province in Canada, increasing 4.9% between 1996 and 2001. On May 15, 2001, the population of B.C. was counted as 3,907,738, compared with 3,724,500 in May 1996. B.C.'s population growth was slightly stronger than the national rate of 4.0%. In the previous five year period, B.C.'s population had increased 13.5%, more than double the 5.7% increase in the Canadian population. Between 1996 and 2001, Alberta (10.3%) and Ontario (6.1%) had the strongest population growth among the provinces. Nunavut's population grew by 8.1%.
Fewer than half (12 out of 28) of the regional districts in the province experienced population growth between 1996 and 2001. The regions that grew were concentrated in the southwest mainland, eastern Vancouver Island and Okanagan areas. Squamish-Lillooet (12.3%), Greater Vancouver (8.5%), Central Okanagan (8.2%) and Fraser Valley (6.8%) regional districts registered the strongest growth. On Vancouver Island, most of the growth occurred in the Nanaimo (4.3%) and Capital (2.4%) regional districts. The northern and Kootenay regions registered population declines over the 5-year period, with the largest decreases in Skeena-Queen Charlotte (-12.5%) and Mount Waddington (-10.2%).
Among large municipalities (those with populations of more than 100,000), the strongest growth in the 1996-2001 period was posted in Surrey (14%), followed by Coquitlam (11%) and Richmond (10%). Among smaller municipalities (those with populations of more than 5,000), Whistler had the strongest growth (24%), although the small neighbouring community of Pemberton had even stronger growth (91%).
Top Municipalities (> 5,000 people) in terms of growth from 1996 to 2001
| Municipality | 2001 Population | % Change |
| Whistler | 8,896 | 24.0% |
| Surrey | 347,825 | 14.2% |
| Port Moody | 23,816 | 14.2% |
| View Royal | 7,271 | 12.9% |
| Maple Ridge | 63,169 | 12.5% |
| Coquitlam | 112,890 | 10.9% |
| New Westminster | 54,656 | 10.8% |
| Richmond | 164,345 | 10.4% |
| Port Coquitlam | 51,257 | 9.8% |
| Abbotsford | 115,463 | 9.6% |
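As a quick consistency check on the table, the 1996 populations can be approximately recovered by dividing each 2001 count by one plus the growth rate. The rounding of the published percentages means the results are only accurate to within a few hundred people.

# Approximate 1996 populations implied by the table above (the published
# growth rates are rounded, so expect small discrepancies).
rows = [
    ("Whistler", 8_896, 24.0),
    ("Surrey", 347_825, 14.2),
    ("Port Moody", 23_816, 14.2),
    ("Richmond", 164_345, 10.4),
    ("Abbotsford", 115_463, 9.6),
]

for name, pop_2001, pct_change in rows:
    pop_1996 = pop_2001 / (1 + pct_change / 100)
    print(f"{name:<12} 2001: {pop_2001:>8,}   implied 1996: {round(pop_1996):>8,}")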
Urban and Rural Population
Between 1996 and 2001, the population has become more urbanised with 85% of the provincial population now living in urban areas, up from 82% in 1996 and 80% in 1991.
Characteristics of Population Growth
Although information on the characteristics of the population growth between 1996 and 2001 is not yet available from the 2001 Census, current population estimates provide insight into some aspects of the growth. About two thirds of the population growth between 1996 and 2001 was due to migration with natural increase (births minus deaths) accounting for the rest. The growth due to migration was entirely from international sources, as a large number of people left B.C. for Alberta and only small numbers arrived from other parts of Canada. Between 1991 and 1996, a similar number of people had arrived from international sources but there had also been almost as large a net inflow from other parts of the country. More than three quarters (77%) of the immigrants to B.C. over the 1996-2001 period were from Asian countries, followed by European sources (12%) and North and Central America (4%). | <urn:uuid:2c387e3b-0186-4c0f-94d1-fd0fbf5014cc> | CC-MAIN-2013-20 | http://www.bcstats.gov.bc.ca/StatisticsBySubject/Census/2001Census/PopulationHousing/Highlights.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.951196 | 851 | 2.796875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
The Walking Liberty half dollar has won many praises and criticisms in its time. Adolph Weinman's Walking Liberty design was more than an attempt to beautify the half dollar. It represented a concerted effort to revitalize the denomination and to get half dollars back into circulation again. The Mint was able to churn out plenty of Walking Liberty half dollars in the design's first year, but the first year's mintage couldn't compare to the numbers that were minted in the 1940s.
Adolph Weinman was better known as a sculptor and medal designer. As such, he won the competition to design the new half dollar. The Mint began producing the new Walking Liberty design in November 1916, but it was January 2, 1917, before any of the 1916-dated half dollars entered circulation.
The new half dollar's debut soon brought many praises and some criticisms. The January 23, 1917, issue of the Elyria, Ohio, Evening Telegram is quoted as stating the Walking Liberty half dollar was more "elaborate" than the old Barber half dollar, and that both half dollars shared one thing in common: both seemed to have been inspired by some French coin designs.
For what ever reason, Weinman managed to work the American flag into the Walking Liberty half dollar design, which does seem to set it apart and gave it a more national character than other coin designs. Weinman had his own comments on the symbolism in his design:
“The design of the Half dollar bears a full-length figure of Liberty, the folds of the Stars and Stripes flying to the breeze as a background. Progressing in full stride toward the dawn of a new day, carrying branches of laurel and oak, symbolic of civil and military glory. The hand of the figure is outstretched in bestowal of the spirit of liberty.”
“The reverse of the half dollar shows an eagle perched high upon a mountain craig, his wings unfolded, fearless in spirit, and conscious of his power. Springing from a rift in the rock is a sapling of Mountain Pine, symbolic of America.”
Many bird experts were amused at the design of the eagle displayed on the half dollar. It was quite unlike any other eagle pictured on other U.S. coins. One leading ornithologist remarked the eagle looked like a “turkey.”
Very little was said about the branch of Mountain Pine. It did add a very dramatic touch to the design and is probably the coin’s most distinctive feature. The Walking Liberty is definitely the most distinctive half dollar created. In time the Walking Liberty half dollar gave way to the Franklin half dollar in 1948. | <urn:uuid:317df08c-8a0f-44fa-958d-481d95ef3107> | CC-MAIN-2013-20 | http://www.bellaonline.com/ArticlesP/art171311.asp | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.971752 | 549 | 3.359375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Lodi is a town in Lombardy, Italy, on the right shore of the river Adda. It is the capital of the province of Lodi.
The commune has an area of 41.42 sq km; population (2001) 40,805. Its name is pronounced by Italians as LAW-dee.
It was a Celtic village that in Roman times was called in Latin Laus Pompeia (probably in honor of the consul Gnaeus Pompeius Strabo) and was also known because its position allowed many Gauls of Gallia Cisalpina to obtain Roman citizenship. It was in an important position at the crossing of vital Roman roads.
It became a Catholic diocese, and its first bishop, Saint Bassiano (319-409), is the patron saint of the town (celebrated on January 19).
A free Comune (municipality) around 1000, it fiercely resisted the Milanese, who destroyed it on April 24, 1158. Frederick Barbarossa rebuilt it on its current location.
Starting in 1220, the Lodigiani (inhabitants of Lodi) spent several decades on an important work of hydraulic engineering: a system of miles and miles of artificial rivers and channels (called the Consorzio di Muzza) was created to bring water to the countryside, turning some arid areas into what is still one of the most important agricultural areas of the region.
Lodi was ruled by the Visconti family, who built a castle.
In 1413, the antipope John XXIII, from Lodi's Duomo, issued the bull (bolla) by which he convened the Council of Constance (which ended the Great Schism).
In 1454 representatives from all the regional states of Italy met in Lodi to sign the treaty known as the Peace of Lodi, by which they intended to secure a stable balance among the Italian states, but this peace lasted only 40 years.
The town was then ruled in turn by the Sforza family, France, Spain, and Austria. In 1786 it became the eponymous capital of a province that included Crema.
On May 10, 1796, at the Battle of Lodi, the young Corsican general Napoleon Bonaparte won his first important battle on the river Adda, defeating the Austrians and later entering Milan. This is why many towns have streets dedicated to the famous bridge (for instance, the Rue du Pont de Lodi in the 6th arrondissement of Paris).
In 1945, the Italian petrol company Agip, directed by Enrico Mattei, started extracting methane from its fields, and Lodi was the first Italian town with a regular domestic gas service. | <urn:uuid:c3de8023-c4eb-43a2-b824-c31a8b54402c> | CC-MAIN-2013-20 | http://www.biologydaily.com/biology/Lodi%2C_Italy | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.974739 | 576 | 2.671875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
stress
stress, in physical sciences and engineering, force per unit area within materials that arises from externally applied forces, uneven heating, or permanent deformation and that permits an accurate description and prediction of elastic, plastic, and fluid behaviour. A stress is expressed as a quotient of a force divided by an area.
There are many kinds of stress. Normal stress arises from forces that are perpendicular to a cross-sectional area of the material, whereas shear stress arises from forces that are parallel to, and lie in, the plane of the cross-sectional area. If a bar having a cross-sectional area of 4 square inches (26 square cm) is pulled lengthwise by a force of 40,000 pounds (180,000 newtons) at each end, the normal stress within the bar is equal to 40,000 pounds divided by 4 square inches, or 10,000 pounds per square inch (psi; 7,000 newtons per square cm). This specific normal stress that results from tension is called tensile stress. If the two forces are reversed, so as to compress the bar along its length, the normal stress is called compressive stress. If the forces are everywhere perpendicular to all surfaces of a material, as in the case of an object immersed in a fluid that may be compressed itself, the normal stress is called hydrostatic pressure, or simply pressure. The stress beneath the Earth’s surface that compresses rock bodies to great densities is called lithostatic pressure.
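The bar example reduces to the defining relation, stress equals force divided by area; the short sketch below merely restates the article's own numbers in both unit systems.

# Normal (tensile) stress for the bar example: stress = force / area.
force_lb = 40_000.0  # axial force, pounds
area_in2 = 4.0       # cross-sectional area, square inches
print(f"Tensile stress: {force_lb / area_in2:,.0f} psi")  # 10,000 psi

# The same calculation in the metric units quoted in the article.
force_n = 180_000.0  # ~40,000 lb in newtons
area_cm2 = 26.0      # ~4 in^2 in square centimeters
print(f"Tensile stress: {force_n / area_cm2:,.0f} N/cm^2")  # ~6,900 N/cm^2

(The article's figure of 7,000 newtons per square centimeter reflects rounding of the converted force and area.)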
Shear stress in solids results from actions such as twisting a metal bar about a longitudinal axis as in tightening a screw. Shear stress in fluids results from actions such as the flow of liquids and gases through pipes, the sliding of a metal surface over a liquid lubricant, and the passage of an airplane through air. Shear stresses, however small, applied to true fluids produce continuous deformation or flow as layers of the fluid move over each other at different velocities like individual cards in a deck of cards that is spread. For shear stress, see also shear modulus.
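Shear stress is the same quotient, taken with the force component parallel to the plane rather than perpendicular to it. A hedged sketch with illustrative values (these numbers are not from the article):

```python
def shear_stress(parallel_force, area):
    """Shear stress = force parallel to the plane / area of that plane."""
    return parallel_force / area

# Example: a 2,000 lb force sliding across a 4-square-inch glued joint.
tau_psi = shear_stress(2_000.0, 4.0)  # -> 500 psi of shear
print(f"{tau_psi:.0f} psi of shear stress")
```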
Reaction to stresses within elastic solids causes them to return to their original shape when the applied forces are removed. Yield stress, marking the transition from elastic to plastic behaviour, is the minimum stress at which a solid will undergo permanent deformation or plastic flow without a significant increase in the load or external force. The Earth shows an elastic response to the stresses caused by earthquakes in the way it propagates seismic waves, whereas it undergoes plastic deformation beneath the surface under great lithostatic pressure.
This multimedia lesson for grades 7-10 explores the physical forces that act in concert to create snowflakes. Students build an apparatus that recreates conditions similar to a winter cloud and produce their own snow crystals indoors. By watching the crystals grow, they learn how snowflake size and shape are determined by the forces that act on water molecules at the atomic and molecular levels. Digital models and snowflake photo galleries round out a cohesive package that helps kids visualize what's happening at the molecular scale.
Editor's Note: This lab activity calls for dry ice. See Related Materials for a link to the NOAA's "Dry Ice Safety" Guidelines, and for a link to snow crystal images produced by an electron microscope.
Lewis structures, VSEPR, condensation, covalent bond, crystals, electron sharing, ice, physics of snowflakes, snow formation, valence electrons, valence shell
Metadata instance created January 2, 2013, by Caroline Hall
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4B. The Earth
6-8: 4B/M15. The atmosphere is a mixture of nitrogen, oxygen, and trace amounts of water vapor, carbon dioxide, and other gases.
4D. The Structure of Matter
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
6-8: 4D/M1cd. Atoms may link together in well-defined molecules, or may be packed together in crystal patterns. Different arrangements of atoms into groups compose all substances and determine the characteristic properties of substances.
6-8: 4D/M3cd. In solids, the atoms or molecules are closely locked in position and can only vibrate. In liquids, they have higher energy, are more loosely connected, and can slide past one another; some molecules may get enough energy to escape into a gas. In gases, the atoms or molecules have still more energy and are free of one another except during occasional collisions.
9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
9-12: 4D/H7a. Atoms often join with one another in various combinations in distinct molecules or in repeating three-dimensional crystal patterns.
12. Habits of Mind
12C. Manipulation and Observation
6-8: 12C/M3. Make accurate measurements of length, volume, weight, elapsed time, rates, and temperature by using appropriate devices.
Citation: WGBH Educational Foundation. Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes? Boston: WGBH Educational Foundation, 2010. http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/
From The Collaborative International Dictionary of English v.0.48:
Nectarine \Nec`tar*ine"\ (n[e^]k`t[~e]r*[=e]n"), n. [Cf. F. nectarine. See Nectar.] (Bot.) A smooth-skinned variety of peach. [1913 Webster]

Spanish nectarine, the plumlike fruit of the West Indian tree Chrysobalanus Icaco; -- also called cocoa plum. It is made into a sweet conserve which is largely exported from Cuba. [1913 Webster]
(BPT) - The start of the school year is a time of great anticipation for parents and kids alike. New teachers. New classes. New and old friends. It's a time for fun and learning.
Parents expect schools to be safe havens, but the reality is that children face a host of dangers all day long. Bullying, taunting and teasing are only some of the hazards that kids must deal with every day at even the best schools in America.
About 30 percent of middle and high school students say they've been bullied. Among high school students, one out of nine teens reported they had been pushed, shoved, tripped or spit upon during the last school year, according to a National Institute of Child Health and Human Development research study.
FindLaw.com, the nation's leading website for free legal information, offers the following tips on how to keep your children safe at school:
* Talk to your kids about school safety. Talk about bullying and make sure your child understands what is and is not acceptable behavior. Also discuss when and how to report bullying.
* Go to the bus stop. If your schedule allows, go to the bus stop with your child and get to know the other kids and parents, along with the bus driver.
* Get to know your kids' teachers. Send your child's teacher an email to introduce yourself and regularly check in on your child's academic and social progress. Learn how his or her teacher approaches bullying and other issues that may distract from the school's learning environment, such as the use of cell phones and iPods.
* Read the school's policy on bullying. Become familiar with school policies about bullying - particularly the protocols for identifying and reporting bullying behavior. Pay careful attention to policies regarding cyberbullying, which can take place outside of school.
* Watch and listen for the cues. Many kids don't want to reveal to their parents that they're being bullied, taunted or teased by other kids. If your child is withdrawn, not doing homework, sick more often than normal or demonstrating other out-of-the-ordinary behavior, talk about what seems to be bothering him or her.
* Know where your kids are. Sometimes bullying and other unsafe situations take place outside of school grounds, such as at other students' houses. Telling your kids that you want to know where they are and that they need permission to visit a friend's house shows them you care. It also reassures them that they can contact you if they need help.
* Monitor Internet use and texting. Put the home computer in a public place and don't allow your kids to use a computer in their bedroom by themselves.
* Talk to other parents. You may learn that their children also have been bullied or have been involved in activities on and off school grounds that you should be concerned about. You stand a much better chance of obtaining changes and creating a safer environment for your student by acting together rather than alone.
* Put it in writing. If you suspect your child is being bullied or sexually harassed by another student (or by a teacher or staff member), ask for a face-to-face meeting with the school's principal. If the principal does not act, hire an attorney and escalate your complaint to the superintendent and school board. Putting your complaint in writing, describing the specific types of negative behavior affecting your child, is necessary if you later need to litigate the complaint in court.
* Take appropriate action when bullying becomes assault. If your child is physically assaulted on the bus, in school or on school grounds, contact the local police department, particularly if there is a school liaison officer assigned to the school, about whether a police report or assault charges should be filed. Do not wait to let the school handle the situation.
For more information about how to keep your kids safe at school, visit FindLaw.com. | <urn:uuid:be252f6c-849c-43a6-a09e-a67e38971d3f> | CC-MAIN-2013-20 | http://www.cw15.com/ara/education/story/Keeping-your-kids-safe-at-school/9Mj26bIJeEm07R11wJrHbQ.cspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.968937 | 786 | 3.546875 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
The Salmon-Challis National Forest covers over 4.3 million acres in east-central Idaho. Included within the boundaries of the Forest is 1.3 million acres of the Frank Church--River of No Return Wilderness Area, the largest wilderness area in the continental United States. Rugged and remote, this country offers adventure, solitude and breathtaking scenery. Panoramic vistas highlight travel atop the Continental Divide; northwest-southeast trending mountain ranges culminate in the jagged heights of Mount Borah, Idaho's tallest peak. The sagebrush slopes of the forest are covered with a colorful display of wildflowers in the spring.

For over 8,000 years, ancestors of the Shoshone-Bannock people have lived in this region. White settlement began shortly after the Lewis and Clark Expedition traveled through the territory in 1805. Initially fur trappers, then miners, worked this area. The development of Salmon, Challis, and their surrounding communities followed, and by the 1880s they were flourishing. Traces of the past can be found throughout the Salmon-Challis National Forest.

Most roads within the Salmon-Challis National Forest branch off main highways and turn to gravel or dirt surfaces; many are suitable for sedans, while others require 4-wheel-drive vehicles. Recommended travel precautions are to have a full tank of gas and a good spare tire.

Three popular road tours -- the Custer Motorway Loop, the Lewis and Clark Backcountry Byway, and the Salmon River Road -- take visitors through the Salmon River Mountains, to the crest of the Continental Divide, and along the scenic Salmon River. Visitors will discover historic mining towns that share the history of mining life, and can trace the steps of the 1805 expedition that changed the West.

There is abundant wildlife in the Salmon-Challis National Forest. Species include Rocky Mountain bighorn sheep, mountain goats, bald eagles, and river otters, among the other wildlife that call the forest home.

Known as the "white water capital of the world," the Salmon and Middle Fork rivers offer adventures to provide a lifetime of memories. Permit applications for the wild section of the Main Salmon River and for the Middle Fork are available at the North Fork and Middle Fork Ranger Districts.

Nearly 3,292 miles of trails traverse the Salmon-Challis National Forest, almost half of which are located in the Wilderness. Hiking season is generally between April and October, with elevations above 7,500 feet usually clear of snow by July 4. Trails range from moderate to difficult. Many non-wilderness trails are designated for motorized use.

Hunting opportunities for deer, elk, bighorn sheep, moose, mountain goat, black bear, and mountain lion exist on much of the Salmon-Challis National Forest. Opportunities for hunting chukar, grouse, and goose are also available.

Most streams and lakes on the Salmon-Challis National Forest are home to trout. Steelhead average 4-6 pounds, with an occasional one weighing in at 15-20 pounds. Mackay Reservoir, situated on neighboring Bureau of Land Management land, offers good angling for kokanee salmon. Winter anglers may try their skills at Jimmy Smith Lake and Williams Lake, a 30-minute drive from Salmon.

There are a wide variety of opportunities for beginning to advanced downhill skiers and snowboarders within the Salmon-Challis National Forest. Williams Creek Summit offers 22 miles of moderate to difficult cross-country ski trails. Copper Mountain allows visitors to practice their backcountry ski skills. Gentler, groomed trails at Chief Joseph Pass on the Idaho-Montana border provide fun for the whole family. Local snowmobile clubs maintain a number of groomed routes on the Ridge Road to the Stanley-Landmark Snowmobile Trail system.

There are over 40 campgrounds within the Salmon-Challis National Forest, ranging from primitive to developed. Most campgrounds have at least one wheelchair-accessible campsite.
Facilities: Salmon-Challis National Forest provides over 40 campgrounds. Most of the campgrounds have restrooms.
Best Time To Visit: Salmon-Challis National Forest is open year round for a variety of recreational opportunities. Hiking season is generally between April and October, with elevations above 7,500 feet usually clear of snow by July 4. Cross-country skiing and snowmobiling is available during the winter months.
Fees: Parking, camping, and/or entrance fees may be charged at some of the recreation sites within Salmon-Challis National Forest.
Accessibility: Most campgrounds have at least one wheelchair accessible campsite. Williams Lake provides wheelchair accessible spots for both fishing and picnicking.
Rules: Recommended travel precautions are to have a full tank of gas and a good spare tire. Check the local fishing, hunting, and fire regulations. Do not leave campfires unattended. Fireworks and explosives are prohibited in the forests. Pets must always be restrained or on a leash while in developed recreation sites. Obey all traffic signs. State traffic laws apply to the Salmon-Challis National Forest unless otherwise specified.
Directions: Salmon-Challis National Forest covers over 4.3 million acres in east-central Idaho. It can be accessed from Arco, Hailey, Challis, and Salmon.
Reservations: Reservations are not needed or accepted to visit Salmon-Challis National Forest. Reservations may be accepted or required for campgrounds and other recreation sites within the forest.
Salmon-Challis National Forest Supervisor's Office
50 Hwy 93 South
Salmon, Idaho 83467
General: (208) 756-5100
By Terry Kovel
Fresh vegetables were part of the diet of the Victorian household during the warm, growing months. But stored root vegetables and home canned food were used on snowy days.
Advertisers knew that imaginary vegetables acting like humans were as popular a fantasy as fairies, elves, brownies, pixies and gnomes. Few color pictures were available; magazines and newspapers were printed in black and white. But in the 1880s, retail stores advertised with colored trade cards, about 6 inches by 2 1/2 inches, that were saved and often put in scrapbooks.
There were many different anthropomorphic fruit and vegetable cards. Humanized veggies were pictured not only in the U.S. but also in England, Germany, France and Italy. The comic figures with human bodies often had names -- Mr. Prune, The Baldwin Twins (apple heads) or Mr. Pumpkin. And there often was a funny caption, like two strawberry heads asking "What are you doing in my bed?"
The trade cards are not the only place for veggie people. Vegetable people postcards came next, about 1900. Figural salt and pepper shakers, children's books, decorated plates and even small figurines were popular in the early 1900s.
Now that eating fresh food is a national goal, veggie people are being noticed by collectors. And maybe they will encourage the family to eat their fruit and vegetables. Trade cards can be $10 to $25 each, postcards a little less. Many saltshaker sets sell for less than $40.
Q: About 25 years ago, I bought a kitchen table with one leaf and four chairs at a used-furniture store in Connecticut. On one end of the table, there's a label that says "Dinah Cook Furniture" around the image of a black woman wearing a kerchief on her head. Can you tell me when the set was made and who made it?
A: "Dinah Cook Furniture" was a trademark used by the Western Chair Co. of Chicago. The trademark may have been used to appeal to black customers during the great migration of black Americans from the South to Northern cities. If so, the set probably dates from the 1920s or '30s.
Q: I have a 1937 Philadelphia Athletics scorecard that's in mint condition. It's really more like a program, because it's a six-page booklet that's 10 3/4 inches high by 6 5/8 inches wide. The inside of the booklet includes a team photo and roster, a schedule of home games, a list of the pitchers and catchers for all the teams in the American and National leagues, a photo of Chubby Dean with his facsimile autograph, the prices of refreshments and a lot of interesting ads. What is it worth?
A: Reproductions of your scorecard have been made, so the first thing to do is to make sure it's an original. If it's an original, you can try selling it online or to a dealer who sells sports memorabilia. Expect to get about $35-$45 for it. The Philadelphia Athletics, an American League team founded in 1901, became the Kansas City Athletics in 1955, then moved to California in 1968 and became the Oakland Athletics.
Q: I have two small rubber toy motorcycles that belonged to a cousin, born about 1930. One is red with green wheels; the other is green with red wheels. Both have Auburn printed on the rear wheel and a rider who appears to be a policeman. What can you tell me about them?
A: The Auburn Rubber Co. was founded in Auburn, Ind., in 1913. It started out as the Double Fabric Tire Corp., a manufacturer of tires. In the 1920s the company was reorganized and the name changed to the Auburn Rubber Co. Auburn began making rubber toy soldiers in 1935 and eventually became a major producer of rubber toys. Toy soldiers, cars, trucks, airplanes, boats, tractors, building blocks and many other rubber toys were made. The faces and details on the toys were hand painted. The toys were inexpensive and sold in dime stores. Sears, Roebuck catalogs sold a line of Auburn rubber toys under the brand name Happy Time. Toy rubber motorcycles were made in several colors in the 1940s and '50s. Auburn began making vinyl toys in 1954. The company was sold in 1960 and went bankrupt in 1969. Rubber toys can warp or become dry and brittle if they are not stored properly. They should be kept where it is cool. Value of your toy motorcycle, about $25 to $35.
Q: What is the difference between an "antique" and a "collectible"? And what does the word "vintage" mean? I figure you're the expert and can help me understand.
A: Different people, even different experts, define these words differently. Most collectors accept the U.S. Customs Service's 1930 definition of an "antique" as something of value that's 100 or more years old. In 1993 the U.S. Customs Modernization Act added that if the "essential character" of a piece has been changed or more than half of it has been repaired or restored, it's no longer considered an antique. A "collectible" is therefore something of value (to someone) that's less than 100 years old. The term "vintage" is wishy-washy. It's often used to describe clothing your grandmother — or even your mother — wore or furniture in your childhood bedroom. We usually use the word "vintage" to describe something of value that's more than 50 years old and "collectible" to refer to anything under 50. But there are no hard and fast rules.
Q: My two 12-inch ceramic Jim Beam decanters are 1968 election bottles. One is an elephant and the other a donkey. They're both dressed in polka-dot clown costumes. With presidential elections coming up this year, I was wondering if they have any value.
A: The Jim Beam brand of whiskey dates back to the late 1700s. The company started selling special decanters filled with Kentucky Straight Bourbon in 1953. Political bottles, one for each party, were made for the presidential-election years from 1956 to 1988. The bottles were made by Regal China Co. of Chicago. Your 1968 bottles sell today for $10-$25 each. The decanters are not as popular with collectors as they were 30 years ago. The most valuable Beam political decanter is a 1970 elephant bottle made for a Spiro Agnew vice-presidential fundraiser. At one time it was selling for more than $1,000.
Tip: A mirror made from an antique picture frame is worth about half as much as a period mirror in a period frame.
Take advantage of a free listing for your group to announce events or to find antique shows and other events. Go to Kovels.com/calendar to find and plan your antiquing trips.
Terry Kovel answers as many questions as possible through the column. By sending a letter with a question, you give full permission for use in the column or any other Kovel forum. Names, addresses or email addresses will not be published. We cannot guarantee the return of any photograph, but if a stamped envelope is included, we will try. The volume of mail makes personal answers or appraisals impossible. Write to Kovels, (Name of this newspaper), King Features Syndicate, 300 W. 57th St., New York, NY 10019.
Current prices are recorded from antiques shows, flea markets, sales and auctions throughout the United States. Prices vary in different locations because of local economic conditions.
Super Suds detergent box, "Super Suds, Floods o' Suds for Dishes and Duds," blue box and letters on white, Colgate-Palmolive-Peet Co., 1930s, 1 lb. 7 oz., $20.
Carnival glass toothpick holder, Octagon pattern, marigold, curved-in lip, Imperial Glass Co., 2 1/2 inches, $35.
License plate, Illinois, 1947, fiberboard (to save metal for war effort), black, white numbers 1144-114, $50.
Model kit, U.S. Army Patton Tank, Monogram Co., red box, unopened, 1959, $80.
Mortimer Snerd ventriloquist dummy, painted face, crooked mouth and buck teeth, cloth body, vinyl head and hands, pull string moves mouth, Jurn Novelty Co., 1950s, 29 inches, $85.
McCoy jardiniere, Springwood pattern, mint green, white flowers, 6 3/4 x 8 1/2 inches, $125.
Gentleman's smoking jacket, cotton and polyester, red-and-black print, black grosgrain sash belt, collar and cuffs, State O' Maine Co. label, large size, $145.
Sewing bird, brass, cast flowers, leaves and rope edges, c. 1856, 3 1/2 x 2 inches, $155.
Ericsson Bakelite telephone, black, curved mouthpiece, chrome dial, 1950s, 6 x 8 inches, $185.
Thonet bentwood dining chairs, upholstered seats and backs, 1950s, 32 1/2 inches, set of four, $695.
Available now. The best book to own if you want to buy or sell or collect — and if you order now, you'll receive a copy with the author's autograph. The new "Kovels' Antiques & Collectibles Price Guide, 2012," 44th edition, is your most accurate source for current prices. This large-size paperback has more than 2,500 color photographs and 40,000 up-to-date prices for over 775 categories of antiques and collectibles. You'll also find hundreds of factory histories and marks, a report on the record prices of the year, plus helpful sidebars and tips about buying, selling, collecting and preserving your treasures. Available online at Kovelsonlinestore.com; by phone at 800-303-1996; at your bookstore or send $27.95 plus $4.95 postage to Price Book, Box 22900, Beachwood, OH 44122. | <urn:uuid:62dfd259-865a-4262-80fc-edb65154be6a> | CC-MAIN-2013-20 | http://www.fosters.com/apps/pbcs.dll/article?AID=/20120823/GJENTERTAINMENT_01/708239995/0/rss11&CSProduct=fosters | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.965081 | 2,150 | 2.515625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Throughout life there are many times when outside influences change or influence decision-making. The young child has inner motivation to learn and explore, but as he matures, finds outside sources to be a motivating force for development, as well. Along with being a beneficial influence, there are moments when peer pressure can overwhelm a child and lead him down a challenging path. And, peer pressure is a real thing – it is not only observable, but changes the way the brain behaves.
In young people, observational learning plays a part in development through observing and then doing. A child sees another child playing a game in a certain way and having success, so the observing child tries the same behavior. Albert Bandura was a leading researcher in this area: his famous Bobo doll studies found that young children are greatly influenced by observing others' actions. When a child sees something that catches his attention, he retains the information, attempts to reproduce it, and then feels motivated to continue the behavior if it is met with success.
Observational learning and peer pressure are two different things: the first is observing behaviors and then attempting to reproduce them of the child's own free will, while peer pressure is the act of one child coercing another to follow suit. Often the pressured behavior is questionable or taboo, such as smoking cigarettes or drinking alcohol.
Peer Pressure and the Brain
Recent studies find that peer pressure influences the way our brains behave, which leads to better understanding of its impact on the developing child. According to studies from Temple University, peer pressure affects brain signals involved in risk and reward processing, especially when the teen's friends are around. Compared to adults in the study, teenagers were much more likely to take risks they would not normally take on their own when friends were present, and signals in the reward centers of their brains fired most strongly during risky behaviors.
Peer pressure can be difficult for young adults to deal with, and learning ways to say "no" or to avoid pressure-filled situations can become overwhelming. Resisting peer pressure is not just about saying "no," but about how the brain functions. Children who have stronger connections among regions in their frontal lobes, along with other areas of the brain, are better equipped to resist peer pressure. During adolescence, the frontal lobes develop rapidly; axons in the region acquire a coating of fatty myelin, which insulates them and allows the frontal lobes to communicate more effectively with other brain regions. This helps the young adult develop the judgment and self-control needed to resist peer pressure.
Along with the frontal lobes, other studies find that the prefrontal cortex plays a role in how teens respond to peer pressure. As in the previous study, children who were better able to resist peer pressure showed greater connectivity within the brain.
Working through Peer Pressure
The teenage years are exciting years. The young adult is often going through physical changes due to puberty, adjusting to new friends and educational environments, and learning how to make decisions for themselves. Adults can offer a helping and supportive hand to young adults when dealing with peer pressure by considering the following:
Separation: Understanding that this is a time for the child to separate and learn how to be his own individual is important. It is hard to let go and allow the child to make mistakes for himself, especially when you want to offer input or change plans and actions, but allowing the child to go down his own path is important. As an adult, offering a helping hand if things go awry and being there to offer support is beneficial.
Talk it Out: As an adult, take a firm stand on rules and regulations with your child. Although you cannot control whom your child selects as friends, you can take a stand on your control of your child. Setting specific goals, rules, and limits encourages respect and trust, which must be earned in response. Do not be afraid to start talking with your child early about ways to resist peer pressure. Focus on how it will build your child’s confidence when he learns to say “no” at the right time and reassure him that it can be accomplished without feeling guilty or losing self-confidence.
Stay Involved: Keep family dinner as a priority, make time each week for a family meeting or game time, and plan family outings and vacations regularly. Spending quality time with kids models positive behavior and offers lots of opportunities for discussions about what is happening at school and with friends.
If at any time there are concerns that a child is becoming involved in questionable behavior due to peer pressure, ask for help. Involving others -- such as a family doctor, youth advisor, or other trusted adult -- in helping a child cope with peer pressure does not mean the parent is not equipped to help the child properly; when a child may be on the brink of heading down the wrong path, including others can be beneficial.
By Sarah Lipoff. Sarah is an art educator and parent. Visit Sarah’s website here.
Read More → | <urn:uuid:4fafe4c1-2dd0-49fd-8b1b-41d1829f7cdf> | CC-MAIN-2013-20 | http://www.funderstanding.com/category/child-development/brain-child-development/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.963305 | 1,062 | 3.8125 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
Central Arizona Highlands
The Central Arizona Highlands rise between the dramatic desert landscape of the Sonoran Desert to the south and the vast Colorado Plateau to the north. This transition zone of ancient, eroded mountains lies like a sash across Arizona, from Kingman in the northwest all the way down to Safford and the Apache-Sitgreaves National Forest in the southeast and beyond.
Exploring our trails you will discover a microcosm of the greater Central Arizona Highlands region -- ponderosa pine forest, juniper-piñon woodland, chaparral-covered hillsides, a precious shaded creek system, and ancient geologic formations.
This diversity of plant life results in a wonderful richness of wildlife species, offering us all a wealth of opportunities for exploration, discovery, and learning.
Everett -- Thumbnail History
HistoryLink.org Essay 7397
Once called the "City of Smokestacks," Everett has a long association with industry and labor. It began as two Native American settlements on opposite sides of the heavily wooded region, one on the Snohomish River and the other on Port Gardner Bay. Platted in the 1890s and named after the son of an early investor, it soon attracted the attention of East Coast money. Over the next 100 years, Everett would be a formidable lumber-milling and industrial center. In 2005, Everett numbered 96,000 citizens.
The Port Gardner Peninsula is a point of land bound by the Snohomish River on its east flank and northern tip and by Port Gardner Bay on the west. People have inhabited the Everett Peninsula for more than 10,000 years. In recent centuries, Hibulb (or Hebolb), the principal village of the Snohomish tribe stood at the northwest point of the peninsula. Its location near the mouth of the Snohomish River and next to Port Gardner Bay provided both abundant food and transportation. Other villages were located across the waterways. The Snohomish fortified Hibulb with a stockade made of Western red cedar posts to guard against their local enemies, the Makah, Cowichan, Muckleshoot, and the occasional northern raider.
On June 4, 1792, George Vancouver landed on the beach south of the village and claimed the entire area for the King of England. He named the bay Port Gardner for a member of his party. He apparently did not explore the river. After this first contact with the Snohomish, the next 50 years were quiet until traders with the Hudson’s Bay Company on the Columbia River ventured through in 1824. Hudson's Bay Company records show that they explored the Snohomish River. They named it “Sinnahamis.” Its present name “Snohomish” dates from the U.S. Coastal Survey of 1854 when it was charted.
In 1853, Washington Territory was formed. That same year the first white settlers in what would become Snohomish County established a water-powered sawmill on Tulalip Bay across the water from Hibulb. When the Treaty of 1855 created a reservation there for the Snohomish and other regional Indians, the settlers abandoned the operation and turned it over to the tribes. Gradually groups of white men from Port Gamble, Port Ludlow, Utsaladdy, and other Puget Sound points began to show up on the heavily forested peninsula to cut its giant timbers. They set up small logging camps in places reserved for homesteads.
During the Indian wars that erupted in King and Pierce counties after the treaty signings, the Snohomish area remained peaceful. Enterprising men making plans for a military road between Fort Bellingham and Fort Steilacoom in 1859 stimulated the exploration of the Snohomish River and its valleys. A ferry was planned at the spot where the road would cross the river. When Congress stopped funding the project, some of the young men working on the military road stayed there anyway. E. C. Ferguson claimed his own place and named it Snohomish City (1859). He was first to describe the area near present day Everett as full of trees:
“with their long strings of moss hanging from branches, which nearly shut out the sunlight ... At the time the opening at the head of Steamboat Slough was not more than fifty feet wide" (Dilgard and Riddle).
First Settlers on the Peninsula
Dennis Brigham was the first permanent settler in the area that would become Everett. A carpenter from Worcester, Massachusetts, he came in 1861 the same year Snohomish County was organized. He built a cabin on 160 acres along Port Gardner Bay and lived alone. Cut off from his nearest neighbors by the deep forests, he still had enough contact to gain the name of “Dirty Plate Face.”
In 1863, the area saw increased settlement. Erskine D. Kromer, telegraph operator and lineman for the World Telegraph, took a claim just south of Brigham. When the venture ended he settled down with a Coast Salish wife and raised a family. Leander Bagley and H. A. Taylor opened the first store in the area on the point next to Hibulb. Indians pushed out by homesteaders and loggers came by to trade. The store would change ownership several times.
Also in 1863, on the snag-filled Snohomish River, E. D. Smith set up a logging camp at an angled bend in the river. Here the water was deep and an undercutting current kept his log booms against the bank. At the time there were no mills in Snohomish County; logs were rafted downriver and sent to mills around the Sound. Everett's future was foreshadowed when, during that same year, Jacob and David Livingston set up the first steam sawmill in the county near present-day Harbor View Park on the bayside. It was a short-lived venture.
Settlement continued, although one early passerby in 1865 wrote that he saw nothing but woods. The settlers were there. Ezra Hatch claimed land in what would become downtown Everett and George Sines claimed land on the riverside. Together with Kromer, they would hold the most valuable holdings in the future city. There were others: Benjamin Young, George and Perrin Preston, J. L. Clark, and William Shears. They lived in simple log cabins scattered around in the woods, but when Bagley sold his share of the store to J. D. Tullis with the right to lease a portion back for a home and shipyard, Everett industry arrived. In 1886 Tullis built the small sloop Rebecca, which he sailed throughout the area. Eventually, the Prestons bought out all the shares of the store. George and Perrin Preston, with Perrin's Snohomish wife Sye-Dah-bo-Deitz, or Peggy, would give the name Preston Point to the ancient Snohomish center.
Between the 1870 and 1880 censuses the white population in Snohomish County increased from 400 to 1,387, of which a minimal number lived on the peninsula. Neil Spithill and his Snohomish wife Anastasia, the daughter of Chief Bonaparte, settled on the river where the peninsula jutted into it like a left-hand thumb. In 1872, Jacob Livingston filed the first townsite ("Western New York") on Port Gardner Bay not far from his failed sawmill. John Davis settled at Preston Point, where 50 acres were diked, and between the Snohomish River and the sloughs crops of oats, hay, hops, wheat, barley, potatoes, and fruit began to appear. E. D. Smith continued to expand his logging businesses, employing 150 men. The area's first postmaster, Smith platted the town of Lowell in 1872. In 1883, the U.S. government began snag removal and cleared other impediments on the river. With the coming of mechanized lumber and cedar shingle production, several mills located in the area. Smith began construction on his own mill in 1889, the same year Washington became a state.
Booms and Busts
Statehood brought celebration and speculation. Connection to the area via the Seattle and Montana Railway was close at hand, but when James J. Hill announced that his Great Northern Railway would come over the Cascades to Puget Sound, many people thought that meant the railroad would come to the peninsula. There was money to be made.
First came the Rucker brothers, Wyatt and Bethel, and their mother. They bought the old Dennis Brigham homestead property on the bayside in 1890. They built a house and planned to start the townsite of "Port Gardner." Joining them was William Swalwell and his brother Wellington. The Swalwells picked up a large section of the Spithills' claim on the river, covered with a growth of "timber so dense that trees on all sides touched the little cabin" (Roth). Frank Friday, who bought the old Kromer homestead from Kromer's widow, added to the real estate mix. This juxtaposition of bayside and riverside settlements set the layout of the future city streets, though Swalwell's Landing, as it became known, was separated from the bay by "a mile of second-growth timber, impassable underbrush and a marshy area near the center of the peninsula" (Dilgard and Riddle). Things began to heat up when Tacoma lumberman and land speculator Henry Hewitt Jr. (1840-1918) arrived in the spring of 1890 with $400,000 of his own money, dreaming of a great industrial city.
After learning that one of John D. Rockefeller’s associates, Charles L. Colby (1839-1896), was looking for a site for the American Steel Barge Company of which he was president, Hewitt met with him. He convinced him that the peninsula with its river and bay access offered the perfect location for that and other industrial concerns. Impressed, Colby talked it up with friends and relatives. Once they were on board, Hewitt immediately approached the Ruckers, Friday, and Salwell and enticed them to join him. They transferred half of their holdings, nearly 800 acres, to the syndicate backed with the East Coast money of Rockefeller, Colby, and Colgate Hoyt, a director of the Great Northern Railroad. Hewitt also bargained with E. D. Smith for a paper mill.
In November 1890, the group incorporated the Everett Land Company. They made Hewitt president. For a time they met in offices at E. D. Smith’s boarding house in Lowell. By spring of 1891, the peninsula began to hum as land was cleared for a nail factory, the barge works, a paper mill, and smelter. Five hundred men graded, surveyed, and platted the townsite. Hewitt Avenue, one and half mile long and 100 feet wide, was cut from bay side to riverside. The townsite of stumps became Everett, after the son of Charles Colby.
Over the months, the city of Everett saw astonishing growth. Before the Everett Land Company lots went on sale, Swalwell jumped the gun and began selling his own lots on the banks of the Snohomish River in September 1891. He built a large dock for the sternwheel steamer traffic. Dubbed the "cradle of Everett," Swalwell's Landing boomed at the riverside foot of Hewitt, at the intersection of Chestnut and Pacific. The Pacific/Chestnut community was a wild-west town with gambling and prostitution, alongside the offices of the Brown Engineering Company (in charge of platting the townsite), a "Workingman's Grocery," a small shoe store, another grocery store, a tent hotel, a meat market, and a barber shop. The streets were muck-choked, their sidewalks made of thrown-down planks. Farther south at Lowell, Smith built a dock for his new paper mill, already in production.
On the bayside, the Everett Land Company built a long wharf at 14th Street, with a sawmill at its end. They also built an immense warehouse of some 400 feet and a fancy brick hotel, the Monte Cristo, three stories high. By the time the company started selling its residential and commercial property in late 1891, the building frenzy had attracted national attention. "An Army of Men at Work On a Mammoth Establishment," boasted a headline in the newly established Port Gardner News in September 1891.
By the spring of 1892, Everett resembled a city, albeit one full of stumps. There were frame homes, schools, churches (on land provided by the Everett Land Company), and theaters, as well as 5,600 citizens, a third of them foreign-born (mostly English and Scandinavian), enjoying streetcar service, electricity, streetlights, and telephones. The Everett Land Company won a suit to own the waterfront. The promise of riches in the mines in the Cascades spurred the building of the Everett-Monte Cristo railroad from there to a smelter on the peninsula.
In April 1893, Everett incorporated by election. Then came trouble. In May, the Silver Panic caused a national depression that slammed into Everett. Factories closed down. Banks failed. Wages dropped 60 percent. The railroads either failed or faltered. People left in droves. By 1895, Rockefeller started to withdraw his investments. Hewitt was dismissed from the Everett Land Company. Colby took over. The lack of return on fees nearly bankrupted the city government. The streetlights were turned off. Against this background the town of Snohomish fought the struggling city of Everett over which would be the county seat. Everett finally took the claim away in 1897.
A Second Wind
Everett began to recover in 1899 after Rockefeller's Everett Land Company transferred its holdings to James J. Hill's Everett Improvement Company. The railroad magnate saw benefits for his Great Northern Railroad. He sent 42-year-old John McChesney as his representative. Industrial growth improved. Work continued on dredging the river and the bay. Frederick Weyerhaeuser, neighbor of Hill in St. Paul, Minnesota, came to Everett and founded the Weyerhaeuser Timber Company. He built the world’s largest lumber mill which produced 70 million feet by 1912. David A. Clough and Harry Ramwell formed the American Tugboat Company.
By 1903, the Polk Everett City Directory boasted of 10 sawmills, 12 shingle mills, a paper mill, a flouring mill, foundries and machine shops, planing mills, a smelter, an arsenic plant, a refinery, "creosoting" works, a brewery, a sash and door plant, an ice and cold storage plant, and a creamery. Industry employed more than 2,835 men. Telephone subscriptions went from 493 in 1901 to 980, with 23 women employees and eight linemen.
Secret societies as wide ranging as the Elks and the Ancient Order of United Workmen and the Catholic Order of Foresters and the Improved Order of Red Men “meeting at next great camp in the Hunting Grounds of Aberdeen” (Polk) flourished. Times were good.
In 1907, Everett passed a first-class city charter and boomed after the San Francisco earthquake and fire brought huge orders for Northwest lumber. The city's own big fire in 1909 destroyed parts of the city but did not deter future growth. Three years later its population reached three times its 1900 size -- 25,000. Ninety-five manufacturing plants, "including 11 lumber mills, 16 shingle mills and 17 mills producing both" (Shoreline Historical Survey), dominated the area.
Unions also dominated the city, making it one of the most unionized in the country. There were 25 unions in all. Of these, the International Shingle Weavers Union of the American Federation of Labor was the strongest. The work done in the shingle mills was dangerous. The bolter used a circular saw 50 inches in diameter with three-inch teeth; a man pushed the log toward it at waist height with his knee and hands. Men fell or were pulled into the blade. Of the 224 people who died in Everett in 1909, 35 were killed in the mills -- almost one a week. Labor unrest grew and strikes threatened.
In 1916, the shingle weavers' strike culminated in a bloody confrontation at the city dock, when two boatloads of Industrial Workers of the World members sailed up from Seattle to demonstrate support for striking shingle mill workers and free speech. Five workers on the steamer Verona and two deputies on the dock were killed. Some 30 others were wounded. The strike ended not long after. This became known as the Everett Massacre.
During World War I, Everett benefited from the demand for lumber, but for the rest of the twentieth century the city saw many down times as it went through a national depression in 1920, the Great Depression, and problems with continual silting in the river channels.
Always a lumber and industrial town, it began to diversify. A Works Progress Administration project in 1936 created Paine Field on 640 acres of land owned by Merrill Ring Logging and the Pope and Talbot Company eight miles southwest of the city. The airfield established aviation and eventually a military presence in the area. The county matched federal dollars.
During World War II the field became a military base. Its name was changed to Paine Field in honor of Lt. Topliff Olin Paine, pioneer aviator from Everett killed in a 1922 Air Mail Service crash. An Army Air Corps unit moved in and stayed for five years. Runways were improved and fueling capabilities added for certain aircraft types. Alaska Airlines started a presence. The military returned during the Korean War (1950-1953) taking over the control tower, but withdrew in 1968. This opened the way for Boeing Corporation. Already owners of acreage north of the airfield, Boeing built the world’s largest building by volume (472 million cubic feet) for their radically new 747 jetliner.
Construction on Naval Station Everett began in November 1987. In January 1994, Navy personnel moved into the completed Fleet Support and Administration buildings and officially began operations. Currently, Everett is home to three frigates, one nuclear-powered aircraft carrier, one destroyer, and a Coast Guard buoy tender. It is the United States Navy’s most modern base.
In 2005, the city of Everett enjoyed growth and revitalization. During the past 20 years, the downtown area has been upgraded and some of the historic structures have been restored. Restaurants, shops, and parks line the bayside of the city. Industrial parks are planned for riverside. A community college and homes stand around Preston Point. Dennis Brigham and E. D. Smith would both be amazed. Henry Hewitt would say that his dream has gone on.
Don Benry, The Lowell Story, (Everett: Lowell Civil Association, 1985), 18-37; David Dilgard, Margaret Riddle and Kristin Ravetz, A Survey of Everett’s Historical Properties (Everett: Everett Public Library and Department of Planning and Community Development, 1996); David Dilgard and Margaret Riddle, Shoreline Historical Survey Report (Everett: Shoreline Master Plan Committee for City of Everett, 1973), 2-28 and 66-73; David Dilgard, Mill Town Footlights (Everett: Everett Public Library, 2001); Lawrence E. O’Donnell, Everett Past and Present (Everett: K & H Printers, 1993), 2-15; Everett City Directory (Seattle: R. L. Polk, 1893), 47-66; Everett City Directory (Seattle: R. L Polk, 1903), 64; Norman H. Clark, Mill Town (Seattle: University of Washington Press, 1970); History of Snohomish County, Washington Vols. I and 2 ed. by William Whitfield (Chicago: Pioneer Historical Publishing Company, 1926); The History of Skagit and Snohomish Counties, Washington (Interstate Publishing Company, 1906), 253-258 and 314-331; Elof Norman, The Coffee Chased Us Up Monte Cristo Memories (Seattle: Mountaineers, 1977); "Early History of Snohomish River and Vicinity," Everett Herald, January 14, 1936; Snohomish Eye, September 1893-1894; Advertisements, Everett Herald, December 17, 1891; Snohomish Sun, 1891; Everett Herald December 10, 1891 through 1892; "Puget Sound Paper Mill," Port Gardner News, September 11, 1893; "Local News," The Eye, August 22, 1893; Everett Herald, December 10, 1891; The Snohomish Story: From Ox team to Jet Stream (Snohomish: Snohomish Centennial Association, 1959).
Images (captions from the original photo gallery): Hewitt Avenue looking east, Everett (postcard, courtesy Everett Public Library); Swalwell's Landing, site of newly platted Everett, 1891 (photo by Frank La Roche, courtesy Everett Public Library, image no. 1056); birdseye view of the Everett Peninsula, ca. 1893 (courtesy City of Smokestacks); William Weahlub of the Tulalip Reservation smoking salmon and roe on the beach, 1906 (photo by Norman Edson, courtesy UW Special Collections); Great Northern Railway Depot, Everett, 1920s; Clark-Nickerson Lumber Mill, Everett, 1900s; night, downtown Everett, 1920s; Hewitt Avenue and Commerce Block, Everett, 1914; Hewitt Avenue looking east, Everett, 1920s; looking west along Hewitt Avenue across Wetmore, Everett, 1920s (photo by J. A. Juleen, courtesy Everett Public Library, neg. Juleen842); aerial view of Everett, 1950s; Naval Station Everett, 2004 (courtesy U.S. Navy); Everett, September 28, 2005 (HistoryLink.org photos by Priscilla Long).
The Dream Factory
Los Angeles: Making and Protecting the Image
Boosters energetically promoted the city of Los Angeles in the first decades of the twentieth century in attempts to lure tourists, new residents, and investment dollars. Real estate agents focused on the nearly constant warmth of the Southern California climate, and they portrayed attractive city streets, beautiful spacious homes, well-kept gardens, and bountiful citrus farms as the norm in the city and its surrounding areas. The Los Angeles Chamber of Commerce used the phrase "Los Angeles—Nature's Workshop" to promote the city as a place filled with natural beauty that fostered good health.
Ultimately, postcards and booster publications coming out of Los Angeles relied on mass-produced and widely distributed images of a relatively small set of actual neighborhoods, homes, gardens, and orange groves that were deemed "typical" as part of promotional efforts. Events like relatively small outbreaks of smallpox and pneumonic plague in 1924 threatened the image of Los Angeles that had been disseminated throughout the rest of the nation. Boosters had to redouble their efforts to promote the city in their wake.
Ptosis Correction Surgery:
Ptosis is the medical term for drooping of the upper eyelid, a condition that may affect one or both eyes.
The ptosis may be mild - in which the lid partially covers the pupil; or severe - in which the lid completely covers the pupil.
When does Ptosis occur?
Ptosis can occur at any age. When present from birth it is called congenital ptosis; when it appears in the elderly it is called acquired ptosis.
What causes Ptosis?
While the cause of congenital ptosis is often unclear, the most common reason is improper development of the levator muscle, the major muscle responsible for elevating the upper eyelid. In adults, ptosis is generally due to weakening or dehiscence (separation) of the levator muscle. It may also follow injury to the muscle, as after lid injuries and eye surgeries. Rarely, it may be due to myasthenia gravis (a condition of progressive muscle weakness).
Why should Ptosis be treated?
Children with significant ptosis may need to tilt their head back into a chin-up position, lift their eyelid with a finger, or raise their eyebrows in an effort to see from under the drooping eyelid. Children with congenital ptosis may also have amblyopia ("lazy eye"), strabismus or squint (eyes that are not properly aligned or straight), refractive errors, astigmatism, or blurred vision. In addition, the drooping lid may give the face an undesired appearance and make social life difficult. In moderate ptosis, the drooping upper lid blocks part of the upper field of vision.
How is Ptosis treated?
Ptosis is treated surgically, with the specific operation based on the severity of the ptosis and the strength of the levator muscle. If the ptosis is not severe, surgery is generally performed when the child is between 3 and 5 years of age (the "pre-school" years). However, when the ptosis interferes with the child's vision, surgery is performed at an earlier age to allow proper visual development. Ptosis repair is usually performed under general anesthesia in infants and young children and under local anesthesia in adults.
What to expect after surgery ?
Most patients will tolerate the procedure very well and have a rapid recovery. Cold packs may need to be applied to the operated eyelid for the first 48 hours following surgery. Antibiotic ointments applied to the incision are sometimes recommended. The elevation of the eyelid will often be immediately noticeable, though in some cases bruising and swelling will obscure this finding. Most patients will have sutures that need to be removed about a week following surgery. In children, absorbable sutures are often used.
The bruising and swelling associated with the surgery will usually resolve in two to three weeks. Some patients may need adjustment of the sutures to better align the lid height. This may or may not require additional anesthesia or a trip to the operating room.
Hacking Quantum Cryptography Just Got Harder
With quantum encryption, in which a message gets encoded in bits represented by particles in different states, a secret message can remain secure even if the system is compromised by a malicious hacker.
CREDIT: margita | Shutterstock
VANCOUVER, British Columbia — No matter how complex they are, most secret codes turn out to be breakable. Producing the ultimate secure code may require encoding a secret message inside the quantum relationship between atoms, scientists say.
Artur Ekert, director of the Center for Quantum Technologies at the National University of Singapore, presented the new findings here at the annual meeting of the American Association for the Advancement of Science.
Ekert, speaking Saturday (Feb. 18), described how decoders can adjust for a compromised encryption device, as long as they know the degree of compromise.
The subject of subatomic particles is a large step away from the use of papyrus, the ancient writing material employed in the first known cryptographic device. That device, called a scytale, was used in 400 B.C. by Spartan military commanders to send coded messages to one another. The commanders would wrap strips of papyrus around a wooden baton and write the message across the strips so that it could be read only when the strips were wrapped around a baton of matching size.
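In modern terms, the scytale is a simple transposition cipher. A minimal sketch — an illustration of the idea, not anything presented at the meeting — might look like this in Python:

```python
# Sketch of a scytale modeled as a columnar transposition cipher.
# The "circumference" stands in for the baton's diameter: only a reader
# who wraps the strip around a rod of matching size recovers the text.
# Assumes the message length is a multiple of the circumference.

def scytale_encrypt(message: str, circumference: int) -> str:
    # Unwinding the strip reads off every circumference-th character.
    return "".join(message[i::circumference] for i in range(circumference))

def scytale_decrypt(ciphertext: str, circumference: int) -> str:
    # Re-wrapping is the same transposition, using the row count instead.
    rows = len(ciphertext) // circumference
    return scytale_encrypt(ciphertext, rows)

print(scytale_encrypt("ATTACKATDAWN", 3))                      # AAAATCTWTKDN
print(scytale_decrypt(scytale_encrypt("ATTACKATDAWN", 3), 3))  # ATTACKATDAWN
```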
Later, the technique of substitution was developed, in which the entire alphabet would be shifted, say, three characters to the right, so that an "a" would be replaced by "d," and "b" replaced by "e," and so on. Only someone who knew the substitution rule could read the message. Julius Caesar employed such a cipher scheme in the first century B.C.
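The shift cipher is just as easy to sketch; here is the three-character shift described above, again purely illustrative:

```python
# Sketch of the Caesar shift ("a" -> "d" for a shift of 3).
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)   # leave spaces and punctuation alone
    return "".join(out)

print(caesar("attack at dawn", 3))   # dwwdfn dw gdzq
print(caesar("dwwdfn dw gdzq", -3))  # decryption is just the reverse shift
```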
Over time, ciphers became more and more complicated, so that they were harder and harder to crack. Harder, but not impossible.
"When you look at the history of cryptography, you come up with a system, and sooner or later someone else comes up with a way of breaking the system," Ekert said. "You may ask yourself: Is it going to be like this forever? Is there such a thing as the perfect cipher?"
The perfect cipher
The closest thing to a perfect cipher involves what's called a one-time pad.
"You just write your message as a sequence of bits and you then add those bits to a key and obtain a cryptogram," Ekert said."If you take the cryptogram and add it to the key, you get plain text. In fact, one can prove that if the keys are random and as long as the messages, then the system offers perfect security."
In theory, it's a great solution, but in practice, it has been hard to achieve.
"If the keys are as long as the message, then you need a secure way to distribute the key," Ekert said.
The nature of physics known as quantum mechanics seems to offer the best hope of knowing whether a key is secure.
Quantum mechanics says that certain properties of subatomic particles can't be measured without disturbing the particles and changing the outcome. In essence, a particle exists in a state of indecision until a measurement is made, forcing it to choose one state or another. Thus, if someone made a measurement of the particle, it would irrevocably change the particle.
If an encryption key were encoded in bits represented by particles in different states, it would be immediately obvious when a key was not secure because the measurement made to hack the key would have changed the key.
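A toy simulation shows why the tampering is detectable. The sketch below is a classical stand-in for a BB84-style exchange — real systems use photon states, and the details here (basis choices as coin flips, an intercept-and-resend attacker) are textbook assumptions rather than anything specific to Ekert's work. The standard result is that a basis-guessing eavesdropper forces an error rate of about 25 percent in the sifted key:

```python
# Classical stand-in for quantum key distribution: a qubit measured in
# the basis it was prepared in yields its bit; measured in the other
# basis it yields a coin flip. An intercept-and-resend eavesdropper who
# must guess bases therefore corrupts ~25% of the sifted key.
import random

def measure(bit: int, prep_basis: int, meas_basis: int) -> int:
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def sifted_error_rate(trials: int, eavesdrop: bool) -> float:
    errors = kept = 0
    for _ in range(trials):
        a_bit, a_basis = random.randint(0, 1), random.randint(0, 1)
        bit, basis = a_bit, a_basis
        if eavesdrop:                       # Eve guesses a basis and resends
            e_basis = random.randint(0, 1)
            bit, basis = measure(bit, basis, e_basis), e_basis
        b_basis = random.randint(0, 1)
        b_bit = measure(bit, basis, b_basis)
        if b_basis == a_basis:              # keep only matching-basis rounds
            kept += 1
            errors += b_bit != a_bit
    return errors / kept

random.seed(0)
print(f"error rate, no eavesdropper:   {sifted_error_rate(20000, False):.3f}")  # 0.000
print(f"error rate, with eavesdropper: {sifted_error_rate(20000, True):.3f}")   # ~0.250
```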
This, of course, still depends on the two parties sending and receiving the message being able to independently choose what to measure, using a truly random number generator — in other words, exercising free will — and using devices they trust.
But what if a hacker were controlling one of the parties, or tampering with the encryption device?
Ekert and his colleagues showed that even in this case, if the messaging parties still have some free will, their code could remain secure as long as they know to what degree they are compromised.
In other words, a random number generator that is not truly random can still be used to send an undecipherable secret message, as long as the sender knows how random it is and adjusts for that fact.
"Even if they are manipulated, as long as they are not stupid and have a little bit of free will, they can still do it," Ekert said.
Republican Sen. Rob Portman's flip-flop in favor of same-sex marriage is just the latest change of heart on the issue by conservatives.
Even Democrats like President Obama have turned around after opposing it. This change in attitude is just one of many milestones for the movement.
Here are five of the most important turning points in the same-sex marriage debate:
1993: In a landmark case, Hawaii's Supreme Court ruled that the state can't deny same-sex couples the right to marry unless it finds "a compelling reason" to do so. It orders the issue back to the state legislature, which then voted to ban gay marriage. This was one of earliest debates on the issue at the state level, and was a precursor to the legal battles nationwide. Today, domestic partnerships and civil unions for same-sex couples are legal in Hawaii.
1996: President Bill Clinton signed the Defense of Marriage Act, or DOMA, which defines marriage as a legal union between a man and a woman. The law denies federal benefits to same-sex couples in the nine states where gay marriage is legal. Clinton said he signed it because it would have tamped down calls for a constitutional amendment to ban gay marriage. Only 81 out of 535 members of Congress opposed DOMA. Fast-forward seventeen years to March 2013, when Clinton urged the Supreme Court to overturn DOMA. He explained: "As the president who signed the act into law, I have come to believe that DOMA is contrary to those principles and, in fact, incompatible with our Constitution."
2004: President Bush championed a constitutional amendment that would outlaw gay marriage. It was needed, he said, to stop "activist judges" from redefining marriage. The idea found support among Senate conservatives, but its supporters couldn't gather enough votes. By the way, all this unfolded during a contentious presidential campaign. Democratic White House hopefuls Sens. John Kerry and John Edwards opposed the amendment, but they also were against creating a specific law making same-sex marriage legal.
2012: For the first time, voters approved same-sex marriage statewide at the ballot box. Similar measures had been rejected for years. Same-sex couples became free to marry in Maryland, Maine and Washington. Gay rights supporters also scored a smaller victory in Minnesota, where voters rejected a constitutional amendment to ban gay marriage. Interestingly, support for same-sex marriage came from a mixed coalition of voters. Before 2012, six states had already legalized gay marriage -- but via courts and legislatures -- not voters.
2013: For the first time, the Obama administration joined the legal battle against California's 2008 same-sex marriage ban. The Justice Department made it official in February when it filed a brief to the Supreme Court. The Obama administration urged the high court to invalidate the ban. Obama said that if he sat on the Supreme Court, he would vote to strike down Proposition 8. The court document expressed the president's evolution on the issue. In a short time he evolved from a backer of civil unions to a supporter of equality in marriage. Dozens of high-profile Republicans also argued in favor of same-sex marriage, in a court brief. | <urn:uuid:0b166cce-1528-4329-9b87-c0895d7f690d> | CC-MAIN-2013-20 | http://www.localnews8.com/news/politics/5-turning-points-in-gay-marriage-debate/-/308336/19333102/-/15gewle/-/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.965105 | 646 | 2.78125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Definition of Single-blind
Single-blind: Term used to describe a study in which either the investigator or the participant, but not both of them, is unaware of the nature of the treatment the participant is receiving. Also called single-masked.
Last Editorial Review: 6/14/2012
NPP launched Oct. 28, 2011. It is a first step in building the next satellite system to collect data on climate change and weather conditions.
Teachers to put science to the test in a microgravity environment aboard the agency’s reduced gravity aircraft.
01.25.12 - NASA Explorer Schools held a live video chat on Jan. 25, 2012 with Josh Willis who answered questions about sea level rise and global climate change.
01.12.12 - NES held a video webchat Jan 12, 2012 with Dr. Bill Cooke and Rhiannon Blaauw. They answered questions about meteors, meteorites and comets and their potential danger to spacecraft.
12.13.11 - Danielle Margiotta joined NES on Dec. 13, 2011 and answered student questions about how NASA engineers prepare satellites to endure the harsh environment of space.
11.23.11 - NASA Explorer Schools held a video chat on Nov. 23, 2011 with Zareh Gorjian for a look at NASA's computer graphics area.
11.03.11 - On Nov. 3, 2011, NASA's Deputy Director of Planetary Science, Jim Adams, answered student questions about NASA's recent planetary mission discoveries and upcoming launches. Adams discussed his career path and some of the most rewarding moments in his 22-year career with NASA.
10.13.11 - In celebration of Hispanic Heritage Month, Dr. Félix Soto Toro joined NES on Oct. 13, 2011, for our first live bilingual video chat. Students asked questions of this astronaut applicant and electrical engineer and found out what it was like for Soto to grow up in Barrio Amelia Guaynabo, Puerto Rico, with few advantages. They also learned what inspired him to pursue a career with NASA. A video or transcript of this chat will be posted at a later time.
02.17.11 - Being a scientist doesn't always mean spending your days in a lab. For NASA microbial ecologists, going to work might mean climbing aboard a research vessel, or collecting marine and soil samples in the Andes, Mexico, or even in Europe and Africa! Students were able to find out what it's like to hunt microbes around the globe with Angela Detweiler and Dr. Lee Bebout. Chat transcripts are now available.
03.29.11 - The NES project invited all K-12 students to participate in a one-hour-long NASA career panel video webchat on March 29, 2011. This year's panelists were three outstanding women who have chosen to pursue careers in science and engineering. A transcript and/or video will be posted at a later time. | <urn:uuid:dd7f67ea-0969-440f-8e56-c9e1e3d6d1aa> | CC-MAIN-2013-20 | http://www.nasa.gov/offices/education/programs/national/nes2/home/new-promo-coll_archive_5.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939101 | 543 | 2.765625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
The shape-memory alloy actuators might power minimally invasive surgical devices or tiny laptop cameras
Shape-memory alloys that change shape when heated could become tiny mechanical muscles for electronic devices. New mechanical devices based on the alloys produce three to six times more torque than electric motors, and weigh just one-20th as much.
Such devices, known as actuators, can be cut from a flat sheet of metal just a fraction of a millimeter thick. They emerged from a project that aims to build printable robots, where the robots would consist of both the metal actuators and plastic components that could be built layer-by-layer through a process similar to inkjet printing.
The story explains how Huey, Dewey and Louie originally joined the Junior Woodchucks. Years ago, when the boys were still very small, they were up to so much mischief that Donald finally got fed up with it and decided that something must be done. By chance, he ran across a scout group of the Junior Woodchucks, and this inspired him to send his nephews to join the organisation.
At the annual grand jamboree of the Junior Woodchucks, the boys discovered that their own grandmother is the daughter of the organisation's founder. Thus interested, the boys wanted to join the Junior Woodchucks immediately. The chiefs were initially reluctant to accept them, but did so at once upon learning that they were great-great-grandchildren of the original founder. They had never had descendants of their founder before, even though Huey, Dewey and Louie weren't the first ones to try; Donald had also tried to join, but was rejected because of his bad temper.
As novices in the Junior Woodchucks, the boys' first task was to find the remains of the Fort Duck, which was demolished to make room for Scrooge McDuck's Money Bin. The trail led the boys, accompanied by Major Snozzie, to a wood pulp factory owned by Scrooge, where the logs from the fortress were about to be made into pulp. But when the worker responsible for the pulp making learned of the logs' origin, as a former Junior Woodchuck himself, he immediately stopped the machines, to avoid destroying the historical remains.
The story ends with the boys being promoted to full members of the Junior Woodchucks and Donald being awarded an honorary medal. | <urn:uuid:0ef8cc2b-82a9-4641-8adc-836daa4d3db5> | CC-MAIN-2013-20 | http://www.reference.com/browse/j.g.r.+de+francia | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.990291 | 351 | 2.96875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
June 14, 2007 At one time Cyclone Gonu was a powerful Category 5 storm packing sustained winds of 160 mph (139 knots), according to the Joint Typhoon Warning Center, making it the most powerful cyclone to threaten the Arabian Peninsula since record keeping began in 1945. Fortunately the storm weakened significantly by the time it brushed the far eastern tip of Oman, but it still threatened petroleum shipping lanes in the northern part of the Arabian Sea that are unprepared for such an intense cyclone.
While tropical cyclones occasionally form in the Arabian Sea, they rarely exceed tropical storm intensity. In 2006, Tropical Storm Mukda was the only tropical system to form in the region and it remained well out to sea before dissipating.
Gonu became a tropical storm on the morning (local time) of Sat., Jun. 2, in the east-central Arabian Sea. After some initial fluctuations in direction, it settled on a northwesterly track and began to intensify. Gonu strengthened from tropical storm intensity on the morning of June 3 to Category 2 that night. By daybreak on June 4, Gonu had intensified to Category 4 with winds estimated at 132 mph (115 knots).
NASA's Tropical Rainfall Measuring Mission (TRMM) satellite captured an image of Gonu as it was moving northwest through the central Arabian Sea. Taken on Mon., Jun. 4 at 0323 UTC (11:23 p.m. EDT on Sun., Jun. 3), it shows the horizontal distribution of rain intensity looking down on the storm. TRMM reveals the tell-tale signs of a potent storm. Not only does Gonu have a complete, well-formed symmetrical eye surrounded by an intense eyewall (innermost red ring), this inner eyewall is surrounded by a concentric outer eyewall (outermost red and green ring). This double eyewall structure only occurs in very intense storms. Eventually the outer eyewall will contract and replace the inner eyewall.
Another image provides a unique 3-D perspective of Gonu using data collected from the TRMM Precipitation Radar from the same overpass as the previous image. Higher radar echo tops are indicated in red. The areas of intense rain in the previous image are associated with deep convective towers both in the innermost eyewall and in parts of outer eyewall. The inner ring has the higher tops at this time. Deep convective towers near the storm's center can be a precursor to future strengthening as they indicate that large amounts of heat are being released into the storm's core. At the time of these images, Gonu was a Category 4 cyclone. Several hours later, Gonu reached Category 5 intensity.
The system finally began to weaken during the night of June 4 and was downgraded to a Category 3 storm at 1200 UTC (8:00 a.m. EDT) on June 5.
NASA's Quikscat spacecraft also observed Gonu. Its SeaWinds scatterometer, a specialized microwave radar, measured near-surface wind speed and direction within the storm.
Gonu continued to weaken as it neared the coast of Oman. The center remained just offshore Oman's northeast coast as a Category 1 storm before turning northward towards Iran, where it is expected to make landfall as a tropical storm.
TRMM is a joint mission between NASA and the Japanese space agency JAXA. QuikScat is managed by NASA's Jet Propulsion Laboratory. Images produced by Hal Pierce (SSAI / NASA GSFC). Caption by Steve Lang (SSAI / NASA GSFC), Mike Bettwy (RSIS / NASA GSFC), and NASA/JPL/QuikScat Science Team.
Antimatter costs $62.5 trillion per gram. In the future it is theoretically possible to use antimatter as fuel for spaceships to other planets. The problem is that its production requires extremely expensive technology; to create just 1 gram, the whole world would have to work for a year (global GDP is about 65 trillion dollars). In physics, antimatter is matter composed of antiparticles, the counterparts of the particles that constitute ordinary matter...
Californium costs $27 million per gram. Why is it needed? Because the element is so expensive to produce, the isotopes of californium have hardly any practical application. In the West it has been created only once since its discovery in 1950. Californium is a chemical element with the symbol Cf and atomic number 98. It is a synthetic, radioactive transuranic element with very few practical uses; it was discovered by bombarding curium with helium ions.
Diamonds cost about 55,000 dollars per gram. A colorless stone can cost more than 11 thousand dollars per carat, but colored diamonds are worth more. Why do we need diamonds? Natural diamonds are most often used in the jewelry industry, and the extreme hardness of diamonds also finds applications in industry...
These are two of my favorite pictures from my research on children’s books about Einstein and Curie. (You can click on them to see the bigger images). They are I think, the most visual example of my thesis’s argument and I think they are also illustrative of exactly what we need to pay attention to in Children’s biography.
Stories about famous figures’ biographies are the most directly applicable aspect of children’s literature. This is the part of the story that with which children can most readily identify. Tragically, this part of the story of these lives is generally the thinest part of the historical record. Because children’s literature is so rarely reviewed by historians, this is not an issue for many children’s authors. They can simply invent the figures childhood.
The first picture shows the young Albert Einstein terrorizing his babysitter. Albert is described as cruel and angry; he throws tantrums, and the text tells young readers that "His temper so terrifies a tutor hired to help young Albert prepare for school that she runs away, never to be seen again." In the picture, Albert and his anger are foregrounded as the tutor runs away in terror, apparently never to be seen again. You will be hard pressed to find historical precedent for this story: by all accounts Albert was a much more timid boy, but it is easy to see here how masculinity and power are projected onto this child.
The second picture is of Curie crying in the arms of her teacher. Before I get into the details, consider the differences between these two images. Notice the relative size of Curie and her teacher. Einstein is bigger than his tutor, while the small (and surprisingly Aryan) Curie is presented as significantly smaller. In the second picture, the teacher does not come down to her level and instead maintains her size and visual power. This story appears in almost every single children’s book about Curie. The young Manya Skłodowska was the youngest and smartest student in her class. Her school, which was run by Polish teachers, was under constant threat from the Russians who occupied Poland. The school was barred from teaching children in Polish and teaching Polish history. Instead, schools were required to have children memorize Russian history and learn Russian language. The school that Manya attended disobeyed these rules. When Russian school inspectors came to check on the school a look-out in the hallway would warn the class and the class would hide their Polish books. Once the inspector came in, the teacher would call on Manya to answer his questions. In the story, Manya succeeds by answering all of the Russian inspector’s questions in Russian to his liking. After he leaves she cries.
In this story it becomes apparent that while Manya is very smart and strong she still has a kind of frailty. Readers are told that Manya's knowledge gives her a kind of importance. She is called on in class, and because of her impressive memory she saves the class from the inspector. While the stories of Einstein were exaggerated to stress his clashes with authority, the story of the Russian inspector is usually treated in a way that is much more consistent with the authoritative texts. However, Eva Curie tells several other stories about Manya that only make it into one of the children's books, and thus the picture of the young Manya is shaped more by exclusion than by exaggeration.
The following anecdotes come from Eleanor Doorly’s 1939 book, The Radium Women: Madame Curie. Doorly’s book went through many printings and was highly acclaimed, being recommended in three consecutive editions of the Children’s Catalogue. Doorly states quite clearly in the opening of her book that it is a children’s adaptation of Eva Curie’s biography of her mother. This book stays very close to Eva’s biography and offers insight into a different trajectory that could have been developed in accounts of Curie. These selections come from the second chapter of her book, appropriately entitled “Rebels.”
In the Russian-run high school Manya and her friend Kazia “took delight in inventing witticisms against their Russian professors, their German master, and especially against Miss Mayer who detested Manya only a little less than Manya detested her.” Their teacher Miss Mayer stated, “It’s no more use speaking to that Sklodovska girl,” she said, “than throwing green peas at a wall!” On one occasion Eva tells us of a time in which Manya was openly disrespectful, and witty. “I won’t have you look at me like that!’ Miss Mayer would shout. ‘You have no right to look down on me!’” Manya responded “‘I can’t help it,’ said Manya truthfully, for she was a head taller that Miss Mayer. No doubt she was glad that words sometimes have two meanings.”
In the second series of stories, the young Manya is openly disrespectful of her teachers. While the story of her crying in front of the Russian inspector is interesting it should be seen as just one of several stories about Manya’s school experience. Importantly, it is the only story that puts her in a position of weakness against the authority of both the teacher and the inspector. Other stories show the potential of portraying a Manya who is similar to the exaggerated Einstein, openly disrespectful of a rather hostile teacher.
Brown, Don. Odd Boy Out: Young Albert Einstein. Houghton Mifflin, 2004.
Doorly, Eleanor. The Radium Woman, a Life of Marie Curie; and Woodcuts. New York: Roy Publishers, 1939. | <urn:uuid:70673dd1-ffc1-485d-b016-87cf7655a1ed> | CC-MAIN-2013-20 | http://www.trevorowens.org/2007/09/curie-and-einstein-go-to-school/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.976395 | 1,214 | 3.296875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
“If God is all-powerful and all-good, it would have created a universe in the same way it created heaven: with free will for all, no suffering and no evil. But evil and suffering exist. Therefore God does not exist, is not all-powerful or is not benevolent (good). A theodicy is an attempt to explain why a good god would have created evil and suffering. The most popular defence is that it is so Humans could have free will. However the entire universe and the natural world is filled with suffering, violence and destruction so any Humanity-centric explanation does not seem to work.”
The most common theodicy is the free will theodicy [1]. This is that God created evil so that we could then choose between good and evil, and make moral choices. If all choices result in good, there would be no moral choices. If love is acceptable, it must be chosen over hate, and therefore evil and suffering result when we make morally poor choices. However this classical theodicy does not hold up, for many reasons. Although many believers adhere to the free will theodicy, it is not preached in the holy books of the main monotheistic religions. See: Monotheism and Free Will: The Christian Bible and the Quran both teach strict determinism - that God decides all of our fates, and our own choices and decisions cannot change God's plan for everyone. Prominent historical Christian theologians who have rejected the free will theodicy include St Augustine, Martin Luther and John Calvin [2]. The arguments on this page are thousands of years old, but many continue to believe in the simplicity of the free will theodicy, so it does no harm to state the arguments against it again.
The fact that there is both free will and no evil in heaven tells us that evil and suffering are not a requirement of free will. If there is no reason for suffering in heaven, then, God should instantly put everyone in heaven, where we would all continue to have free will, but also not suffer.
“Earthquakes, volcanoes, floods and disease affect human beings indiscriminately and result from geological factors not from free will. Unborn babies lay amongst the victims. Animals and humans alike suffer as a result of natural evil. These disasters have been prevalent for all of Earth's history so have nothing to do with human agency. Not only that, but the entire universe is steeped in large-scale destruction and violence as part of the very design of the physical world. None of this indicates that there is a 'good' design behind it all, and it especially indicates that there is no good god.”
God not only created the possibility of suffering, pain and sin, but it appears that it made us with a strong inclination towards it. Life being "unfair" is a symptom of pain or suffering, of inadequacy or feelings or even sinful emotions such as greed. God created these emotions, we do not choose to accept them, they are inherent in our nature. God could have created us so that we do not feel these emotions, that they simply don't exist. It would eliminate a lot of evil, and would not take away our free will - we'd just have a different range of emotions that we didn't choose to have. The fact that God has created our nature and instincts to be geared towards sin and imperfection means that it wants us to choose evil over good. It has created evil, and created us so that we will mostly "choose" it.
God sometimes creates people with (for example) genetic diseases that predispose them to paranoid schizophrenia, violent crime, sexual abuse and amorality and other inherited personality defects. Other people are born with a predisposition to calmness, subordination and pacifism. Both types of person have free will. God could easily create the majority of humanity so that our personality is much better and kinder in general and not just in outstanding individuals. It seems that the actual quantity and evil and suffering could be much less, and free will would still exist.
Suffering is not required for free will. When someone commits a crime or otherwise causes suffering, why does the victim suffer? The victim has not chosen evil, they are merely the unfortunate victims of someone else's choice. If justice or morality exist, and come from God, then only those acts that are evil should be punished. When someone is a victim, God itself should put the crime right, and avoid the innocent suffering. The person responsible for the evil may still suffer punishment or retribution, but why does the victim need to suffer? If suffering and evil are the result of free will, then, why is it that much suffering is caused by outside agencies?
This transmission of the effects of one person's bad choices to another's experience is unnecessary for free will. In a society where there are no victims, punishment would be unnecessary. If our nature was geared towards good, there would be no need for punishment to be used as a terror tactic to reduce crime. That punishment is necessary means that God cannot be all-powerful. It is not an effect of free will that we should suffer the consequences of each other's bad choices, it is an effect of a universe operating under a different, amoral sense of justice, and not under the care of an all-powerful, just God.
Free will is the ability to make choices. This means, we must have options. What these options are is irrelevant. A saint, Jesus, Muhammad, etc, had free will. These people also did not (perhaps) ever commit a sin. Nevertheless it is ludicrous to say that because a person does not choose evil that they have no free will. In other words, it is possible for a person never to accept evil, and still have free will. This means that we could have a nature that never wills transgression, and we could still have free will. There are many millions of choices and paths you can take in life, there is no requirement for "cause evil" to be an effect of them. Free will still exists without it.
If God was good, we would all exist (as those in heaven do) in a situation where we all continually have the free will to choose between different good courses of action. Evil simply isn't required for free will.
When a person chooses evil, God could rectify the real life effect of it, and simply let the perpetrator feel its effect. There is no need for evil to manifest outside of a person's own choices. Evil, in short, could be chosen, but not realized. There is no reason for evil to cause suffering. If we had a choice between doing something good or bad, if we chose bad, why does it cause suffering? Why must it? It seems we could chose bad, and for it to have no effect other than to prevent us feeling that we did good. That is enough for free will. A forgiving God would note that someone just chose badly, and rectify their mistake and forgive them. No suffering would result. If it means that those who choose badly fail to get to heaven then so be it, but, there is no need for actual suffering to occur. "Evil begets evil" is not the fruit of a good god; bad choices by us needn't result in the punishment of pain or suffering for anyone. The lack of separation of evil from its effects shows that God is not interested in preventing evil.
Some monotheistic religions such as Christianity and Islam preach that Adam and Eve were the beginning of mankind rather than our evolutionary predecessors (which was a whole species rather than two individuals). In these religions, Adam and Eve are said to have been created in paradise, but banished for committing the original sin.
The Original Sin is the reason Christians say that Human Beings experience suffering - as a result of Adam and Eve's actions. Humankind was created in, and was supposed to exist in, a state of immortal paradise. But as a result of Adam and Eve's original sin, we have all been punished with our earthly existence, complete with suffering, pain and death (Romans 5:12, 1 Corinthians 15:21). Genesis 3:14-19 describes some of the punishments in more detail. The reason there is any death at all is because Adam and Eve disobeyed God.
The story in the Qur'an, Sura 7:24-27, tells of when Adam and Eve are punished and banished from paradise, and must thereafter live on the Earth complete with its suffering, pain and death. This however, makes all their children and descendents suffer from the same punishment. This is despite the Qur'anic statement that "none shall bear the burden of another's sin" (35:18 and 53:38).
Before Augustine coined the phrase original sin it was known simply as ancestral sin. It is a feature of Christianity that has been much criticized. Famed antagonist Richard Dawkins asks, "What kind of ethical philosophy is it that condemns every child, even before it is born, to inherit the sin of a remote ancestor?" [4]
Is it really moral to punish someone for someone else's actions? All good parents teach their children that this is unfair and unjust. Hence, the story of Adam and Eve teaches us that God is unjust or at least not always just, and therefore is not perfectly benevolent.
The story teaches us that it is divine will that the relatives of sinners can sometimes be punished in place of the guilty.
The story teaches us that free will is not the cause of the suffering of mankind. We are all born in a world of pain and death because someone-else committed a crime, which was nothing to do with our own free will to choose wrongly.
The free will justification for evil does not work. Free will does not require the existence of evil or suffering - Heaven is a place where there is free will, and no suffering. There is a lot of suffering and evil that is not the result of free will, such as from natural disasters, so free will could not actually account for all suffering, only some of it. The question of why God creates additional suffering would still exist. Also, the free will of one person can cause suffering for another innocent person; God should not allow the moral choices of one being to affect other beings, as this goes against accountability, which is the whole point of free will. In short, it seems that the existence of pain and suffering contradicts the existence of a good god.
“To the present day, all theodicies have failed to explain why a good god would create evil, meaning that the existence of evil is simply incompatible with the existence of a good god. After thousands of years of life-consuming passion, weary theologians have not formulated a new answer to the problem of evil for a long time. The violence of the natural world, disease, the major catastrophes and chaotic destruction seen across the universe and the unsuitability of the vastness of reality for life all indicate that god is not concerned with life, and might actually even be evil. Failure to answer the problem of evil sheds continual doubt on the very foundations of theistic religions.”
The Koran. Translated by N. J. Dawood. Penguin Classics edition published by Penguin Group Ltd, London, UK. First published 1956; quotes taken from the 1999 edition.
The Bible (NIV). The NIV is the best translation for accuracy whilst maintaining readability. Multiple authors; a compendium of previously published books. I prefer to take quotes from the NIV, but where I quote the Bible en masse I must quote from the KJV because it is not copyrighted, whilst the NIV is.
Eliade, Mircea (editor-in-chief). The Encyclopedia of Religion (1987). 16 volumes. Macmillan Publishing Company, New York, USA.
Oldridge, Darren. The Devil in Early Modern England (2000). Sutton Publishing Limited, England.
(1881 - 1973)
Regarding the canon of art history, no other artist has exerted such influence as Pablo Picasso.
Frequently dubbed the "dean of modernism," the Spanish artist was revolutionary in the way he challenged the conventions of painting. His stylistic pluralism, legendary reconfiguration of pictorial space and inexhaustible creative force have made Picasso one of the most revered artists of the 20th century.
Influenced by symbolism and Toulouse-Lautrec, Picasso developed his own independent style in Paris during his renowned Blue Period (1900-1904): motifs from everyday life... | <urn:uuid:e347ca03-870c-40d2-9d5d-c97024672562> | CC-MAIN-2013-20 | http://www.williambennettgallery.com/artists/picasso/pieces/PICA1191.php | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.954427 | 130 | 3.4375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
A new world record wind gust: 253 mph in Australia's Tropical Cyclone Olivia
The 6,288-foot peak of New Hampshire's Mount Washington is a forbidding landscape of wind-swept barren rock, home to some of planet Earth's fiercest winds. As a 5-year old boy, I remember being blown over by a terrific gust of wind on the summit, and rolling out of control towards a dangerous drop-off before a fortuitously-placed rock saved me. As a kid perusing the Guinness Book of World Records, I was always fascinated by three iconic world weather records: the incredible 136°F (57.8°C) at El Azizia, Libya in 1922, the -128.5°F (-89.2°C) at the "Pole of Cold" in Vostok, Antarctica in 1983, and the amazing 231 mph wind gust (103.3 m/s) recorded in 1934 on the summit of Mount Washington, New Hampshire. Well, the legendary winds of Mount Washington have to take second place now, next to the tropical waters of northwest Australia. The World Meteorological Organization (WMO) has announced that the new world wind speed record at the surface is a 253 mph (113.2 m/s) wind gust measured on Barrow Island, Australia. The gust occurred on April 10, 1996, during passage of the eyewall of Category 4 Tropical Cyclone Olivia.
Figure 1. Instruments coated with rime ice on the summit of Mt. Washington, New Hampshire. Image credit: Mike Theiss.
Tropical Cyclone Olivia
Tropical Cyclone Olivia was a Category 4 storm on the U.S. Saffir-Simpson scale, and generated sustained winds of 145 mph (1-minute average) as it crossed over Barrow Island off the northwest coast of Australia on April 10, 1996. Olivia had a central pressure of 927 mb and an eye 45 miles in diameter at the time, and generated waves 21 meters (69 feet) high offshore. According to Black et al. (1999), the eyewall likely had a tornado-scale mesovortex embedded in it that caused the extreme wind gust of 253 mph. The gust was measured at the standard measuring height of 10 meters above ground, on ground at an elevation of 64 meters (210 feet). A similar mesovortex was encountered by a Hurricane Hunter aircraft in Hurricane Hugo of 1989, and a mesovortex was also believed to be responsible for the 239 mph wind gust measured at 1400 meters by a dropsonde in Hurricane Isabel in 2003. For reference, 200 mph is the threshold for the strongest category of tornado, the EF-5, and any gusts of this strength are capable of causing catastrophic damage.
Figure 2. Visible satellite image of Tropical Cyclone Olivia a few hours before it crossed Barrow Island, Australia, setting a new world-record wind gust of 253 mph. Image credit: Japan Meteorological Agency.
Figure 3. Wind trace taken at Barrow Island, Australia during Tropical Cyclone Olivia. Image credit: Buchan, S.J., P.G. Black, and R.L. Cohen, 1999, "The Impact of Tropical Cyclone Olivia on Australia's Northwest Shelf", paper presented at the 1999 Offshore Technology Conference in Houston, Texas, 3-6 May, 1999.
Why did it take so long for the new record to be announced?
The instrument used to take the world record wind gust was funded by a private company, Chevron, and Chevron's data was not made available to forecasters at Australia's Bureau of Meteorology (BOM) during the storm. After the storm, the tropical cyclone experts at BOM were made aware of the data, but it was viewed as suspect, since the gusts were so extreme and the data was taken with equipment of unknown accuracy. Hence, the observations were not included in the post-storm report. Steve Buchan from RPS MetOcean believed in the accuracy of the observations, and coauthored a paper on the record gust, presented at the 1999 Offshore Technology Conference in Houston (Buchan et al., 1999). The data lay dormant until 2009, when Joe Courtney of the Australian Bureau of Meteorology was made aware of it. Courtney wrote up a report, coauthored with Steve Buchan, and presented this to the WMO extremes committee for ratification. The report has not been made public yet, and is awaiting approval by Chevron. The verified data will be released next month at a World Meteorological Organization meeting in Turkey, when the new world wind record will become official.
New Hampshire residents are not happy
Residents of New Hampshire are understandably not too happy about losing their cherished claim to fame. The current home page of the Mount Washington Observatory reads, "For once, the big news on Mount Washington isn't our extreme weather. Sadly, it's about how our extreme weather--our world record wind speed, to be exact--was outdone by that of a warm, tropical island".
Comparison with other wind records
Top wind in an Atlantic hurricane: 239 mph (107 m/s) at an altitude of 1400 meters, measured by dropsonde in Hurricane Isabel (2003).
Top surface wind in an Atlantic hurricane: 211 mph (94.4 m/s), Hurricane Gustav, Paso Real de San Diego meteorological station in the western Cuban province of Pinar del Rio, Cuba, on the afternoon of August 30, 2008.
Top wind in a tornado: 302 mph (135 m/s), measured via Doppler radar at an altitude of 100 meters (330 feet), in the Bridge Creek, Oklahoma tornado of May 3, 1999.
Top surface wind not associated with a tropical cyclone or tornado: 231 mph (103.3 m/s), April 12, 1934 on the summit of Mount Washington, New Hampshire.
Top wind in a typhoon: 191 mph (85.4 m/s) on Taiwanese Island of Lanya, Super Typhoon Ryan, Sep 22, 1995; also on island of Miyakojima, Super Typhoon Cora, Sep 5, 1966.
Top surface wind not measured on a mountain or in a tropical cyclone: 207 mph (92.5 m/s) measured in Greenland at Thule Air Force Base on March 6, 1972.
Top wind measured in a U.S. hurricane: 186 mph (83.1 m/s) measured at Blue Hill Observatory, Massachusetts, during the 1938 New England Hurricane.
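For readers who want to verify the unit conversions quoted in the list above, the arithmetic is simple — 1 mph is exactly 0.44704 m/s. Small mismatches against the quoted figures reflect rounding in the mph values themselves:

```python
# 1 mph = 0.44704 m/s exactly; the loop reproduces the parenthetical
# m/s figures above (to within rounding of the quoted mph values).
MPH_TO_MS = 0.44704

records_mph = {
    "Barrow Island, TC Olivia (1996)": 253,
    "Mount Washington, NH (1934)": 231,
    "Bridge Creek, OK tornado, Doppler at 100 m (1999)": 302,
    "Hurricane Isabel dropsonde at 1400 m (2003)": 239,
}

for name, mph in records_mph.items():
    print(f"{name}: {mph} mph = {mph * MPH_TO_MS:.1f} m/s")
# e.g. 253 mph -> 113.1 m/s vs. the quoted 113.2 m/s: the record was
# presumably converted to mph from a metric measurement and rounded.
```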
Buchan, S.J., P.G. Black, and R.L. Cohen, 1999, "The Impact of Tropical Cyclone Olivia on Australia's Northwest Shelf", paper presented at the 1999 Offshore Technology Conference in Houston, Texas, 3-6 May, 1999.
Black, P.G., Buchan, S.J., and R.L. Cohen, 1999, "The Tropical Cyclone Eyewall Mesovortex: A Physical Mechanism Explaining Extreme Peak Gust Occurrence in TC Olivia, 4 April 1996 on Barrow Island, Australia", paper presented at the 1999 Offshore Technology Conference in Houston, Texas, 3-6 May, 1999. | <urn:uuid:3cf8391c-7628-4b73-b23d-af8d16292401> | CC-MAIN-2013-20 | http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=1420&page=7 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.924488 | 1,482 | 2.984375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
July 18, 2012
Since the Industrial Revolution, ocean acidity has risen by 30 percent as a direct result of fossil-fuel burning and deforestation. And within the last 50 years, human industry has caused the world’s oceans to experience a sharp increase in acidity that rivals levels seen when ancient carbon cycles triggered mass extinctions, which took out more than 90 percent of the oceans’ species and more than 75 percent of terrestrial species.
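Because pH is a logarithmic scale, that 30 percent figure translates into a deceptively small-looking pH shift. A quick back-of-the-envelope check, assuming "acidity" refers to hydrogen-ion concentration (the usual convention):

```python
# pH = -log10([H+]), so a 30% rise in hydrogen-ion concentration is a
# drop of log10(1.3) ~ 0.11 pH units -- consistent with the commonly
# cited fall in mean surface-ocean pH from about 8.2 to about 8.1.
import math

delta_pH = -math.log10(1.30)                  # 30% more hydrogen ions
print(f"pH change: {delta_pH:+.2f}")          # about -0.11
print(f"~8.2 pre-industrial -> ~{8.2 + delta_pH:.2f} today")
```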
Rising ocean acidity is now considered to be just as much of a formidable threat to the health of Earth’s environment as the atmospheric climate changes brought on by pumping out greenhouse gases. Scientists are now trying to understand what that means for the future survival of marine and terrestrial organisms.
In June, ScienceNOW reported that out of the 35 billion metric tons of carbon dioxide released annually through fossil fuel use, one-third of those emissions diffuse into the surface layer of the ocean. The effects those emissions will have on the biosphere is sobering, as rising ocean acidity will completely upset the balance of marine life in the world’s oceans and will subsequently affect humans and animals who benefit from the oceans’ food resources.
The damage to marine life is due in large part to the fact that higher acidity dissolves naturally-occurring calcium carbonate that many marine species–including plankton, sea urchins, shellfish and coral–use to construct their shells and external skeletons. Studies conducted off Arctic regions have shown that the combination of melting sea ice, atmospheric carbon dioxide and subsequently hotter, CO2-saturated surface waters has led to the undersaturation of calcium carbonate in ocean waters. The reduction in the amount of calcium carbonate in the ocean spells out disaster for the organisms that rely on those nutrients to build their protective shells and body structures.
The link between ocean acidity and calcium carbonate is a directly inverse relationship, which allows scientists to use the oceans’ calcium carbonate saturation levels to measure just how acidic the waters are. In a study by the University of Hawaii at Manoa published earlier this year, researchers calculated that the level of calcium carbonate saturation in the world’s oceans has fallen faster in the last 200 years than has been seen in the last 21,000 years–signaling an extraordinary rise in ocean acidity to levels higher than would ever occur naturally.
The authors of the study went on to say that currently only 50 percent of the world's ocean waters are saturated with enough calcium carbonate to support coral reef growth and maintenance, but by 2100 that proportion is expected to drop to a mere five percent, putting most of the world's beautiful and diverse coral reef habitats in danger.
In the face of so much mounting and discouraging evidence that the oceans are on a trajectory toward irreparable marine life damage, a new study offers hope that certain species may be able to adapt quick enough to keep pace with the changing make-up of Earth’s waters.
In a study published last week in the journal Nature Climate Change, researchers from the ARC Center of Excellence for Coral Reef Studies found that baby clownfish (Amphiprion melanopus) are able to cope with increased acidity if their parents also lived in higher acidic water, a remarkable finding after a study conducted last year on another clownfish species (Amphiprion percula) suggested acidic waters reduced the fish’s sense of smell, making it likely for the fish to mistakenly swim toward predators.
But the new study will require further research to determine whether or not the adaptive abilities of the clownfish are also present in more environmentally-sensitive marine species.
While the news that at least some baby fish may be able to adapt to changes provides optimism, there is still much to learn about the process. It is unclear through what mechanism clownfish are able to pass along this trait to their offspring so quickly, evolutionarily speaking. Organisms capable of generation-to-generation adaptations could have an advantage in the coming decades, as anthropogenic emissions push Earth to non-natural extremes and place new stresses on the biosphere.
Proceedings of the 2005 Puget Sound Georgia Basin Research Conference
KPLU (NPR): Marine Conference
SEATTLE, WA (2005-03-30) The Puget Sound is in trouble and hundreds of scientists are gathering this week in Seattle to discuss why and what can be done to fix the problem. KPLU environment reporter Steve Krueger has this preview of what lies ahead.
By Peggy Andersen
SEATTLE - During the great annual gray whale migrations between feeding grounds in the north Pacific and breeding spots off Mexico, about 200 individuals apparently take up "seasonal residence" in the Pacific Northwest, scientists say.
Six gray whales, for example, have been spotted around Whidbey Island nearly every spring since 1991, says biologist John Calambokidis of Olympia-based Cascadia Research. Other small groups of gray whales return annually to preferred spots along the coasts of Oregon and British Columbia.
"In recent years, we've done a much better job identifying these seasonal resident animals," Calambokidis said. In some cases, "we have evidence they don't go to Alaska. They migrate south to the breeding grounds but seem to make this their primary feeding area."
Also, he said, unusually high numbers of beached grays reported in the spring of 1999 and 2000 apparently did not mark the start of a population decline for gray whales.
"The mortality since then has been very low," he said.
Calambokidis presented recent research about grays as the Puget Sound Georgia Basin Research Conference got under way Tuesday at the downtown Washington State Convention Center. The three-day session, featuring scores of scientists on a range of topics, is sponsored by the state's Puget Sound Action Team and Environment Canada.
In a brief luncheon address, Gov. Christine Gregoire said she's making "real science" a priority in making decisions about the environment. There need not be a conflict between business and the environment, she said - businesses are drawn to the region for its quality of life.
Historically, Calambokidis said, gray whales that ventured inland were likely more vulnerable to shore-based hunters than those that swam farther offshore, churning all the way north to the Bering and Beaufort seas of Alaska and the Chukchi Sea off Siberia.
A gray whale calf emerges to be touched by tourists in Ojo de Liebre lagoon in Baja California Sur, Mexico, in March 1999 during the great annual gray whale migration between feeding grounds in the North Pacific and breeding spots off Mexico.
(Associated Press file photo)
The ones that stop in the Northwest tend to not have as many young as the larger population, he said. Determining the gender of the seasonal residents is a work in progress, but females with calves tend to start the migration late and inland stops "may not be advantageous" for them, Calambokidis said.
Some of the returnees move on in early summer and may in fact head north, he said. Some only drop in once or twice. Grays seen farther inland, in central and south Puget Sound, tend to be stragglers foraging for food - sometimes desperately - that rejoin the migration if they can.
There was a surge in reports of dead, beached gray whales five years ago, when population estimates peaked at about 27,000 and the Makah Indian Tribe moved to reaffirm its whaling rights under an 1855 treaty.
While most whale deaths occur in the ocean, the 50 carcasses found on Washington state shores alone in 1999-2000 may have marked a converging of two extremes, Calambokidis said: The whale population reaching its maximum carrying capacity and a natural downturn in the cyclical availability of food and prey.
Many researchers believe both the high population number and the big die-off were "blips," Calambokidis said.
"That's why there was a dramatic event, instead of a gradual tapering off." Records from around the Northwest indicate that the "major mortality event" was a very isolated incident, he said.
On average, Washington state has four gray whale beachings a year, based on reports from the regional stranding network that has been in place since the 1970s, Calambokidis said.
"We haven't really changed our response to strandings," he said. A beached whale carcass as long as 40 feet is hard to miss in a populated area, while dead whales on remote stretches of beach may go unnoticed.
Gray whales, the first creature listed for protection under the Endangered Species Act, were decimated by commercial whaling that peaked in the late 19th century.
Recent gray whale counts conducted along the migration route suggest the population may have settled at about 17,000 animals - roughly the pre-whaling total, Calambokidis said.
The grays' removal from the Endangered Species List in 1994 prompted the Makah to reclaim whaling rights after 70 years. The issue has been bogged down in federal court appeals since the tribe killed a single whale in May 1999.
Antiwhaling activists characterized "resident" gray whales as a separate population that warranted special protection. Some definitions of Makah whaling grounds limited the tribe to offshore whales, while others allowed whaling some distance into the Strait of Juan de Fuca, the waterway that divides the United States and Canada before making a sharp right into Puget Sound.
"Now that we have accurate evidence of their abundance ... it would allow someone to make estimates of what level of kills could come from that group," Calambokidis said. "We have a much more solid basis of information for either side in that debate."
On the Net:
By SUSAN GORDON
Gov. Christine Gregoire promised Monday to take action to protect and restore Puget Sound.
She told a gathering of 600 environmental scientists and others at a U.S.-Canadian research conference that the Sound's health is both central to Washington's future prosperity and a legacy important to future generations.
"Only if we redouble our efforts will we succeed," she said.
Gregoire wants to boost spending on what she described as scientifically based solutions to problems such as pollution and environmental degradation.
She proposes to spend $31.5 million over the next two years to clean up mercury contamination, control the spread of toxic flame retardants, restore polluted shellfish beds and remove spartina, an invasive beach grass, among other things.
Her proposal includes $7.5 million for continuing scientific monitoring.
"We are going to invest and we are going to deliver," she said.
Gregoire has already proposed spending $5 million on the Hood Canal, where pollution has been blamed for an oxygen imbalance that has killed fish.
Gregoire's pledge to save the Sound came during luncheon speech at the Puget Sound Georgia Basin Research Conference, a three-day event at the Washington State Trade & Convention Center in Seattle.
The annual conference brings together U.S. and Canadian scientists who present new scientific findings on some of the most pressing environmental problems facing the region.
Kathy Fletcher, executive director of the environmental group People for Puget Sound, was in the audience.
"It's music to my ears," she said of Gregoire's promise of action. "She's been around this issue long enough to know we need to do a lot more than studies and research."
The governor described the state's continuing population boom as a threat.
"We have met the enemy and the enemy is us," Gregoire said. "Our robust population leads directly to the health problems of the Sound,"
Over the past decade, Washington's population has grown by about 1 million, a 20 percent increase that means more sewage, more road runoff and more pressure on sensitive resources, she said.
Perhaps anticipating objections from the business community, Gregoire underscored the value of Washington's quality of life as a lure to enterprise.
She praised the work of scientists who have focused on both problems and solutions.
"Real science has got to be the key to our decisions with respect to the environment," she said. "Every time we make decisions based on science, the environment is always the winner."
Also Monday, she announced the reappointment of Brad Ack as director of the Puget Sound Action Team, which sets the state's environmental protection priorities for Puget Sound.
During her speech, she endorsed the team's seven-point plan for 2005-2007, which was released last December.
Gregoire told the Seattle audience her first brush with international environmental controversy came in 1988 when she was in charge of the state Department of Ecology. The barge "Nestucca" spilled 230,000 gallons of fuel oil that contaminated beaches from Grays Harbor County to Vancouver Island.
The oil spill roused the state's attention to the damage associated with the risks of oil transport. It also affected Gregoire's family, she said.
The governor recalled bringing her daughter Michelle, now 20, along when she visited a bird rescue operation.
It was "heart-wrenching," Gregoire said.
But the grim scene also influenced Michelle, who is now a college student majoring in environmental science.
What the plan would do
To view the strategy endorsed by Gov. Christine Gregoire to restore and conserve Puget Sound, go to www.psat.wa.gov/Publications/priorities_05/ Priorities_05_review.htm.
Gregoire made a commitment Monday to fund a two-year, seven-point action plan developed last year by the Puget Sound Action Team.
The team was created in 1996 to set priorities for Puget Sound environmental protection.
Susan Gordon: 253-597-8756
By Christopher Dunagan, Sun Staff
SEATTLE-- With science as a guiding light, political leaders must "redouble" their efforts to reverse a dangerous decline in the Puget Sound ecosystem, Gov. Christine Gregoire said Tuesday.
Gregoire expressed concerns about the deadly low-oxygen conditions that plague Hood Canal, and she said similar "dead zones" could develop in southern Puget Sound if people don't take appropriate action."
"We can do better," the governor said, addressing the Puget Sound and Georgia Strait Research Conference. "My friends, we have no choice. We have to do a lot better. It is not too late - but only if we redouble our efforts."
More than 700 scientists, policy makers and concerned individuals attended the first day of a three-day conference addressing scientific issues in Puget Sound and Canada's Georgia Strait. Close to 200 separate research topics are on tap for discussion at the event, which takes place every two years.
Gov. Christine Gregoire says pollution will create more 'dead zones' in Puget Sound unless action is taken now.
(AP Photo/John Froschauer)
Gregoire, former director of the Washington Department of Ecology, said Washington state residents are engaged in a fight against pollution, habitat destruction and declining fish and wildlife populations. But it simply isn't enough. Over the past 20 years, the state's own studies show that for every environmental success, there are new or growing problems for Puget Sound.
"We have a million more people putting demands on that fragile ecosystem," she said, "... and we will add a million more people."
Business owners want to come to Washington because they love the quality of life here, she said. But the challenge is for everyone to work together to improve the environment and leave things better for the next generation.
Gregoire told the scientists that research is essential. Because of dedicated scientific work, "we have a grasp today of the problems and some of the solutions."
She has called on the Legislature to create a new Washington Academy of Sciences to bring together the best minds in the state to provide answers to vexing questions.
"There were bright people who preceded me," she said, "and they couldn't solve the problem. We need new thinking ... When we make our decisions based on science, the environment is always the winner."
But Gregoire does not want to wait for the scientists to answer all the questions - which is why she demanded that the "action plan" for Hood Canal include projects for reducing nitrogen, believed to be at the heart of the problem.
The research conference, held at the Washington State Convention and Trade Center, has been one of the few venues to bring together a cross-section of the scientific community studying Puget Sound. Issues range from killer whale behavior to the chemistry of sewage.
One group of researchers at Tuesday's session described an intensive effort to characterize the existing ecosystem in the Elwha River on the Olympic Peninsula. It will be important, they said, to study the changes after two dams on the river are removed in 2007.
One thing the research has revealed, said Jonathan Warrick of the U.S. Geological Survey, is that the river above the dams is starved for nutrients, essential to the entire food chain. In rivers without blockages, adult salmon carry nutrients in their bodies from the ocean to the upper watershed.
When salmon die, they feed organisms from the bottom of the food chain, as well as eagles and bears that then distribute the nutrients over a broader area.
Other sessions on Tuesday included a discussion of how climate change could alter salmon populations, a talk about gray whales and humpback whales visiting Puget Sound in recent years, and a presentation about an advanced computer model used to describe the movement of pollutants in Bremerton's Sinclair Inlet.
Reach Christopher Dunagan at (360) 792-9207 or e-mail [email protected].
Copyright 2005, kitsapsun.com. All Rights Reserved.
By Larry Pynn
The shared waters of the Strait of Georgia and Puget Sound are home to 63 marine species at risk, with over-harvesting, habitat loss, and pollution rated as the biggest threats, according to a research study being released at an international conference starting today.
The study by Joseph Gaydos and Nicholas Brown also finds that the four jurisdictions responsible for protecting marine species -- B.C., Washington state, and the Canadian and U.S. governments -- cannot reach consensus on the level of threat facing all of those 63 species.
Of the 63 species, Washington officially considered 73 per cent of them at risk, B.C. 50 per cent, the Canadian government 36 per cent, and the U.S. government 31 per cent.
As an example, B.C. lists 12 seabirds that neighbouring Washington state does not list, even though it is common for various species to fly back and forth across the international boundary.
The high number of species at risk in the region's marine waters are evidence of "ecosystem decay," the report's authors conclude, and reflect the need for the various levels of governments to work harder on conservation and to adopt an international ecosystem approach.
Gaydos and Brown are with the SeaDoc Society, a marine ecosystem health program administered through the University of California, Davis, Wildlife Health Centre, and based in Washington's San Juan Islands.
As of September 2004, the 63 species at risk consisted of 27 fish, 23 birds, nine mammals (including the grey whale, harbour porpoise, humpback whale, and killer whale), three invertebrates, and one reptile.
Within the Puget Sound-Georgia Basin marine ecosystem, the number of invertebrate species is much greater than vertebrate species, yet only three invertebrates are listed at risk -- Newcomb's littorine snail, Olympic oyster, and northern abalone -- suggesting the category is not receiving as much attention as it should.
The results of the study are being presented at the Puget Sound Georgia Basin Research Conference running today through Thursday in Seattle and co-sponsored by Environment Canada.
Commenting on the study, Tony Pitcher, a professor at the University of B.C. Fisheries Centre, said in Vancouver that governments have been slow to adopt an ecosystem approach to marine management.
And while states and provinces can have different mandates, he agreed that the international border poses a political obstacle to good management of marine species, not just between B.C. and Washington, but between B.C. and Alaska on our north coast.
Pitcher also agreed that more research is needed on invertebrate species such as crabs, squid and octopus, and the roles they play in the greater ecosystem.
He added that despite the need for more work by Canadian and American authorities to reverse a decline in the health of our marine ecosystem, local waters are still in relatively good shape compared with other coastal areas in the Pacific Rim, including China, Vietnam, and Indonesia.
RISK TO SPECIES BY JURISDICTION:
The shared waters of Puget Sound and the Strait of Georgia are home to 63 marine species that are at risk, with overharvesting, habitat loss and pollution rated as the biggest threats, according to a study being released at an international conference today.
The results show "ecosystem decay" and reflect the need for B.C., Washington state, Canada and the U.S. to work together to adopt an international, cooperative ecosystem approach. The statistics below show the differing levels of risk to some species, assigned by just two of those jurisdictions.
Source: The SeaDoc Society, The Vancouver Sun FISH, REPTILES, BIRDS AND MAMMALS ON THE AT-RISK LIST:
Also posted on:
March 24, 2005
Tacoma, WA, Mar. 24 (UPI) -- Concentrations of the banned chemical PCB are at least three times higher in Puget Sound chinook salmon than in that from other areas, a report says.
That finding, from Sandie O'Neill, a scientist with the Washington State Department of Fish and Wildlife, measured chinook salmon from Alaska, British Columbia, Oregon, coastal Washington and the Columbia River.
Her report prompted the state to begin its own research. Officials say there is no immediate cause for alarm, the Tacoma News-Tribune said Thursday.
O'Neill presented preliminary data to the state Fish & Wildlife Commission last October and plans to unveil more comprehensive research at the 2005 Puget Sound Georgia Basin Research Conference next week in Seattle.
"The food chain in Puget Sound is significantly contaminated with PCBs and flame retardants," said Jim West, another state scientist.
PCBs, or polychlorinated biphenyls, are banned industrial compounds that build up in the food chain and can cause developmental and behavioral problems in children.
SUSAN GORDON; The News Tribune
Concentrations of banned chemicals that are particularly threatening to children are at least three times higher in Puget Sound chinook salmon than in chinook from other areas.
In light of that finding by a state Department of Fish and Wildlife scientist, state Health Department officials are conducting their own research. While they say there is no cause for alarm, health officials acknowledge they might revise fish consumption warnings in a few months.
"I don't think the data is clear enough yet," said Rob Duff, the Health Department's environmental health director.
Sandie O'Neill, a state Department of Fish and Wildlife scientist, has found PCB concentrations in Puget Sound chinook are three times higher than what others have measured in chinook salmon from Alaska, British Columbia, Oregon, coastal Washington and the Columbia River.
O'Neill has studied PCBs in salmon since 1992. But comparable data from other researchers weren't available until recently, she said.
She first presented preliminary data to the state Fish & Wildlife Commission last October and plans to unveil more comprehensive research at the 2005 Puget Sound Georgia Basin Research Conference next week in Seattle.
O'Neill's results underscore the persistence of dangerous contaminants in Puget Sound.
"The food chain in Puget Sound is significantly contaminated with PCBs and flame retardants," said Jim West, another state Fish and Wildlife Department scientist.
He recently discovered both pollutants in herring, a key component of the salmon diet.
PCBs, or polychlorinated biphenyls, are banned industrial compounds found worldwide that build up in the food chain and can cause developmental and behavioral problems in children.
Testing store-bought fish
Although PCBs are found in meat and dairy products, some health experts believe humans are most at risk from eating contaminated fish.
However, because fish are nutritious and contain fatty acids that lower cholesterol, many experts are reluctant to suggest consumption limits based on PCBs.
"These contaminants are in every fish and every person on the planet," Duff said.
Current state Health Department advisories warn about contaminated fish or shellfish in eight tainted locations around Puget Sound, including Tacoma's Commencement Bay.
But that advice, which doesn't mention salmon, is complicated and might not be sufficient, Duff said.
So Health Department researchers are testing store-bought fish for PCBs, mercury and flame retardants. The sampling list includes chinook salmon, catfish, pollack, red snapper, halibut, cod and flounder, Duff said.
After that analysis, due in about three months, state health officials could revise statewide fish consumption recommendations, Duff said.
PCBs, which cause cancer, are highly toxic compounds that can be transferred from mothers to children through breast milk. Once used to cool and insulate transformers and other electrical equipment, PCBs have been banned in the United States since 1977.
Because PCBs don't break down over time, they persist in air, water and soil. The PCBs also build up in the food chain, so top predators harbor high concentrations. Because of PCBs, orca whales are some of the world's most contaminated marine mammals.
In Puget Sound chinook, O'Neill measured average PCB concentrations of 53 parts per billion. That's like a spoonful of poison in a railroad tanker car full of water, but scientists believe the toxicity of the compound makes it notable.
In Puget Sound coho, O'Neill measured average PCB concentrations of 31 parts per billion.
"These are not screamingly high levels," Duff said.
Concentrations found in Great Lakes salmon have been many times higher.
But Puget Sound chinook, also known as king salmon, are far more contaminated than other types of salmon, such as pinks, sockeye and chum, O'Neill said. That might be because young chinook spend more time in the estuaries than other young salmon, which also feed lower on the food web.
Also, O'Neill said concentrations of PCBs in Puget Sound chinook are comparable to what others have measured in farmed Atlantic salmon from Norway and Scotland.
For years, scientists have known about excessive concentrations of PCBs in bottom-dwelling Puget Sound fish, particularly those inhabiting polluted industrial areas such as Commencement Bay in Tacoma and the Seattle waterfront.
For example, state researchers have found PCBs in concentrations of 121 parts per billion in rockfish and 62 parts per billion in English sole. Both were caught in Seattle.
Harbor seals also are contaminated.
The new research suggests that efforts to confine contaminated sediments in polluted areas such as Commencement Bay might not prevent PCBs from recycling through plankton and fish, said West, O'Neill's colleague at the Fish and Wildlife Department.
"We need to better understand the dynamic between contaminants trapped in sediments and those entrained in the (salmon) food web," O'Neill said.
Bill Sullivan, environmental director for the Puyallup Tribe of Indians, said he wouldn't be surprised if contaminants leak out of disposal sites.
"Obviously, we have something very wrong in the interior Puget Sound," he said.
If state officials revamp fish consumption recommendations, Duff said special outreach efforts will be made to tribes and immigrant groups of Asians and Pacific Islanders. They often eat lots of fish and might be more vulnerable to injury than the mainstream population, he said.
Most Washington residents eat no more than two fish meals a week, and that's probably not enough to cause harm, he said.
On the net:
For state Health Department fish consumption recommendations, visit www.doh.wa.gov/ehp/oehas/EHA_fish_adv.htm.
Susan Gordon: 253-597-8756
March. 23-29, 2005
Puget Sound Georgia Basin Research Conference: Literally hundreds of scientists and scholars converge on the Washington Convention and Trade Center for this environmental confab. The Wednesday evening forum, led by a panel of researchers and policymakers, is open to the public. 800 Convention Pl., 206-694-5000. Free. 7-9 p.m. Wed., March 30.
By Warren Cornwall
A prolific and potentially toxic fire retardant is showing up in Puget Sound marine life ranging from tiny herring to massive killer whales, raising alarms among scientists who warn it could become the next big toxic threat to underwater animals.
"We've got fireproof killer whales," said Peter Ross, a research scientist with the Institute of Ocean Sciences in Canada and an expert in toxic chemicals in marine animals. "We're concerned about this."
The problem appears greatest in south and central Puget Sound - where fish, seals and whales had higher levels of chemicals called polybrominated diphenyl ethers, or PBDEs.
Since the early 1980s, levels of those chemicals in southern Puget Sound harbor seals have soared, a sign of an emerging threat to local killer whales that also feed on fish, Ross said. The whales are on the verge of being listed as a threatened species under the federal Endangered Species Act.
"I'm surprised at the rate of increase [of contamination]," said Sandie O'Neill, a research scientist with the state Department of Fish and Wildlife. "This is definitely an increasing concern, and that's what's getting everybody's attention."
Scientists are unsure how the chemicals are affecting marine life, or what threat is posed to people who eat contaminated fish. The state Department of Health hasn't established safety thresholds for food containing PBDEs.
A bromine-industry spokesman questioned whether the presence of PBDEs was cause for concern.
Production of some versions of the chemicals ended in 2004 because of health concerns. The most widespread version now is considered far less toxic, or not toxic at all, said John Kyte, executive director of the industry-backed Bromine Science and Environmental Forum.
"To simply say, 'We've found PBDEs' ... it's hard to make any meaningful judgment about whether this means anything."
But marine biologists worry the chemicals, used to fireproof everything from computers to mattresses, could interfere with neurological development or throw off an animal's hormones or immune system. PBDEs can linger in the environment for years, increasing the risk they will travel up the food chain as one animal eats another.
Toxic chemicals are considered one of the chief threats to the southern orcas. Their numbers have fallen from 99 in 1999 to 85 in 2004.
New research suggests those orcas may absorb much of the chemicals through the chinook salmon they eat. Puget Sound chinook had between three and five times higher levels of PBDEs and PCBs, a longstanding contaminant, compared with chinook from elsewhere, O'Neill said. This Puget Sound hot spot affects a number of marine creatures, according to studies by state, federal and Canadian agencies discussed yesterday at the Puget Sound Georgia Basin Research Conference, a Seattle meeting of scientists studying the waters shared by Washington and British Columbia.
The fire retardant may wind up in Puget Sound through storm-water runoff; or after floating into the air and then falling into the water, where they can be absorbed by animals scouring the sediment for food; or by plankton, O'Neill said. PBDEs also have been found in house dust and in women's breast milk.
The state Department of Ecology last year called for a ban on PBDEs, except in cases where no replacement flame retardant is available. But the ban proposal has stalled in the state Legislature this year.
Warren Cornwall: 206-464-2311 or [email protected]
Scientists find high concentrations of harmful flame retardants in Puget Sound fish and marine mammals. They say action is needed now.
SUSAN GORDON; The News Tribune
U.S. and Canadian scientists have found abnormal levels of harmful flame retardants in Puget Sound fish and marine mammals, including orca whales.
Scientists who presented their findings at the Puget Sound Georgia Basin Research Conference in Seattle on Wednesday said the results confirm the region's vulnerability to contamination from the unstable but increasingly common chemical compounds.
The findings also underscore the need for a safe substitute for the flame retardants frequently used in consumer electronics, upholstery and carpeting, they said.
The problem is polybrominated diphenyl ethers, also known as PBDEs. The chemicals cause learning and behavioral problems in laboratory rats and mice and might have a similar effect on people, health officials say.
Peter Ross, a Canadian marine mammal toxicologist, and Sandie O'Neill, a Washington fish biologist, said new research highlights the need for government action. O'Neill and Ross compared flame retardants to polychlorinated biphenyls, or PCBs, a banned industrial compound that poses similar health threats.
Similar research, first reported last week by The News Tribune, will be presented today at the conference that shows unusually high concentrations of PCBs in Puget Sound chinook salmon.
"It's a no-brainer. We banned PCBs and it's time to do something about PBDEs. If we wait to see health effects on fish, whales or people, it'll be too late," O'Neill said after her presentation. "We've got to turn off the tap now."
PBDEs break down over time, don't stick to the products in which they are used, attach to dust particles and wind up in foods such as fish and meat.
Ross, for his part, commended Washington state's effort to reduce the risks, saying action is necessary to protect the health of the region's dwindling population of orca whales, already heavily contaminated by PCBs.
Last year, then-Gov. Gary Locke ordered the state Department of Ecology to work with health experts to reduce the threat of harm from flame retardants.
Recently, state lawmakers introduced bills to ban PBDEs, but the measures have failed to move beyond legislative committees.
Earl Tower, a lobbyist for a coalition of chemical manufactures, said the two most controversial forms of the chemical - Penta and Octa - are no longer manufactured. The third, Deca-BDE, is used in the casings for computers, TVs and wiring. It is required by federal law to be used in airplanes and automobiles.
"Deca is not toxic. It's not bioaccumulative. There are no cases noted of any ill effects related to Deca," said Tower, who represents the industry-funded Bromine Science and Environmental Forum.
The proposal to ban Deca is "based on the precautionary principle that we don't know if it's a problem but it might be," Tower said, adding, "It's the most understood and most tested flame retardant."
O'Neill and Ross on Wednesday shared new evidence of abnormal levels of PBDEs in Puget Sound harbor seals, English sole, rockfish, herring, coho and chinook salmon.
O'Neill said she didn't find excessive amounts of the chemical in chum or pink salmon, which spend more time in the open ocean than in the Sound.
Ross presented results of research on harbor seals done in conjunction with Steven Jeffries, a state Fish and Wildlife Department marine mammal expert. Harbor seal pups captured on Gertrude Island, near Tacoma, also show higher levels of PBDE contamination than samples collected from other groups of seals in the north Puget Sound and British Columbia, Ross said.
Ross and O'Neill said their PBDE findings are consistent with a pattern of bioaccumulation high in the food chain previously seen in research on PCBs.
The United States banned PCBs almost 30 years ago because of the health risks.
Flame retardants are troublesome in part because they are unstable, said Denise Laflamme, a state Department of Health toxicologist who also spoke at the conference.
Flame retardants accumulate in fat, have been found in human breast milk and can be passed from mothers to their babies.
Since Locke's call for action in January 2004, Ecology Department officials have proposed a PBDE ban, but have not put it into place.
One lingering question is what would substitute for PBDEs now on the market, said Cheri Peele, an Ecology Department official working on the problem.
Flame retardant-to-human path unclear
Human health experts believe people are not exposed to the same high levels of flame retardants as have been proved to harm laboratory mice and rats, said Denise Laflamme, a state Department of Health toxicologist. But toxicologists also haven't figured out how the chemicals get into people, she said.
Polybrominated diphenyl ethers, known as PBDEs, are present in many consumer products. Because flame retardants easily bind to dust, good housekeeping can reduce exposure, Laflamme said.
While fish is the most likely dietary source of flame retardants, they also have been found in meat and dairy products, she said. And despite the presence of flame retardants in breast milk, health officials still recommend breast feeding.
Health officials are studying the presence of flame retardants and other chemicals in fish and say they might change their advisories about fish consumption in the next few months.
On the Net
Susan Gordon: 253-597-8756 | <urn:uuid:9a896490-858a-446e-9719-2c811279a311> | CC-MAIN-2013-20 | http://depts.washington.edu/uwconf/2005psgb/2005proceedings/press.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950253 | 7,061 | 2.515625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
The cerebrum, the largest part of the brain, is separated into the right and left hemispheres. The right hemisphere is in charge of the functions on the left-side of the body, as well as many cognitive functions.
A right-side stroke happens when the brain’s blood supply is interrupted in this area. Without oxygen and nutrients from blood, the brain tissue quickly dies. A stroke is a serious condition. It requires emergency care.
There are two main types of stroke:
An ischemic stroke (the more common form) is caused by a sudden decrease in blood flow to a region of the brain, which may be due to:
- A clot that forms in another part of the body (eg, heart or neck) breaking off and blocking the flow in a blood vessel supplying the brain (embolus)
- A clot that forms in an artery that supplies blood to the brain (thrombus)
- A tear in an artery supplying blood to the brain (arterial dissection)
A hemorrhagic stroke is caused by a burst blood vessel that results in bleeding in the brain.
Examples of risk factors that you can control or treat include:
Certain conditions, such as:
- High blood pressure
- High cholesterol
- High levels of the amino acid homocysteine (may result in the formation of blood clots)
- Atherosclerosis (narrowing of the arteries due to build-up of plaque)
- Atrial fibrillation (abnormal heart rhythm)
- Metabolic syndrome
- Type 2 diabetes
- Alcohol or drug abuse
- Medicines (eg, long-term use of birth control pills )
- Lifestyle factors (eg, smoking , physical inactivity, diet)
Risk factors that you cannot control include:
- History of having a stroke, heart attack , or other type of cardiovascular disease
- History of having a transient ischemic attack (TIA)—With a TIA, stroke-like symptoms often resolve within minutes (always in 24 hours). They may signal a very high risk of having a stroke in the future.
- Age: 60 or older
- Family members who have had a stroke
- Gender: males
- Race: Black, Asian, Hispanic
- Blood disorder that increases clotting
- Heart valve disease (eg, mitral stenosis )
The immediate symptoms of a right-side stroke come on suddenly and may include:
- Weakness or numbness of face, arm, or leg, especially on the left side of the body
- Loss of balance, coordination problems
- Vision problems, especially on the left-side of vision in both eyes
- Difficulty swallowing
If you or someone you know has any of these symptoms, call 911 right away. A stroke needs to be treated as soon as possible.
Longer-lasting effects of the stroke may include problems with:
- Left-sided weakness and/or sensory problems
- Speaking and swallowing
- Vision (eg, inability for the brain to take in information from the left visual field)
- Perception and spatial relations
- Attention span, comprehension, problem solving, judgment
- Interactions with other people
- Activities of daily living (eg, going to the bathroom)
- Mental health (eg, depression , frustration, impulsivity)
The doctor will make a diagnosis as quickly as possible. Tests may include:
- Exam of nervous system
- Computed tomography (CT) scan —a type of x-ray that uses a computer to make pictures of the brain
- CT angiogram—a type of CT scan which evaluates the blood vessels in the brain and/or neck
- Magnetic resonance imaging (MRI) scan —a test that uses magnetic waves to make pictures of the brain
- Magnetic resonance angiography (MRA) scan —a type of MRI scan which evaluates the blood vessels in the brain and/or neck
- Angiogram —a test that uses a catheter (tube) and x-ray machine to assess the heart and its blood supply
- Heart function tests (eg, electrocardiogram , echocardiogram )
- Doppler ultrasound —a test that uses sound waves to examine the blood vessels
- Blood tests
- Tests to check the level of oxygen in the blood
- Kidney function tests
- Tests to evaluate the ability to swallow
Immediate treatment is needed to potentially:
- Dissolve a clot causing an ischemic stroke
- Stop the bleeding during a hemorrhagic stroke
In some cases, oxygen therapy is needed.
Medicines may be given right away for an ischemic stroke to:
- Dissolve clots and prevent new ones from forming
- Thin blood
- Control blood pressure
- Reduce brain swelling
- Treat an irregular heart rate
Cholesterol medicines called statins may also be given.
For a hemorrhagic stroke, the doctor may give medicines to:
- Work against any blood-thinning drugs that you may regularly take
- Reduce how your brain reacts to bleeding
- Control blood pressure
- Prevent seizures
For an ischemic stroke, procedures may be done to:
- Reroute blood supply around a blocked artery
- Remove the clot or deliver clot-dissolving medicine (embolectomy)
- Remove fatty deposits from a carotid artery (major arteries in the neck that lead to the brain) ( carotid artery endarterectomy )
- Widen carotid artery and add a mesh tube to keep it open ( angioplasty and stenting )
For a hemorrhagic stroke, the doctor may:
- Remove a piece of the skull ( craniotomy ) to relieve pressure on the brain and remove blood clot
- Place a clip on or a tiny coil in the aneurysm to stop it from bleeding
A rehabilitation program focuses on:
- Physical therapy—to regain as much movement as possible
- Occupational therapy—to assist in everyday tasks and self-care
- Speech therapy—to improve swallowing and speech challenges
- Psychological therapy—to help adjust to life after the stroke
To help reduce your chance of having a stroke, take the following steps:
- Exercise regularly .
- Eat a healthy diet that includes fruit, vegetables, whole grains, and fish.
- Maintain a healthy weight.
- If you drink alcohol , drink only in moderation (1-2 drinks per day).
- If you smoke, quit .
- If you have a chronic condition, like high blood pressure or diabetes, get proper treatment.
- If recommended by your doctor, take a low-dose aspirin every day.
- If you are at risk for having a stroke, talk to your doctor about taking statin medicines .
- Reviewer: Rimas Lukas, MD
- Review Date: 06/2012 -
- Update Date: 00/61/2012 - | <urn:uuid:6f093826-dc99-4b9c-9f16-033ec6f1ac6f> | CC-MAIN-2013-20 | http://doctors-hospital.net/your-health/?/645168/Right-hemisphere-stroke | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.885188 | 1,443 | 3.671875 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
Nuclear meltdown is an informal term for a severe nuclear reactor accident that results in core damage from overheating. The term is not officially defined by the International Atomic Energy Agency or by the U.S. Nuclear Regulatory Commission. However, it has been defined to mean the accidental melting of the core of a nuclear reactor, and in common usage it refers to the complete or partial collapse of the core. "Core melt accident" and "partial core melt" are the analogous technical terms for a meltdown.
A core melt accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate, or it may be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Alternatively, in a reactor plant such as the RBMK-1000, an external fire may endanger the core, leading to a meltdown.
Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as cesium-137, krypton-88, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel-coolant interactions, hydrogen explosions, or water hammer, any of which could destroy parts of the containment. A meltdown is considered very serious because of the potential, however remote, that radioactive materials could breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby.
Nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat.
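The magnitude of decay heat can be illustrated with the Way-Wigner approximation, a standard rule-of-thumb fit for fission-product decay power. The sketch below is a rough illustration only, not a design calculation: the reactor power and operating history are invented example values, and the coefficient is an empirical fit (values between about 0.062 and 0.066 appear in the literature) that is only meaningful from roughly seconds to months after shutdown.

```python
# Rough decay-heat estimate using the Way-Wigner approximation.
# Illustrative only: the 0.066 coefficient is an empirical fit, and the
# formula is only meaningful from ~10 s to ~100 days after shutdown.

def decay_heat_fraction(t_after_shutdown_s, t_operating_s):
    """Fraction of full thermal power still produced at a given time
    after shutdown, following t_operating_s seconds at full power."""
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + t_operating_s) ** -0.2)

P0 = 3000e6           # example: a 3,000 MW(thermal) reactor
T = 365 * 86400       # assume one year of continuous full-power operation

for t in (10, 3600, 86400, 30 * 86400):
    frac = decay_heat_fraction(t, T)
    print(f"{t:>8} s after shutdown: {100 * frac:5.2f} % "
          f"(~{P0 * frac / 1e6:6.1f} MW)")
```

Under these assumptions the core still produces roughly 30 MW an hour after shutdown and on the order of 10 MW a day later, which is why cooling must continue long after the chain reaction has stopped.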
A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), an uncontrolled power excursion or, in reactors without a pressure vessel, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely.
The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a 1.2-to-2.4-metre (3.9 to 7.9 ft) thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes.
- In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or by the subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. In a loss-of-forced-circulation accident, a gas cooled reactor's circulators (generally motor or steam driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized.
- In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases this may reduce the heat transfer efficiency (when using an inert gas as a coolant) and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, due to localized heating of the "steam bubble" due to decay heat, the pressure required to collapse the "steam bubble" may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the Emergency Core Cooling System may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; however, as long as at least one gas circulator is available, the fuel will be kept cool.
- In an uncontrolled power excursion accident, a sudden power spike in the reactor exceeds reactor design specifications due to a sudden increase in reactor reactivity. An uncontrolled power excursion occurs due to significantly altering a parameter that affects the neutron multiplication rate of a chain reaction (examples include ejecting a control rod or significantly altering the nuclear characteristics of the moderator, such as by rapid cooling). In extreme cases the reactor may proceed to a condition known as prompt critical. This is especially a problem in reactors that have a positive void coefficient of reactivity, a positive temperature coefficient, are overmoderated, or can trap excess quantities of deleterious fission products within their fuel or moderators. Many of these characteristics are present in the RBMK design, and the Chernobyl disaster was caused by such deficiencies as well as by severe operator negligence. Western light water reactors are not subject to very large uncontrolled power excursions because loss of coolant decreases, rather than increases, core reactivity (a negative void coefficient of reactivity); "transients," as the minor power fluctuations within Western light water reactors are called, are limited to momentary increases in reactivity that will rapidly decrease with time (approximately 200% - 250% of maximum neutronic power for a few seconds in the event of a complete rapid shutdown failure combined with a transient). (A toy model contrasting positive and negative feedback coefficients is sketched after this list.)
- Core-based fires endanger the core and can cause the fuel assemblies to melt. A fire may be caused by air entering a graphite moderated reactor, or a liquid-sodium cooled reactor. Graphite is also subject to accumulation of Wigner energy, which can overheat the graphite (as happened at the Windscale fire). Light water reactors do not have flammable cores or moderators and are not subject to core fires. Gas-cooled civilian reactors, such as the Magnox, UNGG, and AGCR type reactors, keep their cores blanketed with non reactive carbon dioxide gas, which cannot support a fire. Modern gas-cooled civilian reactors use helium, which cannot burn, and have fuel that can withstand high temperatures without melting (such as the High Temperature Gas Cooled Reactor and the Pebble Bed Modular Reactor).
- Byzantine faults and cascading failures within instrumentation and control systems may cause severe problems in reactor operation, potentially leading to core damage if not mitigated. For example, the Browns Ferry fire damaged control cables and required the plant operators to manually activate cooling systems. The Three Mile Island accident was caused by a stuck-open pilot-operated pressure relief valve combined with a deceptive water level gauge that misled reactor operators, which resulted in core damage.
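The practical difference between negative and positive feedback, noted in the power-excursion item above, can be seen in a toy point-kinetics model. The sketch below uses a single delayed-neutron group, crude power-proportional feedback, simple Euler integration, and invented parameter values; it is a qualitative illustration, not a model of any real reactor.

```python
# Toy one-delayed-group point-kinetics model showing why the sign of a
# power/void feedback coefficient matters. All parameters are invented
# illustrative values.

beta = 0.0065    # delayed-neutron fraction
lam = 0.08       # effective precursor decay constant (1/s)
Lam = 1e-3       # mean neutron generation time (s), thermal reactor

def simulate(alpha, rho_step=0.002, t_end=30.0, dt=1e-4):
    """Relative power following a small reactivity step, with feedback
    reactivity alpha * (n - 1) proportional to the power rise."""
    n = 1.0                    # relative power
    c = beta / (lam * Lam)     # equilibrium delayed-precursor level
    t = 0.0
    while t < t_end and n < 1e3:          # stop once power runs away
        rho = rho_step + alpha * (n - 1.0)
        dn = ((rho - beta) / Lam) * n + lam * c
        dc = (beta / Lam) * n - lam * c
        n, c, t = n + dn * dt, c + dc * dt, t + dt
    return n, t

n_neg, t_neg = simulate(alpha=-0.001)   # negative feedback: self-limiting
n_pos, t_pos = simulate(alpha=+0.001)   # positive feedback: runaway

print(f"negative coefficient: power levels off near {n_neg:.1f}x by t = {t_neg:.0f} s")
print(f"positive coefficient: power passes {n_pos:.0f}x at t = {t_pos:.1f} s")
```

With a negative coefficient the same reactivity step is absorbed and power settles at a new, slightly higher level; with a positive coefficient the rise feeds on itself until the model reaches prompt criticality and diverges.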
Light water reactors (LWRs)
Before the core of a light water nuclear reactor can be damaged, two precursor events must have already occurred:
- A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up.
- Failure of the Emergency Core Cooling System (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them.
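The value of redundant ECCS divisions can be shown with elementary reliability arithmetic. The per-division failure probability in the sketch below is an invented number, and the calculation assumes the divisions fail independently; real probabilistic risk assessments must also model common-cause failures (shared power supplies, flooding, seismic events), which is why physical separation and design diversity matter as much as the division count.

```python
# Toy reliability arithmetic for N independent ECCS divisions.
# The per-division demand-failure probability is an invented
# illustrative value, and common-cause failures are ignored.

p_division_fails = 0.01   # assumed probability a division fails on demand

for n_divisions in (1, 2, 4):
    p_all_fail = p_division_fails ** n_divisions
    print(f"{n_divisions} division(s): P(all fail) = {p_all_fail:.0e}")
```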
The Three Mile Island accident was a compounded group of emergencies that led to core damage. What led to this was an erroneous decision by operators to shut down the ECCS during an emergency condition due to gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours after the fact, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both exposure and core damage. During the Fukushima incident the emergency cooling system had also been manually shut down several minutes after it started.
If such a limiting fault were to occur, together with a complete failure of all ECCS divisions, both Kuan et al. and Haskin et al. describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"):
- Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays the core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized. The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup."
- Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)." (These rates, together with the temperature thresholds quoted below, are sanity-checked in the sketch after this list.)
- Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach 1,100 K (1,520 °F). At this temperature the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression."
- Rapid oxidation – "The next stage of core damage, beginning at approximately 1,500 K (2,240 °F), is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above 1,500 K (2,240 °F), the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam."
- Debris bed formation – "When the temperature in the core reaches about 1,700 K (2,600 °F), molten control materials [1,6] will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above 1,700 K (2,600 °F), the core temperature may escalate in a few minutes to the melting point of zircaloy [2,150 K (3,410 °F)] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 [1,7] would flow downward and freeze in the cooler, lower region of the core. Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed."
- (Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. Release of molten core materials into water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum."
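The timings and the hydrogen hazard described in stages 2-4 can be sanity-checked with simple arithmetic. In the sketch below, the starting temperature and the core's zirconium inventory are illustrative assumptions; the heat-up rates and temperature thresholds are those quoted above, and the zirconium-steam reaction (Zr + 2 H2O -> ZrO2 + 2 H2) releases roughly 6.5 MJ of heat per kilogram of zirconium oxidized.

```python
# Back-of-the-envelope checks on the core-damage timeline above.
# Starting temperature, heat-up rates, and zirconium inventory are
# illustrative assumptions, not data for any particular plant.

T_start = 550.0            # K, roughly normal PWR coolant temperature
for rate in (0.3, 1.0):    # K/s, heat-up rates quoted in stage 2
    t_balloon = (1100 - T_start) / rate    # stage 3 threshold (1,100 K)
    t_oxidize = (1500 - T_start) / rate    # stage 4 threshold (1,500 K)
    print(f"at {rate} K/s: ballooning after ~{t_balloon / 60:3.0f} min, "
          f"rapid oxidation after ~{t_oxidize / 60:3.0f} min")

# Zirconium-steam reaction (stage 4): Zr + 2 H2O -> ZrO2 + 2 H2,
# releasing roughly 6.5 MJ of heat per kg of zirconium oxidized.
m_zr = 20_000.0            # kg, assumed cladding inventory of a large LWR
h2_per_kg_zr = 2 * 2.016e-3 / 91.22e-3   # kg H2 per kg Zr (stoichiometry)
print(f"full oxidation of {m_zr:,.0f} kg Zr would release "
      f"~{m_zr * h2_per_kg_zr:,.0f} kg of hydrogen and "
      f"~{m_zr * 6.5e6 / 1e9:.0f} GJ of chemical heat")
```

The roughly 30 minutes to ballooning at the slower heat-up rate matches the figure quoted in stage 3, and the hundreds of kilograms of hydrogen from cladding oxidation are the source of the explosion hazard discussed later in this article.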
At the point at which the corium relocates to the lower plenum, Haskin et al. relate that the possibility exists for an incident called a fuel-coolant interaction (FCI) to substantially stress or breach the primary pressure boundary when the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"). This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of 2,200 to 3,200 K (3,500 to 5,300 °F), its fall into liquid water at 550 to 600 K (530 to 620 °F) may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin et al. state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place.
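An order-of-magnitude estimate shows why this fuel-coolant interaction is taken seriously. Every value in the sketch below (corium mass, specific heat, and the fraction of heat transferred quickly enough to flash water to steam) is an illustrative assumption.

```python
# Order-of-magnitude energy estimate for a fuel-coolant interaction.
# All inputs are illustrative assumptions.

m_corium = 1000.0        # kg of relocating corium (assumed)
cp_corium = 500.0        # J/(kg K), rough average for an oxide/metal melt
dT = 3000.0 - 600.0      # K, cooling from melt to water temperature

q_total = m_corium * cp_corium * dT    # sensible heat available (J)
h_vap = 2.26e6                         # J/kg, latent heat of vaporization
eta = 0.1                              # assumed fraction transferred fast
steam = eta * q_total / h_vap          # water flashed to steam (kg)

print(f"sensible heat: {q_total / 1e9:.1f} GJ per tonne of corium")
print(f"~{steam:.0f} kg of water flashed to steam if {eta:.0%} of that "
      f"heat is transferred on an explosion timescale")
```

Even if only a tenth of the stored heat is transferred rapidly, tens of kilograms of water per tonne of corium flash to steam, which is the source of the postulated pressure pulse.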
Breach of the Primary Pressure Boundary
There are several possibilities as to how the primary pressure boundary could be breached by corium.
- Steam Explosion
As previously described, FCI could lead to an overpressure event leading to RPV failure, and thus, primary pressure boundary failure. Haskin et al. report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha-mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed.
- Pressurized Melt Ejection (PME)
It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases. This mode of corium ejection may lead to direct containment heating (DCH).
Severe Accident Ex-Vessel Interactions and Challenges to Containment
Haskin et al. identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents.
- Dynamic pressure (shockwaves)
- Internal missiles
- External missiles (not applicable to core melt accidents)
Standard failure modes
If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur.
In modern Russian plants, there is a "core catching device" in the bottom of the containment building: the melted core is supposed to hit a thick layer of "sacrificial metal", which would melt, dilute the core, and increase the heat conductivity; finally, the diluted core can be cooled by water circulating in the floor. However, this device has never undergone full-scale testing.
In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners also are installed within the containment to prevent gas explosions.
In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One positive effect of the corium falling into water is that it is cooled and returns to a solid state.
Extensive water spray systems within the containment along with the ECCS, when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature.
These procedures are intended to prevent release of radiation. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release.
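For scale, the 2 mSv figure can be bracketed with typical round-number reference doses (common rules of thumb, not figures from this article): a chest X-ray is on the order of 0.1 mSv, an abdominal CT scan on the order of 7 mSv, and average natural background roughly 3 mSv per year:

```latex
0.1\,\mathrm{mSv}_{\text{(chest X-ray)}} \;<\; 2\,\mathrm{mSv}_{\text{(TMI boundary)}} \;<\; 7\,\mathrm{mSv}_{\text{(CT scan)}},
\qquad
\frac{2\,\mathrm{mSv}}{3\,\mathrm{mSv/yr}} \approx 8\ \text{months of background}
```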
However, in the Fukushima incident this design at least partially failed: large amounts of highly radioactive water were produced, and nuclear fuel may have melted through the base of the pressure vessels.
Cooling will take quite a while, until the natural decay heat of the corium falls to the point where natural convection and conduction of heat to the containment walls, plus re-radiation of heat from the containment, allow the water spray systems to be shut down and the reactor to be put into safe storage. The containment can then be sealed, with only extremely limited offsite releases of radioactivity, and the pressure within the containment relieved. After a number of years for fission products to decay - probably around a decade - the containment can be reopened for decontamination and demolition.
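The years-long timescale follows from how slowly decay heat falls off. Below is a minimal Python sketch using the classic Way-Wigner power-law fit; the reactor power, the operating history, and the fit itself (which becomes crude at very long times) are assumptions for illustration only.

```python
# Decay heat after shutdown via the Way-Wigner approximation:
#   P(t)/P0 ~ 0.0622 * (t^-0.2 - (t + T)^-0.2),  t, T in seconds.
# This is a textbook rule of thumb, increasingly crude at long times;
# the core power and operating history below are assumptions.

def decay_heat_fraction(t: float, t_operated: float) -> float:
    """Fraction of full thermal power emitted t seconds after shutdown."""
    return 0.0622 * (t ** -0.2 - (t + t_operated) ** -0.2)

P0 = 3.0e9                  # W thermal: an assumed ~3 GWt core
T_OP = 2 * 365 * 86_400     # assumed two years at power before shutdown

for label, t in [("1 hour", 3_600), ("1 day", 86_400),
                 ("1 month", 30 * 86_400), ("1 year", 365 * 86_400),
                 ("10 years", 3_650 * 86_400)]:
    f = decay_heat_fraction(t, T_OP)
    print(f"{label:>8}: {100 * f:7.4f} % of full power = {P0 * f / 1e6:9.3f} MW")
```

On these assumptions, decay heat is roughly 1% of full power an hour after shutdown, still in the tens of megawatts after a day, and only falls to the level manageable by passive conduction and re-radiation after years.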
Unexpected failure modes
Another scenario sees a buildup of hydrogen, which may lead to a detonation event, as happened to three reactors during the Fukushima incident. Catalytic hydrogen recombiners located within containment are designed to prevent this from occurring; however, prior to the installation of these recombiners in the 1980s, the Three Mile Island containment suffered a massive hydrogen explosion in the 1979 accident. The containment withstood the pressure and no radioactivity was released. However, at Fukushima the recombiners did not work due to the absence of power, and hydrogen detonations breached the containment.
Speculative failure modes
One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment.
Another theory, called an 'alpha mode' failure by the 1975 Rasmussen (WASH-1400) study, asserted that steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report has been superseded by newer studies, and the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study - see the Disclaimer in NUREG-1150.)
It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the Loss-of-Fluid-Test Reactor described in Test Area North's fact sheet). The Three Mile Island accident provided some real-life experience, with an actual molten core within an actual structure; the molten corium failed to melt through the Reactor Pressure Vessel after over six hours of exposure, due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents. Some believe a molten reactor core could actually penetrate the reactor pressure vessel and containment structure and burn downwards into the earth beneath, to the level of the groundwater.
Other reactor types
Other types of reactors have different capabilities and safety profiles than the LWR does. Advanced varieties of several of these reactors have the potential to be inherently safe.
CANDU reactors
CANDU reactors, Canadian-invented deuterium-uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank. These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield tank heat sink). Other failure modes aside from fuel melt will probably occur in a CANDU rather than a meltdown, such as deformation of the calandria into a non-critical configuration. All CANDU reactors are located within standard Western containments as well.
Gas-cooled reactors
One type of Western reactor, known as the advanced gas-cooled reactor (AGR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring.
Other types of highly advanced gas-cooled reactors, generally known as high-temperature gas-cooled reactors (HTGRs) - such as the Japanese High Temperature Test Reactor and the United States' Very High Temperature Reactor - are inherently safe, meaning that meltdown or other forms of core damage are physically impossible due to the structure of the core. The core consists of hexagonal prismatic blocks of silicon-carbide-reinforced graphite infused with TRISO or QUADRISO pellets of uranium, thorium, or mixed oxide, held in a helium-filled steel pressure vessel buried underground within a concrete containment. Though this type of reactor is not susceptible to meltdown, additional heat-removal capability is provided by using regular atmospheric airflow as a backup: the air passes through a heat exchanger and rises into the atmosphere by convection, achieving full residual heat removal. The VHTR is scheduled to be prototyped and tested at Idaho National Laboratory within the next decade (as of 2009) as the design selected for the Next Generation Nuclear Plant by the US Department of Energy. This reactor will use a gas as a coolant, which can then be used for process heat (such as in hydrogen production) or for driving gas turbines and generating electricity.
A similar highly advanced gas-cooled reactor, originally designed by West Germany (the AVR reactor) and now developed by South Africa, is known as the Pebble Bed Modular Reactor. It is an inherently safe design, meaning that core damage is physically impossible, due to the design of the fuel: spherical graphite "pebbles" arranged in a bed within a metal RPV, each filled with TRISO (or QUADRISO) pellets of uranium, thorium, or mixed oxide. A prototype of a very similar type of reactor, the HTR-10, has been built by the Chinese and has worked beyond researchers' expectations, leading the Chinese to announce plans to build a pair of follow-on, full-scale 250 MWe, inherently safe, power-production reactors based on the same concept. (See Nuclear power in the People's Republic of China for more information.)
Experimental or conceptual designs
Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety.
The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built.
Power reactors, including the Deployable Electrical Energy Reactor, a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions, and the TRIGA Power System, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the TRIGA due to the uranium zirconium hydride fuel used.
The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times.
The liquid fluoride thorium reactor is designed to have its core naturally in a molten state, as a eutectic mixture of thorium and fluoride salts. A molten core is thus the normal and safe operating state of this reactor type. In the event the core overheats, a freeze plug melts and the molten salt core drains into tanks, where it cools in a non-critical configuration. Since the core is liquid, and already melted, it cannot be damaged.
Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe.
Soviet Union-designed reactors
Soviet-designed RBMKs, found only in Russia and other CIS states and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending toward dangerous power fluctuations), and have ECCS systems considered grossly inadequate by Western safety standards. The reactor involved in the Chernobyl disaster was an RBMK.
RBMK ECCS systems have only one division and less than sufficient redundancy within that division. Though the large core size of the RBMK makes it less energy-dense than the Western LWR core, it also makes the core harder to cool. The RBMK is moderated by graphite. In the presence of both steam and oxygen at high temperatures, graphite forms synthesis gas, and via the water-gas shift reaction the resultant hydrogen burns explosively. If oxygen contacts hot graphite, it will burn. Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not as a moderator; if the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity, and it is the root of the RBMK's tendency toward dangerous power fluctuations.
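The graphite chemistry mentioned above can be written out explicitly. These are the standard textbook reactions, not equations given in the source:

```latex
\mathrm{C} + \mathrm{H_2O} \longrightarrow \mathrm{CO} + \mathrm{H_2}
\quad \text{(steam-graphite reaction: synthesis gas)}

\mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2}
\quad \text{(water-gas shift: further hydrogen)}

\mathrm{C} + \mathrm{O_2} \longrightarrow \mathrm{CO_2}
\quad \text{(graphite burning in air)}
```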
Control rods can become stuck if the reactor suddenly heats up while they are moving. Xenon-135, a neutron-absorbing fission product, tends to build up in the core and burn off unpredictably during low-power operation. This can lead to inaccurate neutronic and thermal power ratings.
The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield: a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds; Western reactors take 1 to 2.5 seconds.
Western aid has been given to provide certain real-time safety monitoring capacities to the operating staff. Whether this extends to automatic initiation of emergency cooling is not known. Training in safety assessment has been provided by Western sources, and Russian reactor designs have evolved in response to the weaknesses that were in the RBMK. However, numerous RBMKs still operate.
It might be possible to stop a loss-of-coolant event before core damage occurs, but any core damage incident would probably ensure a massive release of radioactive materials. Further, dangerous power fluctuations are natural to the design.
Upon joining the EU, Lithuania was required to shut down the two RBMKs it has at Ignalina NPP, as such reactors are totally incompatible with the nuclear safety standards of Europe. It will be replacing them with some safer form of reactor.
The MKER is a modern Russian-engineered channel type reactor that is a distant descendant of the RBMK. It approaches the concept from a different and superior direction, optimizing the benefits, and fixing the flaws of the original RBMK design.
Several features of the MKER's design make it a credible and interesting option. One benefit: in the event of a challenge to cooling within the core, such as a pipe break of a channel, the channel can be isolated from the plenums supplying water, decreasing the potential for common-mode failures.
The lower power density of the core greatly enhances thermal regulation. Graphite moderation enhances neutronic characteristics beyond light water ranges. The passive emergency cooling system provides a high level of protection by using natural phenomena to cool the core rather than depending on motor-driven pumps. The containment structure is modern and designed to withstand a very high level of punishment.
Refueling is accomplished while online, ensuring that outages are for maintenance only and are very few and far between. 97-99% uptime is a definite possibility. Lower enrichment fuels can be used, and high burnup can be achieved due to the moderator design. Neutronics characteristics have been revamped to optimize for purely civilian fuel fertilization and recycling.
Due to the enhanced quality control of parts, advanced computer controls, comprehensive passive emergency core cooling system, and very strong containment structure, along with a negative void coefficient and a fast acting rapid shutdown system, the MKER's safety can generally be regarded as being in the range of the Western Generation III reactors, and the unique benefits of the design may enhance its competitiveness in countries considering full fuel-cycle options for nuclear development.
The VVER is a pressurized light water reactor that is far more stable and safe than the RBMK. This is because it uses light water as a moderator (rather than graphite), has well understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems.
However, even with these positive developments, certain older VVER models raise a high level of concern, especially the VVER-440 V230.
The VVER-440 V230 has no containment building, only a structure capable of confining steam around the RPV. This is a volume of thin steel, perhaps an inch or two in thickness, grossly insufficient by Western standards.
- Has no ECCS. Can survive at most one 4 inch pipe break (there are many pipes greater than 4 inches within the design).
- Has six steam generator loops, adding unnecessary complexity.
- However, apparently steam generator loops can be isolated, in the event that a break occurs in one of these loops. The plant can remain operating with one isolated loop - a feature found in few Western reactors.
The interior of the pressure vessel is plain alloy steel, exposed to water and therefore prone to rust. One point of distinction in which the VVER surpasses the West is the reactor water cleanup facility - built, no doubt, to deal with the enormous volume of rust within the primary coolant loop, the product of the slow corrosion of the RPV. This model is viewed as having inadequate process control systems.
Bulgaria had a number of VVER-440 V230 models, but they opted to shut them down upon joining the EU rather than backfit them, and are instead building new VVER-1000 models. Many non-EU states maintain V230 models, including Russia and the CIS. Many of these states - rather than abandoning the reactors entirely - have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced.
The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and the ECCS systems, though not completely to Western standards, are reasonably comprehensive. Many VVER-440 V213 models possessed by former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention - but not for accident containment, which is of a modest level compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety.
During the 1970s, Finland built two VVER-440 V213 models to Western standards with a large-volume full containment and world-class instrumentation, control standards and an ECCS with multiply redundant and diversified components. In addition, passive safety features such as 900-tonne ice condensers have been installed, making these two units safety-wise the most advanced VVER-440's in the world.
The VVER-1000 type has a definitely adequate Western-style containment, the ECCS is sufficient by Western standards, and instrumentation and control has been markedly improved to Western 1970s-era levels.
Chernobyl disaster
In the Chernobyl disaster the fuel became non-critical when it melted and flowed away from the graphite moderator - however, it took considerable time to cool. The molten core of Chernobyl (that part that did not vaporize in the fire) flowed in a channel created by the structure of its reactor building and froze in place before a core-concrete interaction could happen. In the basement of the reactor at Chernobyl, a large "elephant's foot" of congealed core material was found. Time delay, and prevention of direct emission to the atmosphere, would have reduced the radiological release. If the basement of the reactor building had been penetrated, the groundwater would be severely contaminated, and its flow could carry the contamination far afield.
The Chernobyl reactor was an RBMK type. The disaster was caused by a power excursion that led to a meltdown and extensive offsite consequences. Operator error and a faulty shutdown system led to a sudden, massive spike in the neutron multiplication rate, a sudden decrease in the neutron period, and a consequent increase in neutron population; core heat flux thus increased very rapidly to unsafe levels. This caused the water coolant to flash to steam, causing a sudden overpressure within the reactor pressure vessel (RPV), leading to granulation of the upper portion of the core and the ejection of the upper plenum of the pressure vessel, along with core debris, from the reactor building in a widely dispersed pattern. The lower portion of the reactor remained somewhat intact. The graphite neutron moderator was exposed to oxygen-containing air; heat from the power excursion, together with the residual heat flux from the remaining fuel rods left without coolant, induced oxidation in the moderator, which in turn evolved more heat and contributed to the melting of the fuel rods and the outgassing of the fission products contained therein. The liquefied remains of the fuel rods flowed through a drainage pipe into the basement of the reactor building and solidified in a mass later dubbed corium, though the primary threat to public safety was the dispersed core ejecta and the gases evolved from the oxidation of the moderator.
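The "sudden decrease in the neutron period" can be made concrete: on a constant positive period τ, power grows as P(t) = P0·e^(t/τ). The Python sketch below uses made-up numbers and ignores the feedback and disassembly that terminate a real excursion; it is not a model of Chernobyl.

```python
import math

# Exponential power rise on a constant reactor period tau:
#   P(t) = P0 * exp(t / tau)
# Illustrative numbers only; real excursions are cut off by feedback
# effects and, in the worst case, by disassembly of the core.

def power(p0: float, tau: float, t: float) -> float:
    return p0 * math.exp(t / tau)

p0 = 200e6  # W: an assumed low starting power
for tau in (10.0, 1.0, 0.1):          # shorter period = faster excursion
    p = power(p0, tau, 4.0)           # power 4 seconds later
    print(f"period {tau:4.1f} s -> after 4 s: {p:.3e} W")
```

Shrinking the period from 10 s to 0.1 s turns a 50% rise over four seconds into a runaway of many orders of magnitude, which is why a rapid drop in the period is catastrophic.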
Although the Chernobyl accident had dire off-site effects, much of the radioactivity remained within the building. If the building were to fail and dust were released into the environment, a given mass of fission products that had aged for twenty years would have a smaller effect than the same mass of fission products (in the same chemical and physical form) that had undergone only a short cooling time (such as one hour) after the nuclear reaction was terminated. However, if a nuclear reaction were to occur again within the Chernobyl plant (for instance if rainwater were to collect and act as a moderator), the new fission products would have a higher specific activity and thus pose a greater threat if released. To prevent a post-accident nuclear reaction, steps have been taken, such as adding neutron poisons to key parts of the basement.
The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely, and to contain one should it occur.
In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur) while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radiation release or danger to the public.
In practice, however, a nuclear meltdown is often part of a larger chain of disasters (although there have been so few meltdowns in the history of nuclear power that there is not a large pool of statistical information from which to draw a credible conclusion as to what "often" happens in such circumstances). For example, in the Chernobyl accident, by the time the core melted, there had already been a large steam explosion and graphite fire and major release of radioactive contamination (as with almost all Soviet reactors, there was no containment structure at Chernobyl). Also, before a possible meltdown occurs, pressure can already be rising in the reactor, and to prevent a meltdown by restoring the cooling of the core, operators are allowed to reduce the pressure in the reactor by releasing (radioactive) steam into the environment. This enables them to inject additional cooling water into the reactor again.
Reactor design
Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive nuclear safety features that may be less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature and low-pressure water systems surrounding the fuel (i.e. moderator and shield tank) that act as back-up heat sinks and preclude meltdowns and core-breaching scenarios.
Fast breeder reactors are more susceptible to meltdown than other reactor types, due to the larger quantity of fissile material and the higher neutron flux inside the reactor core, which makes it more difficult to control the reaction.
Accidental fires are widely acknowledged to be risk factors that can contribute to a nuclear meltdown.
United States
There have been at least eight meltdowns in the history of the United States. All are widely called "partial meltdowns."
- BORAX-I was a test reactor designed to explore criticality excursions and observe whether a reactor would self-limit. In the final test, it was deliberately destroyed; the test revealed that the reactor reached much higher temperatures than were predicted at the time.
- The reactor at EBR-I suffered a partial meltdown during a coolant flow test on November 29, 1955.
- The Sodium Reactor Experiment in Santa Susana Field Laboratory was an experimental nuclear reactor which operated from 1957 to 1964 and was the first commercial power plant in the world to experience a core meltdown in July 1959.
- Stationary Low-Power Reactor Number One (SL-1) was a United States Army experimental nuclear power reactor which underwent a criticality excursion, a steam explosion, and a meltdown on January 3, 1961, killing three operators.
- The SNAP8ER reactor at the Santa Susana Field Laboratory experienced damage to 80% of its fuel in an accident in 1964.
- The partial meltdown at the Fermi 1 experimental fast breeder reactor, in 1966, required the reactor to be repaired, though it never achieved full operation afterward.
- The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969.
- The Three Mile Island accident, in 1979, referred to in the press as a "partial core melt," led to the permanent shutdown of that reactor.
Soviet Union
In the most serious example, the Chernobyl disaster, design flaws and operator negligence led to a power excursion that subsequently caused a meltdown. According to a report released by the Chernobyl Forum (consisting of numerous United Nations agencies, including the International Atomic Energy Agency and the World Health Organization; the World Bank; and the Governments of Ukraine, Belarus, and Russia) the disaster killed twenty-eight people due to acute radiation syndrome, could possibly result in up to four thousand fatal cancers at an unknown time in the future and required the permanent evacuation of an exclusion zone around the reactor.
During the Fukushima I nuclear accidents, three of the power plant's six reactors reportedly suffered meltdowns. Most of the fuel in reactor No. 1 melted, and TEPCO believes the No. 2 and No. 3 reactors were similarly affected. On May 24, 2011, TEPCO reported that all three reactors had melted down.
Meltdown incidents
- There was a fatal core meltdown at SL-1, an experimental U.S. military reactor in Idaho.
Large-scale nuclear meltdowns at civilian nuclear power plants include:
- the Lucens reactor, Switzerland, in 1969.
- the Three Mile Island accident in Pennsylvania, U.S.A., in 1979.
- the Chernobyl disaster at Chernobyl Nuclear Power Plant, Ukraine, USSR, in 1986.
- the Fukushima I nuclear accidents following the earthquake and tsunami in Japan, March 2011.
Other core meltdowns have occurred at:
- NRX (military), Ontario, Canada, in 1952
- BORAX-I (experimental), Idaho, U.S.A., in 1954
- EBR-I (military), Idaho, U.S.A., in 1955
- Windscale (military), Sellafield, England, in 1957 (see Windscale fire)
- Sodium Reactor Experiment, (civilian), California, U.S.A., in 1959
- Fermi 1 (civilian), Michigan, U.S.A., in 1966
- Chapelcross nuclear power station (civilian), Scotland, in 1967
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1969
- A1 plant, (civilian) at Jaslovské Bohunice, Czechoslovakia, in 1977
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1980
China Syndrome
The China syndrome is a fictional nuclear reactor accident scenario, characterized by a severe meltdown of the core components of the reactor following a loss-of-coolant accident. The molten core then burns through the containment vessel and the housing building, then notionally through the crust and body of the Earth until reaching the other side, which in the United States is jokingly said to be China.
The system design of the nuclear power plants built in the late 1960s raised questions of operational safety, and raised the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss of coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. In the event, Lapp’s hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979).
The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is "the other side of the world". Moreover, the hypothetical transit of a meltdown product to the other side of the Earth (i.e., China) ignores the fact that the Earth's gravity pulls all masses toward its center. Even assuming a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, momentum loss due to friction (fluid viscosity) would prevent it continuing to the other side.
See also
- Behavior of nuclear fuel during a reactor accident
- Chernobyl compared to other radioactivity releases
- Chernobyl disaster effects
- High-level radioactive waste management
- International Nuclear Event Scale
- List of civilian nuclear accidents
- Lists of nuclear disasters and radioactive incidents
- Nuclear fuel response to reactor accidents
- Nuclear safety
- Nuclear power
- Nuclear power debate
- Martin Fackler (June 1, 2011). "Report Finds Japan Underestimated Tsunami Danger". New York Times.
- International Atomic Energy Agency (IAEA) (2007). IAEA Safety Glossary: Terminology Used in Nuclear Safety and Radiation Protection (2007 ed.). Vienna, Austria: International Atomic Energy Agency. ISBN 92-0-100707-8. Retrieved 2009-08-17.
- United States Nuclear Regulatory Commission (NRC) (2009-09-14). "Glossary". Website. Rockville, Maryland, USA: Federal Government of the United States. pp. See Entries for Letter M and Entries for Letter N. Retrieved 2009-10-03.
- Reactor safety study: an assessment of accident risks in U.S. commercial nuclear power plants, Volume 1
- Hewitt, Geoffrey Frederick; Collier, John Gordon (2000). "4.6.1 Design Basis Accident for the AGR: Depressurization Fault". Introduction to nuclear power (in Technical English). London, UK: Taylor & Francis. p. 133. ISBN 978-1-56032-454-6. Retrieved 2010-06-05.
- "Earthquake Report No. 91". JAIF. May 25, 2011. Retrieved May 25, 2011.
- Kuan, P.; Hanson, D.J.; Odar, F. (1991). Managing water addition to a degraded core. Retrieved 2010-11-22.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. p. 3.1–5. Retrieved 2010-11-23.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. pp. 3.5–1 to 3.5–4. Retrieved 2010-12-24.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. pp. 3.5–4 to 3.5–5. Retrieved 2010-12-24.
- ANS : Public Information : Resources : Special Topics : History at Three Mile Island : What Happened and What Didn't in the TMI-2 Accident
- Nuclear Industry in Russia Sells Safety, Taught by Chernobyl
- 'Melt-through' at Fukushima? / Govt. suggests situation worse than meltdown http://www.yomiuri.co.jp/dy/national/T110607005367.htm
- Test Area North
- Walker, J. Samuel (2004). Three Mile Island: A Nuclear Crisis in Historical Perspective (Berkeley: University of California Press), p. 11.
- Lapp, Ralph E. "Thoughts on nuclear plumbing." The New York Times, 12 December 1971, pg. E11.
- "China Syndrome". Merriam-Webster. Retrieved December 11, 2012.
- Presenter: Martha Raddatz (15 March 2011). "ABC World News". ABC.
- Allen, P.J.; J.Q. Howieson, H.S. Shapiro, J.T. Rogers, P. Mostert and R.W. van Otterloo (April–June 1990). "Summary of CANDU 6 Probabilistic Safety Assessment Study Results". Nuclear Safety 31 (2): 202–214.
- http://www.insc.anl.gov/neisb/neisb4/NEISB_1.1.html INL VVER Sourcebook
- Partial Fuel Meltdown Events
- ANL-W Reactor History: BORAX I
- Wald, Matthew L. (2011-03-11). "Japan Expands Evacuation Around Nuclear Plant". The New York Times.
- The Chernobyl Forum: 2003-2005 (2006-04). "Chernobyl’s Legacy: Health, Environmental and Socio-economic Impacts". International Atomic Energy Agency. p. 14. Retrieved 2011-01-26.
- The Chernobyl Forum: 2003-2005 (2006-04). "Chernobyl’s Legacy: Health, Environmental and Socio-Economic Impacts". International Atomic Energy Agency. p. 16. Retrieved 2011-01-26.
- Hiroko Tabuchi (May 24, 2011). "Company Believes 3 Reactors Melted Down in Japan". The New York Times. Retrieved 2011-05-25. | <urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6> | CC-MAIN-2013-20 | http://en.wikipedia.org/wiki/Nuclear_meltdown | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.934809 | 11,510 | 4.1875 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
By Serena Gordon
MONDAY, Nov. 16 (HealthDay News) -- Pediatric food allergies, which can sometimes be life-threatening, are increasing at a dramatic rate in the United States, new research shows.
But the study authors aren't sure if the rise in reports of food allergies reflects an increase in actual prevalence or if better awareness has led more people to seek treatment for their symptoms.
Whatever the cause, it's clear that the number of children with food allergies has gone up 18 percent and the number seeking treatment for food allergy at emergency departments or hospitals has tripled since 1993.
"People are more aware of food allergies today, and that could have something to do with it," said study author, Amy Branum, a health statistician for the U.S. Centers for Disease Control and Prevention. "But, when we looked at health-care surveys filled out by parents and those from the health-care sector, we saw the increase across the surveys so this may be more than just increased awareness."
Results of the study were published online Nov. 16 and will appear in the December print issue of Pediatrics.
Although many people think of allergies as more of a nuisance than a serious health issue, food allergy in particular can be very serious, even life-threatening. The most common foods that people are allergic to include peanuts, tree nuts, milk, eggs, soy, shellfish, fish and wheat, according to the Food Allergy & Anaphylaxis Network.
Symptoms often appear minutes after people eat a food that they're allergic to, but it can sometimes take several hours before a reaction begins, according to the network. Typical symptoms of a food allergy include a tingling sensation in the mouth, swelling of the tongue or throat, trouble breathing, hives, stomach cramping, vomiting or diarrhea.
In the current study, the researchers used information from four different national data sources to assess the current rate of food allergies in the United States. The surveys included information from parents and from health-care providers, according to Branum.
The researchers found that between 1997 and 2007, the incidence of food allergy went up by 18 percent. Parents of almost 4 percent of U.S. children reported a food or digestive allergy in their child, the study authors noted.
There was also an increase in the rates of parent-reported skin allergy (eczema) during the same time period. Approximately 8.9 percent of U.S. children had experienced skin allergy in 2007, compared with 7.9 percent in 1997.
Health-care providers, on the other hand, reported that the number of children being treated for food allergies had tripled, the study found. Data from health-care providers was from 1993 to 2006.
Data included testing for immunoglobulin E, or IgE, antibodies in the blood for various foodstuffs, which can indicate an allergy. The percentage of children who tested positive for IgE antibodies for peanut allergy was 9 percent; for egg allergy, 7 percent; milk, 12 percent; and shrimp, 5 percent, the study found.
Though IgE antibodies can indicate a potential food allergy, the test is often better at ruling out who does not have an allergy, Branum said. A positive test doesn't mean that someone definitely has a food allergy, but suggests that the potential is there.
The researchers also noted that Hispanic children had the lowest overall prevalence of food allergy but the greatest increases over time of parent-reported incidences of food allergy.
"People should be aware that food allergy may really be increasing," Branum said. "If small children have symptoms when they eat a particular food, have that child checked out, particularly if they have co-occurring conditions like asthma and eczema."
"Food allergies are real," said Dr. Jennifer Appleyard, chief of allergy and immunology at St. John Hospital and Medical Center in Detroit. "And it appears that the prevalence is rising."
This will present various challenges, she noted. One is that there's already a shortage of allergy specialists in many areas, Appleyard said. Another is that schools will have to gear up to take care of additional children with food allergy to ensure their safety during the school day and on field trips, she said.
Parents who suspect their child has a food allergy should first talk with the child's primary care physician about symptoms. The problem could be a food intolerance rather than an allergy, she said, but the child might need to be tested by an allergy specialist to get a definitive diagnosis.
The Food Allergy & Anaphylaxis Network has more on food allergies.
Copyright © 2011 HealthDay. All rights reserved. | <urn:uuid:4d96b209-69f3-4f25-ae7c-19c6c36e05c2> | CC-MAIN-2013-20 | http://health.usnews.com/health-news/family-health/allergy-and-asthma/articles/2009/11/16/child-food-allergies-on-the-rise-in-us_print.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.969983 | 962 | 2.71875 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Classification of Burns
What are the classifications of burns?
Burns are classified as first-, second-, or third-degree, depending on how deeply and severely they penetrate the skin's surface.
First-degree (superficial) burns
First-degree burns affect only the epidermis, or outer layer of skin. The burn site is red, painful, and dry, with no blisters. Mild sunburn is an example. Long-term tissue damage is rare and usually consists of an increase or decrease in the skin color.
Second-degree (partial thickness) burns
Second-degree burns involve the epidermis and part of the dermis layer of skin. The burn site appears red, blistered, and may be swollen and painful.
Third-degree (full thickness) burns
Third-degree burns destroy the epidermis and dermis. Third-degree burns may also damage the underlying bones, muscles, and tendons. The burn site appears white or charred. There is no sensation in the area since the nerve endings are destroyed. | <urn:uuid:d3e51a07-18ee-4328-b77c-1bb70f80bd53> | CC-MAIN-2013-20 | http://healthcare.utah.edu/healthlibrary/library/diseases/pediatric/doc.php?type=90&id=P09575 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.899866 | 219 | 3.90625 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
These energy saving tips will show you how to reduce your impact on both the environment and your energy bills.
1. Turn out the lights when you leave a room. This is probably the easiest thing you can do to save energy – just flip the switch on your way out.
2. Turn off electronic devices when you are not using them. This includes TVs, computers, DVD players, stereos, and any other electronic devices. If you're not using it, why leave it on? To take this one step further, you can also unplug devices like cell phone chargers when they are not in use. Even when they appear idle, plugged-in chargers still draw a small amount of power unless they are completely unplugged.
3. Invest in energy-efficient light bulbs, such as compact fluorescent bulbs. These are slowly replacing regular incandescent bulbs, and they use significantly less energy.
4. In the winter, wear extra layers around the house. Simply turning up the thermostat seems so easy, but putting on a sweater uses a lot less energy. Try to have an extra-comfy sweater or sweatshirt handy that you can throw on over anything when you get cold. Wearing long underwear, tights or leggings under your pants can also be helpful if it is particularly chilly.
5. During the winter, open shades during the day; during the summer, close them. Sunlight can have a powerful effect on the temperature of a room, especially if that room faces south and receives more sun. Use this to your advantage by keeping your living space warmer in the winter and cooler in the summer.
6. Block the bottoms of doors in winter to prevent heat from escaping. You can buy a door draft snake or make one yourself.
7. Hang clothes to dry rather than use a dryer. This is very easy to do and will even make your clothes last longer; using a dryer causes clothes to shrink and lose their shape. You can buy a rack to hang them on at places like Bed Bath and Beyond. Be careful about using hangers to dry clothes, as they may stretch some fabrics.
Hope these tips help you to save energy! | <urn:uuid:69590e66-f62a-4b25-a7d4-838b351c0879> | CC-MAIN-2013-20 | http://insidemix.com/energy-saving-tips/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.944393 | 451 | 2.53125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Hot Weather Gets Scientists' Attention
Originally published on Wed July 11, 2012 5:30 am
RENEE MONTAGNE, HOST:
Across America people are sweltering through extreme heat this year, continuing a long-term trend of rising temperatures. Inevitably, many are wondering if the scorching heat is due to global warming. Scientists are expected to dig into the data and grapple with that in the months to come. They've already taken a stab at a possible connection with last year's extreme weather events, like the blistering drought in Texas. NPR's Richard Harris reports.
RICHARD HARRIS, BYLINE: Weather researchers from around the world are now taking stock of what happened in 2011. It was not the hottest year on record, but it was still in the top 15. Jessica Blunden from the National Climatic Data Center says 2011 had its own memorable characteristics.
JESSICA BLUNDEN: People may very well remember this year as a year of extreme weather and climate.
HARRIS: There were devastating droughts in Africa, Mexico, and Texas. In Thailand, massive flooding kept people's houses underwater for two months.
BLUNDEN: Here in the United States, we had one of our busiest and most destructive seasons on record in 2011. There were seven different tornado and severe weather outbreaks that each caused more than a billion dollars in damages.
HARRIS: So what's going on here? Federal climate scientist, Tom Karl, said one major feature of the global weather last year was a La Nina event. That's a period of cooler Pacific Ocean temperatures and it has effects around the globe, primarily in producing floods in some parts of the world and droughts in others.
TOM KARL: By no means did it explain all of the activity in 2011, but it certainly influenced a considerable part of the climate and weather.
HARRIS: Karl and Blunden are part of a huge multinational effort to sum up last year's weather and say what it all means. They provided an update by conference call. Clearly, long-term temperature trends are climbing as you'd expect as a result of global warming. Tom Peterson from the Federal Climate Data Center says the effort now is to look more closely at individual events.
TOM PETERSON: You've probably all heard the term you can't attribute any single event to global warming, and while that's true, the focus of the science now is evolving and moving on to how the probability of events is changing.
HARRIS: And there researchers report some progress. For example, last year's record-breaking drought in Texas wasn't simply the result of La Nina. Peter Stott from the British Meteorology Office says today's much warmer planet played a huge role as well, according to the study the group released on Tuesday.
PETER STOTT: The result that they find is really quite striking, in that they find that such a heat wave is now about 20 times more likely during a La Nina year than it was during the 1960s.
HARRIS: A second study found that an extraordinary warm spell in London last November was 60 times more likely to occur on our warming planet than it would have been over the last 350 years. But that's not to say everything is related to climate change. There's no clear link between the spate of tornadoes and global warming, and the devastating floods in Thailand last year turned out to be the result of poor land-use practices.
Even so, Kate Willett of the British Weather Service says there is a global trend consistent with what scientists expect climate change to bring.
KATE WILLETT: So, in simple terms, we can say that the dry regions are getting drier and the wet regions are getting wetter.
HARRIS: This year's extreme events are different from last year's, but they all fit into a coherent picture of global change. Richard Harris, NPR News. Transcript provided by NPR, Copyright NPR. | <urn:uuid:e8e46237-1e26-4326-b62c-a25477bd0d59> | CC-MAIN-2013-20 | http://kacu.org/post/hot-weather-gets-scientists-attention | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95642 | 822 | 3.15625 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
From airport food that is roughly 100 percent sodium to the roof of the plane potentially coming off to a sleeping air traffic controller guiding a plane, you’d think air travel is getting dangerous.
A new study says it may be. Although not in the convertible-plane or narcoleptic-controller sort of way.
The study, released Monday in the Journal of Occupational Health and Environmental Medicine, says that frequent business travelers were more likely to describe their health as “fair” or “poor.”
More than 13,000 subjects were studied from data supplied by a corporate wellness program. It looked at three groups: Non-travelers, occasional travelers (80 percent of those surveyed) and “extensive travelers” who run at the George Clooney in “Up in the Air” pace of 20 or more nights a month on the road.
Those Clooney-esque road warriors are not a healthy bunch. And they certainly don’t look like him. They are 92 percent more likely to be obese, with high blood pressure and unfavorable cholesterol levels.
Several factors could contribute to this, the researchers said, including poor sleep, fattening foods and long periods of inactivity.
We’re no scientists, but we’d guess that doubles-for-$1-extra, migraine-inducing flight delays and blood-pressure-raising bag fees also have something to do with it.
How do you try to stay healthy on the road? | <urn:uuid:7cb7de38-7613-4348-a286-c2638952862e> | CC-MAIN-2013-20 | http://lifeinc.today.com/_news/2011/04/26/6525547-an-airport-is-no-substitute-for-the-gym | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.95552 | 309 | 2.53125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Like other pulmonate land snails, most slugs have two pairs of 'feelers' or tentacles on their head. The upper pair is light sensing, while the lower pair provides the sense of smell. Both pairs are retractable, and can be regrown if lost.
On top of the slug, behind the head, is the saddle-shaped mantle, and under this are the genital opening and anus. On one side (almost always the right hand side) of the mantle is a respiratory opening, which is easy to see when open, but difficult to see when closed. This opening is known as the pneumostome. Within the mantle in some species is a very small, rather flat shell.
Like other snails, a slug moves by rhythmic waves of muscular contraction on the underside of its foot. It simultaneously secretes a layer of mucus on which it travels, which helps prevent damage to the foot tissues. Some slug species hibernate underground during the winter in temperate climates, but in other species, the adults die in the autumn.
In rural southern Italy, the garden slug Arion hortensis was used to treat gastritis, stomach ulcers or peptic ulcers by swallowing it whole and alive. Given that it is now known that most peptic ulcers are caused by Helicobacter pylori, the merit of swallowing a live slug is questionable. A clear mucus produced by the slug is also used to treat various skin conditions including dermatitis, warts, inflammations, calluses, acne and wounds. | <urn:uuid:4f92c42b-35b4-439c-8f11-a29c761e704a> | CC-MAIN-2013-20 | http://melvynyeo.deviantart.com/art/Slug-258511210 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950218 | 318 | 3.4375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Our main goal here is to give a quick visual summary that is at once convincing and data rich. These employ some of the most basic tools of visual data analysis and should probably form part of the basic vocabulary of an experimental mathematician. Note that traditionally one would run a test such as the Anderson-Darling test (which we have done) for the continuous uniform distribution and associate a particular probability with each of our sets of probability, but unless the probability values are extremely high or low it is difficult to interpret these statistics.
Experimentally, we want to test graphically the hypothesis of normality and randomness (or non-periodicity) for our numbers. Because the statistics themselves do not fall into the nicest of distributions, we have chosen to plot only the associated probabilities. We include two different types of graphs here. A quantile-quantile plot is used to examine the distribution of our data and scatter plots are used to check for correlations between statistics.
The first is a quantile-quantile plot of the chi-square base 10 probability values versus a discrete uniform distribution. For this graph we have taken the probabilities obtained from our square roots and plotted them against a perfectly uniform distribution. Finding nothing here is equivalent to seeing that the graph is a straight line with slope 1. This is a crude but effective way of seeing the data. The disadvantage is that the data are really plotted along a one-dimensional curve, and as such it may be impossible to see more subtle patterns.
The other graphs are examples of scatter plots. The first scatter plot shows that nothing interesting is occurring. We are again looking at probability values, this time derived from the discrete Cramer-von Mises (CVM) test base 10,000. For each cube root we have plotted the point $(x_i, y_i)$, where $x_i$ is the CVM base-10,000 probability associated with the first 2500 digits of the cube root of $i$, and $y_i$ is the probability associated with the next 2500 digits. A look at the graph reveals that we have now plotted our data on a two-dimensional surface and there is a lot more `structure' to be seen. Still, it is not hard to convince oneself that there is little or no relationship between the probabilities of the first 2500 digits and the second 2500 digits.
The last graph is similar to the second. Here we have plotted the probabilities associated with the Anderson-Stephens statistic of the first 10,000 digits versus the first 20,000 digits. We expect to find a correlation between these tests since there is a 10,000 digit overlap. In fact, although the effect is slight, one can definitely see the thinning out of points from the upper left hand corner and lower right hand corner.
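A minimal Python sketch of the first kind of plot described above - chi-square uniformity probabilities quantile-quantile plotted against the uniform distribution. Random digits stand in for the actual square-root digit data here, and the block length and sample count are placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chisquare

rng = np.random.default_rng(0)

# Placeholder for the real data: digit strings of square roots.
# Random base-10 digits are used so the sketch is self-contained.
def digit_block(n_digits: int) -> np.ndarray:
    return rng.integers(0, 10, size=n_digits)

# Chi-square uniformity probability (p-value) for one block of digits.
def chi2_probability(digits: np.ndarray) -> float:
    counts = np.bincount(digits, minlength=10)
    return chisquare(counts).pvalue

# Probabilities for many "numbers", then a quantile-quantile plot
# against the uniform distribution: no structure => straight line.
probs = np.sort([chi2_probability(digit_block(2500)) for _ in range(200)])
uniform_quantiles = (np.arange(1, 201) - 0.5) / 200

plt.plot(uniform_quantiles, probs, ".", label="observed p-values")
plt.plot([0, 1], [0, 1], "k--", label="slope 1")
plt.xlabel("uniform quantiles")
plt.ylabel("chi-square p-values")
plt.legend()
plt.show()
```

The scatter plots are built the same way, except that two p-values per number are computed (one per digit block) and plotted against each other instead of being sorted.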
Figure 1: Graphs 1-3 | <urn:uuid:6697aede-f5b6-4d7b-b653-9cc6d6586fb4> | CC-MAIN-2013-20 | http://oldweb.cecm.sfu.ca/organics/vault/expmath/expmath/html/node15.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939863 | 554 | 3.5625 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |
Vol. 17 Issue 6
One-Legged (Single Limb) Stance Test
The One-Legged Stance Test (OLST)1,2 is a simple, easy and effective method to screen for balance impairments in the older adult population.
You may be asking yourself, "how can standing on one leg provide you with any information about balance, after all, we do not go around for extended periods of time standing on one leg?"
True, as a rule we are a dynamic people, always moving, our world always in motion, but there are instances were we do need to maintain single limb support. The most obvious times are when we are performing our everyday functional activities.
Stepping into a bath tub or up onto a curb would be difficult, if not impossible to do without the ability to maintain single limb support for a given amount of time. The ability to switch from two- to one-leg standing is required to perform turns, climb stairs and dress.
As we know, the gait cycle requires a certain amount of single limb support in order to be able to progress ourselves along in a normal pattern. When the dynamics of the cycle are disrupted, loss of balance leading to falls may occur.
This is especially true in older individuals whose gait cycle is altered due to normal and potentially abnormal changes that occur as a result of aging.
The One-Legged Stance Test measures postural stability (i.e., balance) and is more difficult to perform due to the narrow base of support required to do the test. Along with five other tests of balance and mobility, reliability of the One-Legged Stance Test was examined for 45 healthy females 55 to 71 years old and found to have "good" intraclass correlation coefficients (ICC range = .95 to .99). Within-rater ICCs ranged from 0.73 to 0.93.3
To perform the test, the patient is instructed to stand on one leg without support of the upper extremities or bracing of the unweighted leg against the stance leg. The patient begins the test with the eyes open, practicing once or twice on each side with his gaze fixed straight ahead.
The patient is then instructed to close his eyes and maintain balance for up to 30 seconds.1
The number of seconds that the patient/client is able to maintain this position is recorded. Termination or a fail test is recorded if 1) the foot touches the support leg; 2) hopping occurs; 3) the foot touches the floor, or 4) the arms touch something for support.
Normal ranges with eyes open are: 60-69 yrs/22.5 ± 8.6s, 70-79 yrs/14.2 ± 9.3s. Normal ranges for eyes closed are: 60-69 yrs/10.2 ± 8.6s, 70-79 yrs/4.3 ± 3.0s.4 Briggs and colleagues reported balance times on the One-Legged Stance Test in females age 60 to 86 years for dominant and nondominant legs.
Given the results of this data, there appears to be some difference in whether individuals use their dominant versus their nondominant leg in the youngest and oldest age groups.
When using this test, having patients choose what leg they would like to stand on would be appropriate as you want to record their "best" performance.
It has been reported in the literature that individuals increase their chances of sustaining an injury due to a fall by two times if they are unable to perform a One-Legged Stance Test for five seconds.5 Other studies utilizing the One-Legged Stance Test have been conducted in older adults to assess static balance after strength training,6 performance of activities of daily living and platform sway tests.7
Interestingly, subscales of other balance measures such as the Tinetti Performance Oriented Mobility Assessment8 and Berg Balance Scale9 utilize unsupported single limb stance times of 10 seconds and 5 seconds respectively, for older individuals to be considered to have "normal" balance.
Thirty percent to 60 percent of community-dwelling elderly individuals fall each year, with many experiencing multiple falls.10 Because falls are the leading cause of injury-related deaths in older adults and a significant cause of disability in this population, prevention of falls and subsequent injuries is a worthwhile endeavor.11
The One-Legged Stance Test can be used as a quick, reliable and easy way for clinicians to screen their patients/clients for fall risks and is easily incorporated into a comprehensive functional evaluation for older adults.
1. Briggs, R., Gossman, M., Birch, R., Drews, J., & Shaddeau, S. (1989). Balance performance among noninstitutionalized elderly women. Physical Therapy, 69(9), 748-756.
2. Anemaet, W., & Moffa-Trotter, M. (1999). Functional tools for assessing balance and gait impairments. Topics in Geriatric Rehab, 15(1), 66-83.
3. Franchignoni, F., Tesio, L., Martino, M., & Ricupero, C. (1998). Reliability of four simple, quantitative tests of balance and mobility in healthy elderly females. Aging (Milan), 10(1), 26-31.
4. Bohannon, R., Larkin, P., Cook, A., & Singer, J. (1984). Decrease in timed balance test scores with aging. Physical Therapy, 64, 1067-1070.
5. Vellas, B., Wayne, S., Romero, L., Baumgartner, R., et al. (1997). One-leg balance is an important predictor of injurious falls in older persons. Journal of the American Geriatric Society, 45, 735-738.
6. Schlicht, J., Camaione, D., & Owen, S. (2001). Effect of intense strength training on standing balance, walking speed, and sit-to-stand performance in older adults. Journal of Gerontological Medicine and Science, 56A(5), M281-M286.
7. Frandin, K., Sonn, U., Svantesson, U., & Grimby, G. (1996). Functional balance tests in 76-year-olds in relation to performance, activities of daily living and platform tests. Scandinavian Journal of Rehabilitative Medicine, 27(4), 231-241.
8. Tinetti, M., Williams, T., & Mayewski, R. (1986). Fall risk index for elderly patients based on number of chronic disabilities. American Journal of Medicine, 80, 429-434.
9. Berg, K., et al. (1989). Measuring balance in the elderly: Preliminary development of an instrument. Physio Therapy Canada, 41(6), 304-311.
10. Rubenstein, L., & Josephson, K. (2002). The epidemiology of falls and syncope. Clinical Geriatric Medicine, 18, 141-158.
11. National Safety Council. (2004). Injury Facts. Itasca, IL: Author.
Dr. Lewis is a physical therapist in private practice and president of Premier Physical Therapy of Washington, DC. She lectures exclusively for GREAT Seminars and Books, Inc. Dr. Lewis is also the author of numerous textbooks. Her Website address is www.greatseminarsandbooks.com. Dr. Shaw is an assistant professor in the physical therapy program at the University of South Florida dedicated to the area of geriatric rehabilitation. She lectures exclusively for GREAT Seminars and Books in the area of geriatric function.
APTA Encouraged by Cap Exceptions
New process grants automatic exceptions to beneficiaries needing care the most
Calling it "a good first step toward ensuring that Medicare beneficiaries continue to have coverage for the physical therapy they need," Ben F Massey, Jr, PT, MA, president of the American Physical Therapy Association (APTA), expressed optimism that the new exceptions process will allow a significant number of Medicare patients to receive services exceeding the $1,740 annual financial cap on Medicare therapy coverage. The new procedure, authorized by Congress in the recently enacted Deficit Reduction Act (PL 109-171), will be available to Medicare beneficiaries on March 13 under rules released this week by the Centers for Medicare and Medicaid Services (CMS).
"APTA is encouraged by the new therapy cap exceptions process," Massey said. "CMS has made a good effort to ensure that Medicare beneficiaries who need the most care are not harmed by an arbitrary cap."
As APTA recommended, the process includes automatic exceptions and also grants exceptions to beneficiaries who are receiving both physical therapy and speech language pathology (the services are currently combined under one $1,740 cap).
"We have yet to see how well Medicare contractors will be able to implement and apply this process. Even if it works well, Congress only authorized this new process through 2006. Congress must address this issue again this year, and we are confident that this experience will demonstrate to legislators that they must completely repeal the caps and provide a more permanent solution for Medicare beneficiaries needing physical therapy," Massey continued.
The therapy caps went into effect on Jan. 1, 2006, limiting Medicare coverage on outpatient rehabilitation services to $1,740 for physical therapy and speech therapy combined and $1,740 for occupational therapy.
The American Physical Therapy Association is a national professional organization representing more than 65,000 members. Its goal is to foster advancements in physical therapy practice, research and education.
New Mouthwash Helps With Pain
Doctors in Italy are studying whether a new type of mouthwash will help alleviate pain for patients suffering from head and neck cancer who were treated with radiation therapy, according to a new study (International Journal of Radiation Oncology*Biology*Physics, Feb. 1, 2006).
Fifty patients, suffering from various forms of head and neck cancer and who received radiation therapy, were observed during the course of their radiation treatment. Mucositis, or inflammation of the mucous membrane in the mouth, is the most common side effect yet no additional therapy has been identified that successfully reduces the pain.
This study sought to discover if a mouthwash made from the local anesthetic tetracaine was able to alleviate the discomfort associated with head and neck cancer and if there would be any negative side effects of the mouthwash. The doctors chose to concoct a tetracaine-based mouthwash instead of a lidocaine-based version because it was found to be four times more effective, worked faster and produced a prolonged relief.
The tetracaine was administered by a mouthwash approximately 30 minutes before and after meals, or roughly six times a day. Relief of oral pain was reported in 48 of the 50 patients. Sixteen patients reported that the mouthwash had an unpleasant taste or altered the taste of their food. | <urn:uuid:f8131c7f-1b2a-41bd-9eaa-951dad06e313> | CC-MAIN-2013-20 | http://physical-therapy.advanceweb.com/Article/One-Legged-Single-Limb-Stance-Test.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.919898 | 2,250 | 3.078125 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Hypertension is often diagnosed during a visit to your doctor. Blood pressure is measured using a cuff around your arm and a device called a sphygmomanometer. Your doctor may ask you to sit quietly for five minutes before checking your blood pressure.
If your blood pressure reading is high, you will probably be asked to come back for repeat blood pressure checks. If you have three visits with readings over 140/90 mmHg, you will be diagnosed with high blood pressure.
Some people’s blood pressure goes up when they are at the doctor’s office. If your doctor suspects that may be occurring, he or she may ask you to get some blood pressure readings at home. In some cases, he or she may recommend that you wear an ambulatory blood pressure monitor. This device measures your blood pressure regularly throughout the day as you go about your activities. It is usually worn for 24 hours, even while sleeping.
- Reviewer: Michael J. Fucci, DO
- Review Date: 09/2012
- Update Date: 00/91/2012
When faced with the possibility of cooperating for mutual gain, states that feel insecure must ask how the gain will be divided...
A state worries about a division of possible gains that may favor others more than itself. That is the first way in which the structure of international politics limits the cooperation of states. A state also worries lest it become dependent on others through cooperative endeavors and exchanges of goods and services... The world's well-being would be increased if an ever more elaborate division of labor were developed, but states would thereby place themselves in situations of ever closer interdependence...
In an unorganized realm each unit's incentive is to put itself in a position to be able to take care of itself since no one can be counted on to do so. The international imperative is "take care of yourself"! Some leaders of nations may understand that the well-being of all of them would increase through their participation in a fuller division of labor. But to act on the idea would be to act on a domestic imperative, an imperative that does not run internationally...

Waltz's argument may not apply perfectly to the EU, since he claims that the fundamental impediment to cooperation is the threat of conflict. The era of European wars is long over, but there may nonetheless be something to Waltz's argument. In the midst of economic crisis, tensions between European countries have hardened, and the crisis has caused bickering among EU member nations.
On another note, a recent FT/Harris poll of the British public found that only one in three Brits wants to remain in the EU. | <urn:uuid:2e536669-015e-4499-9d77-47d6823c94e1> | CC-MAIN-2013-20 | http://realdealecon.blogspot.com/2013/02/ken-waltz-realism-and-european-union.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962663 | 318 | 2.609375 | 3 | HuggingFaceFW/fineweb-edu/sample-100BT |
Temperature is a measure of the average kinetic energy of the particles in a sample of matter, expressed in units of degrees on a standard scale. You can measure temperature in many ways that vary in equipment cost and accuracy. The most common types of sensors are thermocouples, RTDs, and thermistors.
Figure 1. Thermocouples are inexpensive and can operate over a wide range of temperatures.
Thermocouples are the most commonly used temperature sensors because they are relatively inexpensive yet accurate, and they can operate over a wide range of temperatures. A thermocouple is created when two dissimilar metals touch; the contact point produces a small open-circuit voltage as a function of temperature. You can use this thermoelectric voltage, known as the Seebeck voltage, to calculate temperature. For small changes in temperature, the voltage varies approximately linearly with temperature.
You can choose from different types of thermocouples, designated by capital letters that indicate their compositions according to American National Standards Institute (ANSI) conventions. The most common types include B, E, J, K, N, R, S, and T.
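As a rough illustration of the linear approximation described above, the sketch below converts a measured Seebeck voltage into a hot-junction temperature. The nominal type K sensitivity of about 41 µV/°C, the function name, and the cold-junction handling are assumptions for illustration only; production code would use the published NIST polynomial tables for the specific thermocouple type.

```python
# Linear thermocouple model -- a sketch, not a calibrated conversion.
# Assumes a nominal type K Seebeck coefficient of ~41 uV/degC, which is
# only reasonable for small temperature ranges near room temperature.

SEEBECK_UV_PER_C = 41.0  # assumed nominal type K sensitivity (uV/degC)

def thermocouple_temp_c(measured_uv: float, cold_junction_c: float) -> float:
    """Estimate the hot-junction temperature in degC.

    measured_uv     -- open-circuit Seebeck voltage in microvolts
    cold_junction_c -- reference (cold) junction temperature in degC,
                       usually read from a sensor on the terminal block
    """
    # Linear model: V = S * (T_hot - T_cold)  =>  T_hot = V / S + T_cold
    return measured_uv / SEEBECK_UV_PER_C + cold_junction_c

if __name__ == "__main__":
    # 1025 uV measured with a 25 degC cold junction -> roughly 50 degC
    print(thermocouple_temp_c(1025.0, 25.0))
```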
For more information on thermocouples, read The Engineer's Toolbox for Thermocouples.
Figure 2. RTDs are made of metal coils and can measure temperatures up to 850 °C.
An RTD (resistance temperature detector) is a device made of coils or films of metal, usually platinum. When heated, the resistance of the metal increases; when cooled, the resistance decreases. Passing a current through an RTD generates a voltage across it. By measuring this voltage, you can determine its resistance and, thus, its temperature. The relationship between resistance and temperature is relatively linear. Typically, RTDs have a resistance of 100 Ω at 0 °C and can measure temperatures up to 850 °C.
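To make the resistance-to-temperature step concrete, here is a minimal sketch that inverts the Callendar-Van Dusen equation, R(T) = R0(1 + A·T + B·T²), for a standard PT100 above 0 °C. The A and B values are the nominal IEC 60751 coefficients; the function name and the demo resistance are illustrative.

```python
import math

# PT100 conversion sketch using the Callendar-Van Dusen equation for
# T >= 0 degC: R(T) = R0 * (1 + A*T + B*T**2), solved for T.
R0 = 100.0       # ohms at 0 degC (nominal PT100)
A = 3.9083e-3    # standard IEC 60751 coefficient
B = -5.775e-7    # standard IEC 60751 coefficient

def pt100_temp_c(resistance_ohm: float) -> float:
    """Invert the quadratic R(T); valid roughly from 0 to 850 degC."""
    # B*T**2 + A*T + (1 - R/R0) = 0; take the physically meaningful root.
    c = 1.0 - resistance_ohm / R0
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

if __name__ == "__main__":
    print(pt100_temp_c(138.51))  # ~100 degC for a standard PT100
```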
For more information on RTDs, read The Engineer's Toolbox for RTDs.
Figure 3. Passing current through a thermistor generates a voltage that varies with its temperature.
A thermistor is a piece of semiconductor made from metal oxides, pressed into a small bead, disk, wafer, or other shape, sintered at high temperatures, and finally coated with epoxy or glass. As with RTDs, you can pass a current through a thermistor and read the voltage across it to determine its temperature. However, unlike RTDs, thermistors have a higher resistance (2,000 to 10,000 Ω) and a much higher sensitivity (~200 Ω/°C), which lets them resolve small temperature changes, though only within a limited temperature range (up to 300 °C).
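In practice the thermistor's resistance-to-temperature conversion is handled with the Steinhart-Hart equation or, as sketched below, the simpler beta-parameter model 1/T = 1/T0 + (1/β)·ln(R/R0). The 10 kΩ at 25 °C reference point and the β value of 3950 K are assumed generic-datasheet numbers, not values from this article; substitute your part's constants for better accuracy.

```python
import math

# Beta-parameter thermistor model -- a sketch with assumed datasheet
# constants for a generic 10 kohm NTC bead.
R0 = 10_000.0   # ohms at the reference temperature (assumed)
T0_K = 298.15   # reference temperature: 25 degC in kelvin
BETA = 3950.0   # assumed beta constant in kelvin

def ntc_temp_c(resistance_ohm: float) -> float:
    """Convert an NTC thermistor resistance in ohms to degC."""
    # 1/T = 1/T0 + (1/beta) * ln(R / R0), with T in kelvin
    inv_t = 1.0 / T0_K + math.log(resistance_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

if __name__ == "__main__":
    print(ntc_temp_c(10_000.0))  # 25.0 degC at the reference point
```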
For information on thermistors, read The Engineer's Toolbox for Thermistors. | <urn:uuid:e3d9f26b-9215-49bf-a296-3724a4a14b64> | CC-MAIN-2013-20 | http://sine.ni.com/np/app/main/p/ap/daq/lang/en/pg/1/sn/n17:daq,n21:11/fmid/2999/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.917819 | 569 | 4.21875 | 4 | HuggingFaceFW/fineweb-edu/sample-100BT |