Once you have a research question, hypothesis, and prediction, the next step is to develop the experimental design. In this section, we talk about the basics of experimental design, participants, and items.
Experimental design is the means by which hypotheses are operationalized. In other words, experimental design is how you collect controlled observations (i.e., data) about the relationship among the variables of interest. Note that our hypotheses often involve predictors that are continuous variables. The predictor variable we've discussed on the previous page—the relative frequency of words—is an example of a continuous predictor. For the purposes of experimental design, continuous predictors are often binned into a small number of bins (binning is not always desirable, but it is a common convention, and for the current purpose it makes it easier to talk about the design options you have for your experiment). Binning turns continuous variables into categorical variables. For example, we might bin words based on their relative frequency into "high" and "low" frequency words.
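To make this concrete, here is a minimal Python sketch of a median split, one common way to bin a continuous predictor into two categories. The words and relative-frequency values are invented purely for illustration.

```python
# Median split: bin a continuous predictor (relative word frequency)
# into "high" and "low" categories. All values are made up.
import statistics

word_freqs = {"the": 6.8, "of": 6.5, "cat": 4.2, "ostrich": 2.1, "zygote": 1.3}

median_freq = statistics.median(word_freqs.values())

frequency_bin = {
    word: "high" if freq >= median_freq else "low"
    for word, freq in word_freqs.items()
}
print(frequency_bin)
# {'the': 'high', 'of': 'high', 'cat': 'high', 'ostrich': 'low', 'zygote': 'low'}
```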
There are also variables that are naturally considered categorical. For example, if we hypothesized that nouns are read more quickly than verbs, "noun" vs. "verb" would be a categorical predictor.
Just like predictors, outcomes can be continuous or categorical. Reading times are an example of a continuous outcome. The accuracy of an answer to a comprehension question (which is either true or false) is an example of a categorical outcome.
The first step in experimental design is to specify the conditions (e.g., test conditions and control conditions) that allow you to test your hypothesis. The conditions describe how we plan to manipulate our predictor variables. The goal of this manipulation is to test whether we will observe the change in the outcome variables predicted by our hypothesis.
To continue the example introduced above, consider that we plan to bin relative frequency into "high" and "low" frequency words. We would then design our experiment to have a high-frequency condition and a low-frequency condition. Our prediction would be that reading times will be faster in the high-frequency condition than in the low-frequency condition.
Without further additions, we would call this a by-2 design, because we have one predictor variable in two conditions and there are no other manipulations in the experiment. We will go through a more detailed example in the Materials section, but first we'll establish a little bit more terminology.
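To make the logic of the prediction concrete, here is a small simulation sketch of such a by-2 design. Every number (condition means, noise, sample size) is made up for illustration; a real experiment would of course use measured data and an inferential test.

```python
# Simulate reading times (ms) for a by-2 design: a high- vs. a
# low-frequency condition. All parameters are invented.
import random
import statistics

random.seed(1)  # make the illustration reproducible

high_freq_rts = [random.gauss(350, 40) for _ in range(100)]  # high-frequency condition
low_freq_rts = [random.gauss(390, 40) for _ in range(100)]   # low-frequency condition

print(f"mean RT, high-frequency condition: {statistics.mean(high_freq_rts):.0f} ms")
print(f"mean RT, low-frequency condition:  {statistics.mean(low_freq_rts):.0f} ms")
# The prediction is that the high-frequency mean is lower (faster reading).
# A real analysis would back this comparison with an inferential test,
# e.g. a t-test or a mixed-effects model.
```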
1. What is a condition?
2. True or False: Many variables should be simultaneously changed to differentiate between conditions.
3. Researchers want to test whether a time delay between presentation of the first stimuli and the second stimuli will affect how fast and how accurately participants match them. Which type of design should they use?
4. Researchers want to test whether students who participated in sports performed better on different types of reading tasks than students who did not. Which type of design should they use? | https://www.hlp.rochester.edu/resources/BCS152-Tutorial/Design.html |
Great Burkhan Khaldun Mountain and its surrounding landscape lie in the central part of the Khentii mountain chain that forms the watershed between the Arctic and Pacific Oceans, where the vast Central Asian steppe meets the coniferous forests of the Siberian taiga. Water from the permanently snow-capped mountains feeds significant rivers flowing both to the north and south. High up the mountains are forests and lower down mountain steppe, while in the valleys below are open grasslands dissected by rivers feeding swampy meadows.
Burkhan Khaldun is associated with Chinggis Khan, as his reputed burial site and, more widely, with his establishment of the Mongol Empire in 1206. It is one of four sacred mountains he designated during his lifetime, as part of the official status he gave to the traditions of mountain worship, based on long-standing shamanic traditions associated with nomadic peoples. Traditions of mountain worship declined as Buddhism was adopted in the late 15th century, and there appears to have been a lack of continuity of traditions and associations thereafter. Since the 1990s, the revival of mountain worship has been encouraged, and old shamanist rituals are being revived and integrated with Buddhist rituals. State-sponsored celebrations now take place at the mountain each summer around rivers and three stone ovoo-s (or rock cairns).
The Great Burkhan Khaldun Mountain has few structures other than three major stone ovoo-s alongside paths connected to a pilgrimage route. The cairns were apparently destroyed in the 17th century but have now been reconstructed with timber posts on top. The pilgrimage path starts some 20km from the mountain by a bridge over the Kherlen River at the Threshold Pass, where there is also a major ovoo. Pilgrims ride on horseback from there to the large Beliin ovoo, made of tree trunks and adorned with blue silk prayer scarves, and thence to the main ovoo of heaven at the summit of the mountain. The sacredness of the mountain is strongly associated with its sense of isolation and its perceived ‘pristine’ nature.
The Great Burkhan Khaldun Mountain and its surrounding sacred landscape, as a sacred mountain, were at the centre of events that profoundly changed Asia and Europe between the 12th and 14th centuries, and have direct links with Chinggis Khan and his formal recognition of mountain worship.
Criterion (iv): Burkhan Khaldun Sacred Mountain reflects the formalisation of mountain worship by Chinggis Khan, a key factor in his success in unifying the Mongol peoples during the creation of the Mongolian Empire, an event of vital historical significance for Asian and world history.
Criterion (vi): The Burkhan Khaldun Sacred Mountain is directly and tangibly associated with The Secret History of the Mongols, an historical and literary epic recognised as of world importance in its entry in the Memory of the World Register. The Secret History records the links between the mountain and Chinggis Khan, his formal recognition of mountain worship, and the formal status of Burkhan Khaldun as one of four sacred mountains designated during his lifetime.
The property has adequate attributes within its boundaries to reflect the scale and scope of the sacred mountain, although the boundary needs to be marked in relation to natural features. An ongoing programme of work needs to be undertaken on documenting and mapping archaeological sites that might strengthen associations with Chinggis Khan or traditions of mountain worship, and lead to their protection.
All the natural and cultural attributes of the Burkhan Khaldun Mountain display their value. Various parts of the mountain are vulnerable to an increase in tourism which could profoundly change its sense of isolation if not well managed, and to over-grazing that could impact on its ‘perceived’ pristine nature and on archaeological sites.
Although the majority of the Great Burkhan Khaldun Mountain is situated on the territory of the Khan Kentii Special Protected Area (KK SPA), a small area to the north-west and a much larger area to the south lie outside this protected zone. There are plans to include the whole property and its buffer zone in the territory of the KK SPA in 2015. The KK SPA offers legal protection, but this is for natural and environmental protection rather than cultural heritage protection. Further protection needs to be established for cultural heritage and to ensure that no mining or extractive industry will be permitted within the property. The buffer zone is included within the buffer zone of the KK SPA. Currently the property buffer zone has no protection for cultural attributes, nor does it have any regulatory procedures related to land-use or new construction, and both need to be put in place.
Since 1990 and the renewal of older Mongolian practices related to sacred mountains, national traditions and customs of nature protection in Mongolia and the laws associated with “Khalkh Juram” have been revived and are now incorporated into State policy. On 16 May 1995, the first President of Mongolia issued a new Decree “Supporting initiatives to revive the tradition of worshiping Bogd Khan Khairkhan, Burkhan Khaldun (Khan Khentii), and Otgontenger Mountains”. The Decree pronounced the State’s support for initiatives to revive Mountain worship as described in the original Mongolian Legal Document and as “set out according to the official Decree”. A further Decree of the President on “Regulation of ceremony of worshipping and offering of state sacred mountains and ovoos” provides legal tools for visitor organization during the large state worshipping ceremonies. Any activity on Burkhan Khaldun Mountain itself, other than worshipping rituals, is traditionally forbidden. The KK reserve staff do however undertake fire-fighting, forest protection, forest clearing and renovation, and address illegal hunting and wood cutting.
At the national level, management of the site is under the responsibility of the Ministry of Nature, Environment and Green, and of the Ministry of Culture, Sports and Tourism. At the local level, local authorities at the levels of aimak-s, soum-s and bag-s have responsibility for providing local protection. Although soum administrations have people responsible for environmental protection, there appears not to be any formal arrangement for cultural heritage work. An Administration for the Protection of the World Heritage property responsible for both natural and cultural protection and conservation of the property is to be established, although no timescale has been provided for this, nor a commitment to the provision of adequate resources. Traditional protection is supported through the long standing tradition of worshipping nature and sacred places. For example, it is forbidden to disturb earth, waters, trees and all plants, animals and birds in sacred places, or hunt or cut wood for trading.
A draft Management Plan was submitted as part of the nomination dossier. This will run from 2015-2025 and covers both cultural and natural heritage. It includes both long-term (2015-2025) and medium-term (2015-2020) plans. The draft Management Plan has not yet been approved or implemented. Before completion and adoption, more work is needed to augment the Plan to allow it to provide an appropriate framework for management of the property, and necessary funding has still to be put in place from stakeholder organisations together with further support from aid and international donor organizations. Archaeological sites on the mountain that may contribute to a wider understanding of mountain worship have not been formally identified, nor are they actively conserved. Both of these aspects should be addressed in the Plan.
Although a management plan exists for the Khan Khentii protected area and this is implemented by the Administration of Khan Khentii Special Protected Area, it is restricted to conservation of the natural environment, and it appears that there is currently no active management for the property's cultural attributes, nor is work guided by specific cultural strategies and policies. These omissions need to be addressed. | https://whc.unesco.org/en/list/1440 |
Introduction Globalisation is the process that brings together the nations of the world into a unique global village that takes different social and economic cultures into consideration. First, this essay will analyse globalisation in broad terms; second, the history and foundations of globalisation that were intended to address poverty and inequality; third, the causes that led to globalisation and the impact that globalisation has had on the world's economy. Participation in the global economy was meant to solve economic problems such as poverty and inequality between the developed and developing nations. What is Globalisation?
When foreign citizens are introduced to U.S. products and forced to accept them into their daily lives, it gives the U.S. another distinct advantage: consumerism. Foreign citizens begin to grow attached to these U.S. products, and when they no longer need to receive them through aid, they look elsewhere to find them. Consequently, businesses are encouraged to expand worldwide and promote globalization. Businesses go where demand is highest.
How has globalization influenced Canada's economy and cultural development? Trev Lau - Milliken Mills High School, York Region District School Board Globalization allows international companies free access to any country's marketplace. Countries like Canada pursue globalization because they want their businesses to have open access to other countries' marketplaces and be able to sell to more customers. In turn, Canada must open access to its own markets.
It can be great and help society because it can help advance technology. For example, Americans have devised straws that can instantly purify water, and they give these to many people in Africa. But globalization could also be bad because it can take away from cultural traditions.
The Canadian GDP relies heavily on the contribution of foreign investments, and therefore globalization is very important to the Canadian economy. Sometimes, Canadian-owned businesses have to lower their prices due to foreign competition. There are also laws in Canada put in place to ensure the protection of young (infant) industries, to make sure they survive the domestic competition that results from foreign investments and globalization. Globalization affects almost all Canadian business decisions. Consumers benefit the most from this because they are the ones who get to make use of these products, and they get a wider range of product variety due to globalization, which allows them to decide which of the many products to buy.
Globalization makes way for international investment and trade, thereby establishing trade and bilateral relations between countries. 3. Technological diffusion and the distribution of economic development from rich to poor countries. 4. Globalization creates freedom to choose markets within the globalized economy.
The founding of the WTO in 1995 increased the conflict between economic globalisation and the protection of social norms, a conflict that continues today because the WTO aims at further trade liberalisation. While there is no universally agreed definition of globalization, economists typically use the term to refer to international integration in commodity, capital and labour markets. Many impacts have followed the introduction of the WTO. Firstly, globalisation has changed the way economies work today.
Moving on from a theoretical example, there are some other crucial factors that impact the progression of globalization. Important factors in globalization include transportation, technology, and an international currency system (Lecture 9-2). These are all important things needed for globalization. Transportation-wise, the best method for transporting goods is containerized shipping. This method allows ships to hold thousands of containers, each containing thousands of goods (Lecture 9-2).
First of all, the most obvious advantage that globalization brings about is that goods (such as cars, laptops, and smartphones) produced in one country can be sold in other countries. For the developed countries, it is now easy to export products and services to other countries to earn money. And for the developing countries, it can create opportunities for employment and reduce poverty, which is very good for the economy. The next positive aspect to take into consideration is that the developing countries can now receive sources of capital and new technologies from developed countries, which is essential for the growth of a country. And in return, the developing countries let the developed countries' companies do business in their countries.
Globalization allows countries of the world to trade freely without any tax barriers; moreover, the costs of domestic and imported goods do not differ much, which causes major competition over commodities (The Impact, n.d.). That forces developing nations to make their product quality better, improve the design of goods, and reduce production costs. The next point is the pressure on the natural environment. Promoting the exploitation of natural resources in the developing world depletes resources. The world is facing the fear of running out of natural resources like oil, natural gas, petroleum and coal because of overexploitation to meet development needs (SÀIGÒN, 2010).
MINI REPORT: ARE THE BENEFITS OF GLOBALISATION GREATER THAN THE DRAWBACKS? From my perspective, globalisation is a process by which the world is becoming progressively connected as a result of immensely increased trade and cross-cultural exchange. Globalisation enhances the use of outsourcing and offshoring of products.
Through globalization, people around the world share information as well as goods and services. As a result of globalization, consumers around the world enjoy a broader selection of products than they would have if they only had access to domestically made products. International trade has stimulated tremendous economic growth across the globe, creating jobs and reducing prices. As globalization accelerates change in technology, more jobs are created, and as a result more people are employed, thus increasing their purchasing power. As the demands of consumers rise, more and more products are produced to suit the needs and wants of the people.
2. Main causes and drivers of globalization The Treaty of Westphalia in 1648 has been taken to mark the beginning of the system of sovereign states. Unlike previous treaties, the Treaty of Westphalia drew up a list of core principles which re-defined the conception of the state; territories were defined, and their lands made inviolable. Supremacy of the nation-state became accepted as the norm and hence allowed the growth of international relations (Pant, 2011). | https://www.ipl.org/essay/Factors-Influencing-The-Global-Economy-P396U4WBG5FV |
- We are seeking an experienced banker to help us develop and grow our Private Banking business across the North West of England.
- The role holder will be principally based in the Manchester office but is expected to work flexibly and remotely across the North West of England. They will report to the Regional Head, North and Midlands.
- When this role is filled, the team in the North West will comprise 2 Relationship Managers.
- The target market for the role holder will be UK Resident and UK Domiciled clients with investable assets of between $5m and $30m.
- The principal purpose is to provide wealth management services to these high-net-worth individual clients through the marketing of HSBC Private Bank services.
- It is a requirement to establish new client relationships and manage existing clients in order to generate new Assets Under Management (AUM) and increase overall portfolio revenues.
- The role holder will achieve this via a mixture of internal referrals as well as self-generated leads from their own external introducer network.
- The role holder will work alongside Investment Specialists, Strategic Financial Planners and Credit Specialists to deliver outstanding service and product solutions to the clients of HSBC Private Bank in the North West of England.
Key Accountabilities
Impact on the Business:
- Grow revenue, increase AUM of portfolio and increase inflows of net new money (NNM), achieving growth by leveraging brand, internal collaboration channels and RM’s own referral network.
Customers / Stakeholders
- Has regular contact and meetings with clients.
- Provides support to client’s portfolio management.
- Works with the Investment Specialist to ensure the client’s objectives are met through the portfolio allocation.
- Ensures clients understand and are aware of their level of risk.
- With the Investment Specialist, reviews portfolios on a regular basis and develops client’s profitability.
- Identifies opportunities to meet the credit needs of clients seeking secured and unsecured loans for a variety of purposes.
- Resolves basic enquiries, answers questions and provides documentation on client activities.
- Delivers fair outcomes for our clients and ensures own conduct maintains the orderly and transparent operation of financial markets.
- Identifies client’s financial planning needs and makes introductions to the Strategic Financial Planning (SFP) team.
Leadership & Teamwork
- Lead by example through financial results with excellent corporate behaviour.
- Provide assistance to fellow Relationship Managers.
- Bring motivation and support to the Regional Head.
- Work closely with other Bank Departments.
Operational Effectiveness & Control
- Works closely with other Bank Departments to maintain HSBC internal control standards, including timely implementation of internal and external audit points together with any issues raised by external regulators.
- Ensures follow-up on all risk and compliance issues (such as CCR, KYC, Compliance requests, Risk Management requests, Internal Audit requests).
- Minimize operational losses.
Role Requirements
- Highly organized with a willingness to work under own supervision and remote from immediate line manager.
- Manage a portfolio of between 10 and 50 family Relationships.
- Ability to travel to meet clients and internal stakeholders.
- Manage own time to meet demanding new client acquisition objectives.
- Represent the Private Bank with internal stakeholders and in the professional community across the North West of England
- Work as part of a highly effective client team.
Observation of Internal Controls
- The jobholder will adhere to, and be able to demonstrate adherence to, internal controls and will implement the Group compliance policy by adhering to all relevant processes/procedures.
- The term ‘compliance’ embraces all relevant financial services laws, rules and codes with which the business has to comply. This will be achieved by adherence to all relevant procedures, keeping appropriate records and, where appropriate, by the timely implementation of internal and external audit points, including issues raised by external regulators.
- The following statement is only for roles with managerial or specific Compliance responsibilities
- The jobholder will implement measures to contain compliance risk across the business area. This will be achieved by liaising with Compliance department about business initiatives at the earliest opportunity. Also and when applicable, by ensuring adequate resources are in place and training is provided, fostering a compliance culture and optimising relations with regulators.
Management of Risk
- The jobholder will ensure the fair treatment of our customers is at the heart of everything we do, both personally and as an organisation.
- This will be achieved by consistently displaying the behaviours that form part of the HSBC Values and culture and adhering to HSBC risk policies and procedures, including notification and escalation of any concerns and taking required action in relation to points raised by audit and/or external regulators.
- The jobholder is responsible for managing and mitigating operational risks in their day to day operations. In executing these responsibilities, the Group has adopted risk management and internal control structure referred to as the ‘Three Lines of Defence’. The jobholder should ensure they understand their position within the Three Lines of Defence, and act accordingly in line with operational risk policy, escalating in a timely manner where they are unsure of actions required.
- Through the implementation of the Global AML, Sanctions and ABC Policies, supporting Guidance, and Line of Business Procedures, the jobholder will make informed decisions in accordance with the core principles of HSBC's Financial Crime Risk Appetite.
- The following statement is only for roles with core responsibilities in Operational Risk Management (Risk Owner, Control Owner, Risk Steward, BRCM, and Operational Risk Function).
- The jobholder has responsibility for overseeing and ensuring that Operational risks are managed in accordance with the Group Standards Manual, Risk FIM, & relevant guidelines & standards. The jobholder should comply with the detailed expectations and responsibilities for their core role in operational risk management through ensuring all actions take account of operational risks, and through using the Operational Risk Management Framework appropriately to manage those risks.
- This will be achieved by:
- Continuously reassessing risks associated with the role and inherent in the business, taking account of changing economic or market conditions, legal and regulatory requirements, operating procedures and practices, management restructurings, and the impact of new technology.
- Ensuring all actions take account of the likelihood of operational risk occurring, addressing areas of concern in conjunction with Risk and relevant line colleagues, and also by ensuring that actions resulting from points raised by internal or external audits, and external regulators, are correctly implemented in a timely fashion. | https://uk.work180.co/job/185474/relationship-manager |
LITTLE ROCK, Ark. — Arkansas Boys State has named Andrew Brodsky as Director of Staff for the program, which has transformed the lives of young men throughout the state and beyond since 1940.
Brodsky has been part of Arkansas Boys State since his delegate year in 2013, in which he represented Lakeside High School and was elected as Arkansas Boys State Speaker of the House. He has served the program since as a junior counselor, state counselor, senior counselor and, most recently, as assistant director of operations.
“Arkansas Boys State is able to provide a world-class program because of the staff who volunteer their time and return year after year to mentor our state’s next generation of leaders,” Brodsky said. “It is these volunteers who facilitate every aspect of the program so that it can truly be a week that shapes a lifetime. By ensuring that our staff is supported, trained and well-resourced, we can continue to excel as our state’s premier youth leadership and civic engagement program.”
Brodsky is a Higher Education Consulting Associate for Huron Consulting Group and holds a Master of Education in Higher Education Administration and a Bachelor of Science in Human and Organizational Development, both from Vanderbilt University.
“Arkansas Boys State alumni and staff hold an array of experiences, backgrounds and interests, and each provides exceptional perspective and skills — often with new and innovative approaches to our program,” Brodsky said. “My goal is to create a structure where every member of our staff is empowered to revolutionize Arkansas Boys State in a way that only they are able to do.”
Lloyd Jackson, executive director of Arkansas Boys State, said he looks forward to seeing the program’s top-tier staff grow and succeed under Brodsky’s leadership.
“Young men who attend Arkansas Boys State know that one of the most memorable and life-changing experiences of the program is being mentored, challenged and celebrated by our incredible staff, which develops relationships that last long beyond our week-long program,” Jackson said. “Andrew’s vision for supporting and growing our staff will ensure that Arkansas Boys State’s legacy of excellence and transformation thrives well into the future. We’re excited to have him on board.”
Arkansas Boys State is an immersive program in civics education designed for high school juniors. Since 1940, the week-long summer program has transformed the next generation of leaders throughout the state and beyond. These men have become state, national, and international leaders, including Pres. Bill Clinton, former Arkansas Gov. Mike Huckabee, former White House Chiefs of Staff Mack McLarty and Jack Watson Jr., Sen. Tom Cotton, Sen. John Boozman and Arkansas Chief Justice John Dan Kemp.
During their week at Arkansas Boys State, delegates are assigned a political party, city, and county. Throughout the week, delegates, from the ground up, administer this mock government as if it were real: they run for office, draft and pass legislation, solve municipal challenges, and engage constituents. By the week’s end, the delegates have experienced civic responsibility and engagement firsthand while making life-long memories and friends — all with the guiding principle that “Democracy Depends on Me.” Learn more at arboysstate.org. | https://arboysstate.org/brodsky-appointed-to-arkansas-boys-state-leadership/ |
The Example of Ingrid Schroffner: An Exemplary Government Lawyer, Advocate for Diversity, and Mentor by Renée M. Landers
This series paying tribute to exemplary government workers is a fitting way for the Section and Notice & Comment to observe Public Service Recognition Week. Coverage of excessive use of force by police nationally, too often resulting in death, has shared the headlines in Massachusetts with police fraud in accounting for overtime. The attention these failures receive makes taking the time to remind the public of the countless public servants who are essential to effective government really important. Because they perform their responsibilities with integrity and dedication, the work of these impressive individuals often goes unnoticed. In addition, the recent examination of the role of systemic racism in the legal system and the larger society has caused us once again to consider and address the continued lack of racial and ethnic diversity in the ranks of the legal profession and the legal academy.
I am grateful for this opportunity to express appreciation for my friend Ingrid Schroffner’s distinguished contributions to the legal profession and public service. Ingrid’s career in government embodies a record of mentoring generations of law students, and reflects her commitment to advancing diversity, equity, and inclusion in the legal profession and society.
I have known Ingrid since we both served as members of the Steering Committee of the Boston Bar Association’s Diversity, Equity, and Inclusion Section for which I served as Co-Chair from 2010-2012 after having served as the first legal academic and woman of color as BBA President. For more than a decade, Ingrid was a leader in the General Counsel’s Office of the Massachusetts Executive Office of Human Services (EHS)—advancing from Assistant General Counsel to Associate General Counsel to Acting Deputy General Counsel. During that time, I engaged with Ingrid on a regular basis as she considered applications from Suffolk University Law students for internship positions with the EHS Internship Program. From these associations, I learned that Ingrid possesses the dynamic personal qualities, engaging communication skills, and deep experience in the legal profession and community service which are an example for other public servants and allow the public to have confidence in government.
Ingrid’s professional career reflects substantial experience in government for which she prepared by spending significant time in private practice. She was an effective advocate and manager for a litigation unit for MassHealth, the Massachusetts Medicaid program, in which she provided advice on the administration of the program and providing quality services to MassHealth members. In her role as Chair of the EHS Diversity Council, she provided extensive leadership on efforts to create a diverse staff and to promote agency policies aimed at meeting the needs and concerns of immigrants, religious minorities, veterans, and other vulnerable populations, and addressing disparities and implicit bias. This spirit of making a place at the table for everyone carries through her involvement in the Boston Bar Association mentioned earlier. In addition, she served as a member of the Board of Directors and President of the Asian American Lawyers Association of Massachusetts. That she rose to leadership positions in the organizations in which she has been involved is a testament to the authenticity of her commitment to issues of diversity, equity, and inclusion, and her ability to inspire the confidence of other lawyers.
Ingrid brought these prodigious skills and sensibilities to bear in her leadership of the EHS Legal Intern Program. First, with a colleague, she revived a dormant program in 2010 and applied rigor to working with law schools to promote the internships, to the process of recruiting interns, and to screening interns. Second, having selected interns, Ingrid ensured that all received a comprehensive variety of work assignments, effective supervision, constructive evaluation, and supportive mentoring. During the summer she arranged for speakers for the Legal Intern Speaker Series. To institutionalize program principles, she created and updated a Legal Intern Protocol and Handbook for the program. She actively coordinated with the area law schools and their clinics to provide internship opportunities during the school year as well as the summer, for both students and new attorneys.
The internship program she managed was important for students enrolled in the Health and Biomedical Law Concentration for which I serve as the Faculty Director because it gave students exposure to the public sector considerations relevant in the field. The structure and rigor of the program made it easy for law professors like me to recommend the EHS Internship Program as an essential experience for students aspiring to practice in the field of health law. The internship provides valuable experience for students in a professional environment and imparts important lawyering skills and values. Every student who participated found Ingrid to be an attentive and exacting presence and valued the role the internship played in their individual development. Following the internships, Ingrid was a reliable and supportive source of recommendations for these students as they sought future internships or job opportunities. The structure and regularity of the program is unusual in the universe of internship opportunities available to the students.
As the COVID-19 pandemic, as well as the focus on police misconduct, have focused the national attention on disparities, providing more information about Ingrid’s work as the Chair and Co-Chair of the EHS Diversity Council is especially relevant to understanding her impact on the profession. This work demonstrated the importance of focused attention on fostering inclusive and respectful work environments. In Ingrid’s words, such environments recognize “the progress that results from employees’ diversity . . . that reflects the population served.” Making available training opportunities and education and identifying best practices helped MassHealth programs serve the members better. Under Ingrid’s leadership, the EHS Diversity Council regularly created publications entitled “Prisms of Diversity” featuring various geographic areas with cultural history and cuisine. Beginning in 2015, the Council provided quarterly brown bag speaker events for EHS employees on “hot topics” relating to health care, including Immigrant Advancement, Muslim Access to Health Care, Unconscious Bias, Veterans’ Issues, the Opioid Crisis, Aging Population, Vulnerable Workers, and Health and Racial Health Disparities. Ingrid also has written regularly about diversity and cultural competency and is a sought-after speaker on these topics—as well as on the work of litigating estate recovery for the MassHealth program.
Ingrid is now continuing her career in public service as a Senior Associate Attorney in the Office of Management at the University of Massachusetts Medical School. Massachusetts state government is so fortunate that she will continue to impart her experiences working in government programs serving low-income and vulnerable populations and with a large public academic medical center—as well as her service with bar and other organizations serving the legal and larger communities—to inculcate a culture of public service among future generations of law students and recent law graduates. She is an exemplary government lawyer and mentor. In “The Measure of Our Success: A Letter to My Children and Yours,” Marian Wright Edelman, the founder of the Children’s Defense Fund, wrote that you do not need “to be a big dog to make a difference . . . . You just need to be a flea for justice bent on building a more decent home life, neighborhood, work place, and America.” Ingrid has lived this exhortation through the example of her career in public service.
Renée M. Landers is Professor of Law at Suffolk University Law School and Faculty Director of the Health and Biomedical Law Concentration and Master of Science in Law: Life Sciences Program. This post is part of the ABA Administrative Law Section Series Celebrating Public Service; all the posts in the series are collected here. | https://www.yalejreg.com/nc/the-example-of-ingrid-schroffner-any-exemplary-government-lawyer-advocate-for-diversity-and-mentor-by-renee-m-landers/ |
Well, can you reuse coffee grounds?
Sure, you can, but it’s not really the best option if you’re crazy about coffee. You found this article, which leads me to believe you deeply care about your coffee and how it tastes.
Let’s discuss what’s going to happen when you reuse grounds, and see if it’s something that’s worth doing for its budget effectiveness, or just not worth sacrificing the flavor for.
Contents
- 1 Can You Reuse Coffee Grounds?
- 2 Benefits of Reusing Coffee Grounds
- 3 Drawbacks of Reusing Coffee Grounds
- 4 What Type of Coffee is Best for Reusing?
- 5 Being Economic About Your Coffee
Can You Reuse Coffee Grounds?
You can reuse grinds, and this is how you are going to do it.
Your grinds are going to come out of the brew basket completely wet, but you don't want to just leave those lying around to build up mildew and bacterial growth.
While coffee is very acidic and it's not likely that bacteria will grow on it in a short amount of time, that process still begins once the grinds exit the temperature safe zone of about 140° F.
To continue, you have to dry out those grinds. First things first: lay them out on a pan and put them in the oven for about fifteen minutes at 275° F.
This starts getting most of the excess moisture out, just don’t layer the pan too high. You want about half an inch worth of depth.
Thing is, you don’t want to over roast the grounds for too long. Eventually, they’ll just end up tasting burnt. Take them out of the oven at this point and prepare to transfer them to a pan that’s lined with tin foil.
That pan is going to sit out in the sunlight for about two to three days.
If you’re going to do this on a routine basis, you should have a spot where you can put up to four pans at a time (since you don’t want wet grounds sitting around until you can build up a big batch).
Slap a timestamp on each of them so you know how long they’ve been there. Another thing to consider is having the upcoming weather forecast handy.
You might not be able to reuse all of the grounds due to inclement weather, but reusing about 70% of your grounds isn’t uncommon.
If you’re wondering how it’s going to work out in your favor, we’ve listed the benefits of reusing coffee grounds below.
Benefits of Reusing Coffee Grounds
While it’s not everyone’s preferred method of saving on coffee, reusing your grounds has a few major benefits for your wallet, and the environment as well.
Saving Money
Everyone likes saving money. If you buy a 16 oz bag of coffee beans, and let’s say that’s about $8.99 for a medium-quality roast, then you can double down on the money you’ve spent.
You grind them up, use them, and let’s say that you’re using about 1 oz per cup in a single-cup machine.
That’s sixteen cups of coffee at about $0.57 each, but if you’re able to salvage enough for another 12 cups (you can’t save all the grinds), you’d only be spending about $0.33 per cup.
That could equal roughly a hundred dollars a year in savings, depending on how much coffee you drink.
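If you want to check the math yourself, here's the same back-of-the-envelope calculation as a tiny Python sketch. The prices and cup counts are just the assumptions from above.

```python
# Back-of-the-envelope coffee savings, using the assumed numbers above.
bag_price = 8.99    # price of a 16 oz bag of medium-quality beans
fresh_cups = 16     # 1 oz of grounds per cup in a single-cup machine
reused_cups = 12    # roughly 70% of grounds salvaged for a second brew

cost_fresh = bag_price / fresh_cups                  # ~$0.56 per cup
cost_reuse = bag_price / (fresh_cups + reused_cups)  # ~$0.32 per cup

print(f"fresh grounds only: ${cost_fresh:.2f} per cup")
print(f"with reused grounds: ${cost_reuse:.2f} per cup")
```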
Environmentally Conscious
Coffee grounds can either help the soil in ways that we almost can't imagine or add methane gas to the atmosphere if we let them rot in the landfills.
When it gets mixed in with other kitchen food scrap waste, it ends up being a blight on the earth.
If you just spread wet coffee grounds across your lawn, or more specifically, use them for gardening, they’re aerated enough to actually provide nutritional benefits to the soil.
Try New Flavors
Once you’re familiar enough with the taste of reused coffee grounds, you should be able to start toying with how they taste.
You’ll see in our drawbacks that reused grounds, as you might expect, lack the flavor of freshly ground coffee.
That’s okay; you can spice it up by adding flavor components into your brewed coffee.
Or, you can actually spice it up by adding spices into the reused grinds to bring some of that coffee zing back to life.
Try cinnamon, nutmeg, and even pumpkin pie spice if you’re feeling a bit adventurous. It makes a world of difference without wasting a full cup of freshly ground coffee.
Drawbacks of Reusing Coffee Grounds
There are some good reasons to reuse coffee grounds, but if you’re like us, sometimes these drawbacks are just a bit too much to trade-off in exchange for the economic boost.
Less Flavor
It’s an undeniable fact, and the very thing you expected to see when you came here: there’s just a lot less flavor in reused coffee grounds.
While that might not come as a shock, the flavor will diminish in accordance with your brewing method. If you’re making strong coffee in a percolator, the grinds are going to have almost no flavor left.
If you use a French press, your grinds are bigger, so they might actually have some more life to them.
For the most common type of brewing method, being a drip coffee maker, the results rest somewhere in the middle.
You get a mix between the flavor being sapped out, and the grinds being big enough to still have just a little bit more life in them.
Easy to Mess Up
If you don’t do this properly, your grounds are going to gather mildew and mold. It’s not good. You can’t just scoop out that one part that’s starting to grow green fuzz.
Since coffee grounds are acidic, they're harder for bacteria to start growing on.
That means that if mold does appear, the problem is severe, and the other grounds are likely a day or two away from completely molding as well.
Time-Consuming
It’s not exactly a simple task to constantly be rotating your coffee out while trying to dry it. Carving out that time, and doing it properly, is something you have to commit to.
Sure, you can save some money every year, but in your busy life is it really the best way to be spending your time?
We don’t particularly advocate for or against reusing your own coffee grinds, but it’s important to note how long it can take, especially at the beginning.
Bye-Bye to Caffeine
Caffeine is water-soluble, meaning once water is introduced to the grounds and run through them, the caffeine comes rushing out.
That’s good for your first cup or pot of coffee, but you’re just going to be drinking coffee-flavored water after the fact.
The health benefits are basically microscopic in comparison to what they were before, and you won’t get a jolt or a buzz off of your coffee the same way that you’re used to.
If you immediately rebrew your coffee in the coffeemaker, you might still get a hint of caffeine in that second pot.
What Type of Coffee is Best for Reusing?
For starters, you need to make sure it’s freshly ground.
If you’re reusing pre-ground coffee, you’ll be left with a bitter flavor in your cup.
If you’re grinding it yourself every morning, you can most certainly reuse your grounds with little to no problem.
But what type of coffee beans should you use?
You can use arabica, robusta, or even liberica coffee beans if you want—there's no major difference between reusing these.
That being said, you should aim to reuse light roast or medium roast coffee grounds only, and avoid dark roast altogether.
During the roasting process, oils and CO2 are extracted from each coffee bean (at the first and second “crack,” so called for the cracking sound the beans make during roasting).
When this happens, there are fewer oils and less moisture in each bean. That's good for storage, but it's not good for reusing them.
The beans are cooked at higher temperatures for longer. There is more flavor, to some, but it’s quickly diminished after the grounds are used one time.
Since light roast and medium roast beans are roasted for less time, there's arguably more coffee flavor and more oils/CO2 left in each bean. That's going to contribute to flavor.
Being Economic About Your Coffee
If you’re eccentric about your coffee flavor, then we don’t recommend reusing your grounds.
It’s just not going to create the same flavor profiles, and it will certainly give you inconsistency in your daily coffee regimen.
Every other cup of coffee you have will be watered down or lacking in flavor.
However, if you want to be economic about your coffee grounds, you can simply use a little less at a time when you make your coffee in the morning.
If you usually use five teaspoons in your drip brewer, use four, or four-and-a-half, then put the additional amount of coffee grounds in a separate container.
Somewhere between five to ten times of doing this, you’ll have enough for another pot. | https://insidethehub.com/can-you-reuse-coffee-grounds/ |
Offered: Sem 2 2010, Sem 1 2011, Sem 2 2011, Sem 1 2012, Sem 1 2013, Sem 1 2014
Course Coordinator: Tony Skinner
Course Coordinator Phone: +61 3 9925 4444
Course Coordinator Email: [email protected]
Pre-requisite Courses and Assumed Knowledge and Capabilities
Nil.
Course Description
This course illustrates the effects of various types of pollution on the earth’s environment and the impact of engineering works, and investigates management techniques to minimise degradation.
Objectives/Learning Outcomes/Capability Development
You will gain or improve capabilities in:
1. Technical competence
Ability to establish technical skills appropriate to the diversity of practice in the engineering industry.
• Ability to demonstrate critical thinking in relation to industry structure and practices
• Ability to apply knowledge of basic science and industry practice fundamentals
2. Integrative perspective
Developed broader perspectives, within the engineering industry and external to it, and also to demonstrate ability to work with complex systems.
• Ability to undertake problem identification, formulation and solution
3. Professional skills
Recognise and apply engineering industry skills, attitudes and professional standards.
• Ability to communicate effectively, not only within the engineering sector, but also with the community at large
• Ability to function effectively as an individual and in multi-disciplinary and multi-cultural teams, with the capacity to be an effective team member as well as to take on roles of responsibility
• Understanding of the social, cultural, global and environmental responsibilities associated with the engineering industry, and the principles of sustainability
• Understanding of, and commitment to, professional and ethical responsibilities
• Expectation and capacity to undertake lifelong learning
On successful completion of this course, you will be able to :
• describe the major components of the earth’s environment and their inter-relationships
• identify environmental degradation problems and solutions
• compare the benefits and adverse effects of engineering works
• describe the role of government authorities in monitoring and controlling the environment
• explain the principles involved in rehabilitation, restoration and reclamation
• demonstrate an understanding of ecologically sustainable development
Overview of Learning Activities
The learning activities included in this course are:
• attendance at lectures and laboratory sessions where syllabus material will be presented and explained, and the subject will be illustrated with demonstrations and examples;
• completion of tutorial questions and workshop projects designed to give further practice in the application of theory and procedures, and to give feedback on student progress and understanding;
• completion of written and practical assignments, individually or in teams intended to develop effectiveness in a team environment; this will consist of numerical and other problems requiring an integrated understanding of the subject matter; and
• private study, working through the course as presented in classes, laboratories and supplementary learning materials, and gaining practice at solving conceptual and numerical problems.
Overview of Learning Resources
Students will be able to access course information and learning materials through the Learning Hub (also known as online@RMIT) and will be provided with copies of additional materials in class. Lists of relevant reference texts, resources in the library and accessible Internet sites will be provided where applicable. Students will also use laboratory equipment for experiments within the School during project and assignment work.
Overview of Assessment
The assessment for this course comprises a mixture of written and assessable practical assignments, progress tests and a final exam in either short-answer or multiple-choice format, or a mixture of the two. Written assignments and combined team project work will be used to provide feedback on progress through the course. | http://www1.rmit.edu.au/browse;ID=041997heparta |
To ask Her Majesty's Government what assessment they have made of the impact of light pollution on wildlife and the environment.
Answered on
20 January 2021
Defra has published or contributed to a range of assessments of the impact of artificial light on insects and wider biodiversity, as well as global and national assessments of the drivers of biodiversity loss more generally.
Following publication of the Royal Commission on Environmental Pollution’s report, ‘Artificial light in the environment’ in 2009, Defra has supported assessments of impacts of artificial light on insects and on other organisms such as bats. These are published on our science website. Defra has also funded or co-funded national and international assessments of drivers of change on insects and wider biodiversity such as the global IPBES Assessment Report on Pollinators, Pollination and Food Production, which notes effects of light on nocturnal insects may be growing and identifies the need for further study.
There have been a number of externally funded studies which have highlighted potential impacts of artificial light pollution on insects, but based on the current available evidence, artificial light is not considered one of the main drivers of species decline. We are confident that we are focusing and taking action on the issues that will make a real difference to insect pollinators.
We recognise that there is ongoing research into the topic and together with our academic partners, we will keep this under review. | https://questions-statements.parliament.uk/written-questions/detail/2021-01-06/HL11814/ |
The National Educational Psychological Service (NEPS) of the Department provides a comprehensive, school-based psychological service to all primary and post primary schools through the application of psychological theory and practice to support the wellbeing, academic, social and emotional development of all learners. NEPS provides its service to schools through casework and through support and development work for schools. Individual casework service involves a high level of psychologist collaboration with teachers and parents, often also working directly with the child/young person. NEPS may become involved with supporting individual students where the school’s Special Education Teaching team or Student Support Team feels that the involvement of the psychologist is needed. Psychologists may provide consultation in relation to appropriate therapeutic interventions to be delivered in the school setting and engage in direct work with an individual student as appropriate.
At post primary level, counselling is a key part of the role of the Guidance Counsellor, offered on an individual or group basis as part of a developmental learning process, at moments of personal crisis but also at key transition points. Each post primary school currently receives an allocation in respect of guidance provision, calculated by reference to the approved enrolment. Guidance allocations for all schools were increased in the 2020/21 school year in response to Covid 19. The Guidance Counsellor also identifies and supports the referral of students to external counselling agencies and professionals, as required. The Guidance Counsellor is key in developing and implementing innovative approaches to wellbeing promotion on a whole schools basis though the school’s Guidance Plan.
In the event that the need for a more specialised intervention or counselling is identified by the NEPS psychologist, a referral is made to an outside agency for evaluation and ongoing support. The NEPS psychologist can in consultation with the Guidance Counsellor identify the most appropriate referral pathway and support schools with the onward referral to Child and Adolescent Mental Health Team (CAMHS), HSE Primary Care/Community Psychology teams, or an identified local community based specialist mental health service.
In addition to casework, NEPS psychologists work with teachers to build their capacity/capability to promote the wellbeing and mental health of children and young people in schools. NEPS teams offer training and guidance for teachers in the provision of universal and targeted evidence-informed approaches and early intervention to promote children’s wellbeing and social, emotional and academic development, through initiatives such as the Incredible Years Social Emotional Learning Programmes and the FRIENDS Resilience Programmes. These programmes have been welcomed by schools and their impact positively evaluated.
NEPS is currently developing a range of workshops on the promotion of wellbeing and resilience in schools, which includes trauma-informed approaches. The approaches outlined in the workshops are based on research findings, on the experience of experts in their fields and on the experience of practising psychologists working in schools. The workshops will be available to build the capability of school staff in both primary and post-primary settings, including school leaders, teachers and SNAs. Work is underway to identify schools for inclusion in a pilot of the workshops. In selecting schools, a mix of DEIS, non-DEIS and urban and rural schools will be included. Following the pilot, a national roll-out is planned during the next academic year.
Pádraig O'Sullivan (Cork North Central, Fianna Fail)
Link to this: Individually | In context | Oireachtas source
207. To ask the Minister for Education and Skills the current level of service from NEPS at primary and post-primary level; and her plans to increase the NEPS support to schools. [17729/21]
Norma Foley (Kerry, Fianna Fail)
Link to this: Individually | In context | Oireachtas source
NEPS’ sanctioned psychologist numbers have grown from a base of 173 whole-time equivalent psychologists (w.t.e.) in 2016, through the intervening Budget increases in 2017-2019 to 204 w.t.e. psychologist posts. This Government remains firmly committed to the maintenance of a robust and effective educational psychological service. In this connection, as part of a package of measures to support the reopening of our schools the provision of an additional seventeen psychologist posts to NEPS was announced bringing overall sanctioned numbers to 221 w.t.e. psychologist posts.
Currently NEPS has 206 w.t.e. psychologists. NEPS is actively recruiting to reach its sanctioned numbers. The Department has engaged with the Public Appointments Service, who are planning a recruitment competition for NEPS in Q2 of 2021. | https://www.kildarestreet.com/wrans/?id=2021-04-01a.551 |
The Energy Drone Coalition and DRONEII.COM recently released a joint report, “Drones in the Energy Industry,” an industry-specific benchmark of the use of UAVs in the energy industry and analysis of potential future developments.
A mix of UAV leaders from the energy and engineering industries, including energy asset owners using drones as well as drone-as-a-service providers (DSPs), participated in this study.
“This survey was created in July 2018 and distributed within the Energy Drone Coalition network in the North American region, with 214 total surveys completed,” said Director Sean Guerre. “The survey is planned to be conducted again in 2019 to assess operational changes, market growth and technological developments of commercial drones in the energy industry, and we look forward to having even more respondents share their insights.”
Key areas include:
- The Drone Market (time in market, organization UAV setup)
- Operational Setup (in-house, outsource or hybrid)
- Technology Stack (hardware, software, payloads)
- Operations (types of UAV use cases, frequency of flights)
The report found that about two-thirds of the energy companies that responded to the survey are currently operating drones, 50% of them as in-house operations, but that the majority of drone operations during 2018 were in the “proof of concept” phase (60% have fewer than ten flights per month). Multi-copters are the most utilized drone configuration, and the use of infrared sensors is prevalent in comparison to other industry sectors. Endurance, range, reliability and flexible utilization are cited as the core developments required by the respondents to grow or scale their drone operations.
The complimentary report is available for download here. | https://defensetechconnect.com/2019/04/05/the-energy-drone-coalition-releases-joint-report-drones-in-the-energy-industry/ |
Updated Friday November 9, 10:48am:
Feminine prettiness was in the air last night as stars adopted the most romantic of dresses for their public appearances. The Duchess of Cambridge attended a gala celebrating the birthday of St Andrews University wearing a floor-length Temperley London lace dress, while Laetitia Casta chose a Chanel tulle confection to support Karl Lagerfeld at the Paris opening of Chanel's touring exhibition The Little Black Jacket.
In contrast, Kristen Stewart wore a form-fitting A.L.C. black leather dress to promote On The Road and Hilary Rhoda adopted the boyish trouser-suit trend for a dinner in New York.
Updated Thursday November 8, 11.09am:
Red carpet film premieres took centre stage last night. Cameron Diaz arrived in a monochrome Stella McCartney dress - on the arm of co-star Colin Firth - for the London premiere of Gambit. In New York, Keira Knightley chose a lace Valentino Couture autumn/winter 2012-13 gown to attend a screening of Anna Karenina, while burgeoning fashion icon Elle Fanning chose a strapless Oscar de la Renta gown for a Ginger and Rosa screening in LA.
Updated Wednesday November 7, 10.22am:
President Barack Obama won his second term in office and America's stars ensured they all had their say, arriving in droves at the polling stations; Sarah Jessica Parker professed her Obama loyalty wearing a slogan T-shirt, while Milla Jovovich and January Jones took a more laid back approach - opting for skinny jeans and flats.
Anticipation continues to build around tonight's Victoria's Secret show - Miranda Kerr was spotted arriving for rehearsals looking impeccably groomed - while Rihanna, emerging after an intense rehearsal of her live performance, left the venue in a signature streetwear and heels ensemble.
Updated Tuesday November 6, 10.53am:
Kristen Stewart flew the flag for British style while appearing on The Tonight Show with Jay Leno, wearing a Peter Pilotto pre-spring/summer 2013 dress to face the famed interviewer.
Meanwhile, it was a night of premieres - Marion Cotillard chose a Dior gown for the AFI Fest screening of Rust And Bone, Halle Berry was in Helmut Lang for the Cloud Atlas Berlin premiere and Penelope Cruz opted for Versace to walk the red carpet in Rome for Venuto Al Mondo.
Updated Monday November 5, 11.17am:
Feminine style came to the fore over the weekend as an array of stars stepped out in the girliest of designs. Jessica Alba attended a charity gala in a pale pink gown by Valentino, covered in intricate embroidery, while Amy Adams wore a full-skirted Dolce & Gabbana prom dress to the LA premiere of On The Road and Rita Ora showed off McQ's autumn/winter 2012-13 florals at the Mobo Awards.
At the other end of the spectrum, Kristen Stewart erred on the more boyish side in a Balenciaga jumpsuit for her latest red carpet outing.
November 8 2012
To attend the University of St Andrews 600th Anniversary Fundraising Auction, the Duchess of Cambridge wore a Temperley London black lace dress and carried a red clutch.
November 8 2012
She wore an A.L.C. black leather and jersey dress with strappy Barbara Bui sandals to a screening of On The Road in New York.
November 8 2012
She wore a grey dress with a navy coat and Aperlai red velvet shoes for an appearance on Good Morning America.
November 8 2012
Laetitia Casta wore a black feathered Chanel dress with stiletto heels to attend the Paris opening of Chanel’s The Little Black Jacket exhibition.
November 8 2012
For a night out in New York, she wore a shearling coat with a camouflage shirt, skinny jeans and tan boots.
November 8 2012
Pixie Geldof wore a long black dress to attend the Uniqlo and Conde Nast in-store event where she performed live with her band, Violet.
November 8 2012
Berenice Marlohe arrived to see Skyfall co-star Javier Bardem receive a star on LA's Hollywood Walk Of Fame wearing an Elie Saab nude lace dress and gold stiletto heels.
November 8 2012
For an LA screening of Ginger & Rosa, Christina Hendricks wore a burgundy dress with black boots.
November 8 2012
She attended a special New York screening of Rust & Bone hosted by Christian Dior wearing a crystal-embellished gown and black stiletto heels by the fashion house.
November 8 2012
To attend the LA premiere of Lincoln during the AFI Film Festival, Lucy Liu wore a printed Roberto Cavalli cape over a Roland Mouret dress and carried a clutch.
November 8 2012
Elle Fanning wore a Dolce & Gabbana white shirt, a black skirt, lace court shoes and a beaded headband to attend a screening of Ginger & Rosa in LA.
November 8 2012
Hilary Rhoda attended a dinner to celebrate the Engine Blocks Installation by Shelter Serra wearing a mottled suit with black stiletto heels. | https://www.vogue.co.uk/gallery/12-14 |
Pieces from the Zuhair Murad Spring/Summer ’17 collection were among the designs displayed at the “Haute Dentelle” designer lace exhibition at the Museum of Fashion and Lace in Calais, France.
Zuhair Murad pieces showcased at the exhibition include a see-through, long-sleeved metallic lace dress with crystal-studded chromatic bursts and raised ruffles, held together by a bow-shaped black belt, and a tea-length, champagne-colored tulle dress with long sleeves and a neckline decorated with floral appliqués.
The “Haute Dentelle” exhibition is designed by the Museum of Fashion and Lace to offer “a unique insight into the contemporary uses by fashion designers of lace woven on Leavers loom”. The exhibition is curated by Sylvie Marot and also displays works from Dior, Iris Van Herpen and Chanel. It runs from 9 June 2018 to 6 January 2019. | https://belijose.com/2018/09/11/the-museum-of-fashion-and-lace-showcases-zuhair-murads-beautiful-lace-designs/?shared=email&msg=fail |
It was a rainy day in autumn as I was sitting on the love seat by my window that looked out into the city. I was content and cozy drinking my hot cup of Earl Grey tea, watching the orange, yellow, and red leaves fall from the trees. My cat Izzy was happily purring in my lap. The only thing that was disconcerting was that I had been noticing more and more stuff disappearing from the apartment. None of my personal stuff, but stuff that my roommate and I communally shared, and stuff that was just hers.
Lily and I had been best friends for years. Lily had always been a very quirky person. She grew up poor; at the age of 13 her mother killed herself and her father abandoned Lily and her sister. After that, Lily basically fended for herself, living on the streets until her grandmother heard that her daughter had died and her granddaughters were wandering the streets of New York City.
Lily was tough, street-smart, and a very loyal friend. She was sweet but had a temper that could flicker to the other side faster than a blink. She was a hippie, a vegetarian, and a masseuse. She played guitar and sang odd songs.
Was I going crazy, or had all of Lily's plants in the living room disappeared? Where were the creepy paintings she had hung up, the ones she'd bought from a flea market? And shouldn't she be home by now, baking cookies and singing in the kitchen?
Maybe I was overthinking it. She was probably at the farmers' market, talking to some homeless people, or had met some cute guy at the coffee shop downstairs and was getting lucky.
I should do something to distract myself from thinking maybe Lily had been murdered and raped, I thought. So I got up from the window seat, my cat annoyed that I'd woken him from such a peaceful slumber, and made another cup of tea. I went to my room, put on my coat and scarf, and left the apartment to take a walk around the city and maybe run some errands.
The autumn air was cool and crisp. Autumn was my favorite time of year. It's cool, but not freezing, and everything is so beautiful. The sky is the brightest blue I have ever seen it all year. I love the smell of firewood, of cinnamon, and of hot coffee pouring out of the local coffee shops.
The cool air felt amazing on my face as I walked briskly past bookstores and restaurants. Geese were flying through the air in a V-formation, migrating for the winter. Squirrels were running about Central Park, gathering bundles of acorns for their upcoming hibernation. Leaves were falling everywhere and being blown in all directions. The wet smell of the leaves was intoxicating.
I went to the farmers' market and bought some apples to make apple pie, two pumpkins to carve with Lily, and the tomato sauce I always purchased for making spaghetti. I went to the bookstore down the street, bought two books that caught my eye, and walked home.
When I got into the apartment, I noticed that the fall-scented candles on the kitchen counter and living room table had been lit and that the dishwasher was running.
"Lily?" I called out, hoping this wasn't another chapter of Chloe Goes Crazy.
"I'm here, love!" Lily replied. "I'll be out there in a second. Just straightening up my room a bit."
I breathed out a sigh, relieved my mental state was intact and that my best friend and roommate was home safe.
Lily came out of her bedroom, smiling nervously.
"Hi hun," she said. "How was your day?"
"Good, typical day at work," I answered. I thought this would be an opportune time to bring up the subject of Lily's vanishing items. I was either going to risk appearing insane or get some helpful, reassuring answers.
"So Lily, I was wondering, where did all your plants go? And where is the curtain of beads that hung from your bedroom doorway? Where's the toaster, and half of the refrigerator magnets? Are you giving things away to charity?"
"Look, Chloe," she replied. "I bought an apartment a couple weeks ago and I've been moving my things out bit by bit over the last few weeks, hoping you wouldn't notice and hoping to wait until the last minute to bring the difficult subject of me moving out".
"What? But why? Why are you moving out? I thought you liked living together," I said, bewildered.
"I do like living with you Chloe, but sometimes I don't. We fight about roommate stuff like cleanliness and orderliness, and I'm just not as clean and orderly as you. I want us to stay friends, best friends, and I couldn't see that lasting with us being roommates. I love you."
"You're the best friend to be so considerate and sweet. I love you too, Lil'."
And we have the best friendship two women could ever have, and we always will. | https://www.theodysseyonline.com/chloe-and-lilys-autumn-in-manhattan |
Uric acid is a substance produced naturally by the breakdown of purine (a compound found in many foods and in the body's own cells). When it is present in excess in the body, crystals composed of this substance are formed.

These crystals are deposited in various parts of the body, mainly in the joints and kidneys, causing pain and other complications.

A lack or excess of uric acid in the body can be caused by certain diseases (such as leukemia, obesity, kidney disease and anemia) and by factors related to lifestyle (consumption of alcohol and processed foods, for example).
Where does purine come from?
Purine is produced and released into the bloodstream when certain food components are broken down during digestion; it is also produced naturally in the body.
Purine is also found in foods such as red meat, seafood, some types of grains and legumes (beans, for example) and in alcoholic beverages (mainly beer).

Therefore, anyone who has high uric acid levels should avoid eating foods that contain purine.
However, normal levels are necessary for the proper functioning of the body, as they are fundamental for the construction of genetic material (DNA), in addition to being responsible for the coloring of urine and the dilation of blood vessels.
Where is uric acid found?
Once produced in the body, part of the uric acid remains in the bloodstream and the other part is filtered by the kidneys and eliminated in the urine. Understand:
On urine
Uric acid is found in the urine due to the filtering process done by the kidneys. But when this substance is produced in excess, urine alone cannot eliminate it from the body.
To avoid damage caused by excess of the substance, it is important to monitor the amount of uric acid present in the urine. This can be done through laboratory tests.
In addition to the test, drinking a lot of water during the day also helps to prevent uric acid crystals, because the liquid helps the kidneys eliminate the substance before it can crystallize.
In the blood
Urine eliminates some of the uric acid. The other part remains circulating throughout the body through the blood.
The levels of uric acid in the blood can be high or low. Studies have shown that people with high uric acid in the blood are more likely to develop cardiovascular disease.
To identify the amount of uric acid in the blood, laboratory tests are also required.
What is high uric acid?
When the amount of uric acid in the body is high, the condition is called hyperuricemia. In general, excess is represented by values above 6 mg/dL (in women) and above 7 mg/dL (in men).
This condition can happen for two reasons: either the production of this acid has increased, or its elimination through the urine is insufficient.
High uric acid is largely a result of modern, hectic living: consumption of processed foods, low water intake and physical inactivity that weakens the joints. The use of some medications can also have an influence.
The diagnosis is made by laboratory tests that measure the amount of uric acid in the blood.
What can high uric acid cause?
Uric acid, when missing or in excess, is related to various diseases and health complications. Between them:
Crystals in the kidneys or joints
When uric acid is in excess, the kidneys are unable to filter the blood properly and, as a result, crystals composed of this acid are formed.

These crystals lodge in the joints, kidneys and gallbladder, are difficult to eliminate and cause severe pain.
In general, they are eliminated naturally in the urine, which causes pain and discomfort. Therefore, consuming plenty of water can help eliminate these crystals more quickly.
Purine-free eating and exercise can also facilitate the process.
In some more complex situations, uric acid will be eliminated with the help of medications, which can be prescribed by a rheumatologist or nephrologist.
Kidney problems
Under normal conditions, the kidney filters uric acid and eliminates the excess in the urine.

But when uric acid crystals form in the kidneys, there is a greater chance that diseases and complications will develop there, such as kidney stones and chronic or acute kidney failure.

This is because the crystallized substance remains inside the kidney, which prevents the organ from filtering the blood properly.
Gout and arthritis
Gout is a type of inflammatory arthritis caused by the accumulation of uric acid crystals in the joints. The disease is most prevalent among adult men.
About 20% of people who have hyperuricemia (excess uric acid) will develop gout.
The initial symptom is an attack that lasts between 3 and 10 days, with swelling in the joints of the foot accompanied by severe pain. Attacks typically subside on their own within about a week.

The next gout attack can take months or even years to occur. Therefore, many people end up choosing not to have treatment.

But if gout is not treated, the joints affected by the disease can suffer permanent deformation in the long run.
Heart problems
People who already have gout are more likely to develop cardiovascular disease.
This is because the condition stimulates constant inflammation that ends up damaging the arteries.
Blood pressure can also increase, as uric acid promotes sodium retention, causing pressure fluctuations.
What is low uric acid?
When there is a lack of uric acid in the body, the condition is called hypouricemia. It is usually characterized by levels of the substance below 2.4 mg/dL (in women) and 3.4 mg/dL (in men).
This is a rare condition that usually does not cause any symptoms.
Low uric acid can happen when there is drug use or kidney and liver problems. It is divided into two types:
- Primary: when the disease is permanent. The hereditary factor is related to this type of hypouricemia;
- Acquired: when the condition is intermittent, that is, “come and go”. This type can be triggered by drug use or changes in the functioning of the kidneys and liver.
Tests: how to know uric acid levels?
Some tests may be ordered by your doctor to check the amount of uric acid in your body. To perform this analysis, laboratory tests of blood and urine collection can be done.
The blood test requires fasting for at least 3 hours. For the urine test, the first urine of the day should be collected, or collection should follow the laboratory's instructions.
Before the tests, tell the doctor if you take any medication or dietary supplement. In some cases, it will be necessary to pause that treatment.
Results vary between laboratories and only a medical analysis can accurately interpret these tests.
Uric acid, whether lacking or in excess, can cause a lot of damage to the body.

In addition to forming crystals and causing pain and swelling in the joints, this substance is also linked to gout, anemia and cardiovascular disease.

The Healthy Minute also brings content about tests and medicines that can help ensure a better quality of life! | https://hickeysolution.com/what-is-uric-acid/ |
One of our common goals in the rehabilitation of a multitude of conditions is to increase range of motion. In this months’ Research Refresh we looked at a systematic review of human research that compares strength training with stretching to improve range of motion.
According to the review, both strength training and stretching resulted in improvements in range of motion, in both short-term and long-term interventions. The decision with regard to which programme to pursue for an individual patient may hinge on the additional benefits, goals and requirements of that patient.
Do you know how strengthening and stretching programmes differ, yet can affect the same outcome measure?
A reduced range of motion
A reduction in range of motion can occur as a result of many conditions or pathologies, and its causes can be divided into the following categories:

- Mechanical: muscle injury or pathology; ligament/tendon injury or pathology; osteoarthritis; pain
- Neurological: loss of proprioception; nerve injury or dysfunction; central sensitisation; hypertonicity; muscle weakness or atrophy
- Infection: swelling; pain; joint infection
Range of motion can be reduced actively, referring to a reduced range of motion in the active movement of the patient. A restriction in active movement can be as a result of muscle weakness or restriction, pain, a physical inability to complete the full range of motion, or instability of the region.
Passive range can be restricted as a result of pain, contracture of the muscle or joint capsule, or restrictions within the joint. When we test range of motion in our patients, we are testing the passive range.
An understanding of the cause of the reduced range of motion can guide us in our choice of whether to opt for stretching or strengthening as a treatment intervention.
Stretching to increase range of motion
We will commonly use stretching in our patients to increase range of motion, both during treatment and as part of the home programme.
Static stretching is performed by the therapist or owner stretching out the limb or muscle group and holding it for a period of time. Dynamic stretching requires the patient to move into and hold a stretched position. This can be specific to the sport or function that the patient needs to perform. An example of a dynamic stretch would be placing the forelimbs on a raised surface such as a peanut ball and doing cookie stretches to increase the extension of the back and hips.
Dynamic stretching will impact the whole body, increasing full body circulation and cardiovascular rates, and improving neuromuscular control and proprioception. It can help us to achieve specific goals in addition to increasing range of motion.
Stretching will improve range of motion in a number of ways, increasing stretch tolerance through neuromodulation, and through histological changes at the level of the musculotendinous unit.
Strengthening to increase range of motion
Strengthening is primarily used to address muscle weakness or asymmetry, or to improve function. There are many ways to incorporate strength training into our patients’ routines. All that is required is the addition of resistance to an exercise, which can include increasing the load on a limb through lifting another limb/s, the use of the theraband, or hydrotherapy.
Strength training has been shown to increase fascicle length, improve muscle coordination, improve reciprocal inhibition, and improve stretch-shortening cycles, as well as to increase range of motion, but studies comparing strengthening to stretching have shown conflicting results.
Strengthening VS Stretching
In Strength Training versus Stretching for Improving Range of Motion: A Systematic Review and Meta-Analysis, 194 peer-reviewed journal articles were evaluated, and 11 were finally eligible for inclusion in the review. In this review, no statistical difference could be shown between strengthening and stretching as means of promoting range of motion, and neither protocol could be favoured or recommended above the other.
The articles reviewed were all human-based, and this certainly does add some complications as we try to extrapolate these findings to our animal patients. For one, there are limitations in both stretching and strengthening protocols in animals, with many of the techniques used in humans not relevant to our equine or small animal practices simply because they are impractical to perform.
Considering other factors
As we make decisions clinically to guide our treatment interventions, we must consider the preferences of the patient, the concurrent treatment goals, the cause of reduced range of motion, and the ability and compliance of the owner. In some cases, static stretching may be simple, easy and enjoyable for both owner and patient, and can be incorporated into a leisurely massage and cuddle session.
For other owners and patients, a more active approach will be more enjoyable, and incorporating active stretches into a training routine or exercise routine may be far more beneficial.
Strengthening exercises can have the additional advantages of improving neurological deficits, improving muscle symmetry, strengthening specific muscle groups, improving joint health, and addressing functional or competition goals.
Conclusion
Both strengthening protocols and stretching protocols can improve range of motion in our patients. It is up to us to use sound clinical reasoning to establish the cause of the dysfunction and the goals of the patient when deciding which intervention to prioritise.
Resources
If you would like to learn more about clinical reasoning or exercise protocols, there are some phenomenal webinars in our members platforms for you to learn from:
- Understanding Exercise Physiology, a four-part series with Leslie Eide
- Therapeutic Exercise for Every Dog, with Debbie Gross Torraca
- Force-Free Exercise in the Canine, with Robert J Porter
- Biomechanics of Tissue Healing and its Relationship to Therapeutic Exercise, with Carrie Adrian
| https://onlinepethealth.com/2021/10/14/stretching-or-strengthening-for-improving-range-of-motion/ |
McGrath, Alister E. Christian Theology: An Introduction (5th ed.). Wiley-Blackwell, 2011.
This book was comprehensive but not exhaustive. Christian Theology is indisputably a “textbook” proper, i.e., “a book used as a standard work for the study of a particular subject.” What the book lacks in imagination, i.e., style, tone, voice, allusion, etc., McGrath compensates for with factoid and reference factors, e.g., a useful (shock!!!) index, glossary, and primary source citations. Such being the case, the book will sit on my bookcase next to other useful theological reference works, e.g., Lewis and DeMarest’s Integrative Theology (3 vol.), Beeke and Ferguson’s Reformed Confessions Harmonized, Hodge’s Outlines of Theology, Beeke and Jones’s A Puritan Theology, etc.
The book is truly comprehensive: “The present volume therefore assumes that its reader knows nothing about Christian theology. . . . This book is ideally placed to help its reader gain an appreciation of the rich resources of the Christian tradition. Although this is not a work of Catholic, Orthodox, or Protestant theology, great care has been taken to ensure that Catholic, Orthodox, and Protestant perspectives and insights are represented and explored” (xxii-xxiii). McGrath gladly admits “My aim in this work has not be to persuade but to explain” (xxiii). That is good in one sense, but bad in the sense that McGrath does not provide direction on what ideas the reader’s mind-trap ought to go “slam!” on.
In addition to the already mentioned “factoid and reference factors,” the book’s structure was very helpful. McGrath wants to expose readers to the “themes of Christian theology,” but he also wants to “enable them to understand them” (xxvii). Thus, the book is split into three parts: Part 1 covers the “landmarks” of Christian theology, i.e., the historical development of Christian theology neatly broken into four periods (Patristic, c. 100-700; Middle Ages/Renaissance, c. 700-1500; Reformation, c. 1500-1750; Modern, c. 1750-the present); Part 2 covers “Sources and Methods,” i.e., prolegomena: the quadrilateral of Scripture, Tradition, Reason and Religious Experience; the ideas/categories of divine revelation and natural theology; and a high-level overview of different approaches to the relationship between philosophy and theology; Part 3 covers “Christian Theology” in its traditional creedal outline, i.e., “We shall use the structure of the traditional Christian creeds as a framework for our exploration of the leading topics of Christian theology” (197). This structure is where the book is at its strongest, serving the author’s aim for his readers to know and understand the themes of Christian theology.
My undergraduate degree is in Religion and Philosophy, so I enjoyed Part 2, Chapter 8, “Philosophy and Theology: Dialogue and Debate,” and Part 3, Chapter 17, “Christianity and the World Religions.” Nothing new therein, but thoroughly enjoyable. Like shaking up and searching through the catch-all “junk drawer” in a home, revisiting those chapters stirred up a bunch of lost, “wellwouldyoulookatthat” and “ha, cool!” philosophic ideas and memories. 😉
It was a bit of a chore to trudge through 450+ pages of dry academic prose that attempts objectivity, but it was well worth it. Having another good reference work that has been thumbed through and heavily underlined with marginal notes is always a good thing; later in life, when the brain-gears are getting rusty and the recall and recollection skills are taxed with the weight of decades, I will be even more thankful.
My 8 word aphoristic review: A non-scintillating but thorough rehearsal of Christian theology. As McGrath may say, Cheers! | http://treeandtheseed.com/reading-notes-christian-theology-by-alister-e-mcgrath/ |
From phone camera snapshots to lifesaving medical scans, digital images play an important role in the way humans communicate information. But digital images are subject to a range of imperfections such as blurriness, grainy noise, missing pixels and color corruption.
A group led by a University of Maryland computer scientist has designed a new algorithm that incorporates artificial neural networks to simultaneously apply a wide range of fixes to corrupted digital images. Because the algorithm can be “trained” to recognize what an ideal, uncorrupted image should look like, it is able to address multiple flaws in a single image.
The research team, which included members from the University of Bern in Switzerland, tested their algorithm by taking high-quality, uncorrupted images, purposely introducing severe degradations, then using the algorithm to repair the damage. In many cases, the algorithm outperformed competitors’ techniques, very nearly returning the images to their original state.
The researchers presented their findings on December 5, 2017, at the 31st Conference on Neural Information Processing Systems in Long Beach, California.
Artificial neural networks are a type of artificial intelligence algorithm inspired by the structure of the human brain. They can assemble patterns of behavior based on input data, in a process that resembles the way a human brain learns new information. For example, human brains can learn a new language through repeated exposure to words and sentences in specific contexts.
Zwicker and his colleagues can “train” their algorithm by exposing it to a large database of high-quality, uncorrupted images widely used for research with artificial neural networks. Because the algorithm can take in a large amount of data and extrapolate the complex parameters that define images—including variations in texture, color, light, shadows and edges—it is able to predict what an ideal, uncorrupted image should look like. Then, it can recognize and fix deviations from these ideal parameters in a new image.
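To make the training idea concrete, here is a minimal, hypothetical sketch of the kind of learning loop described: take clean images, deliberately corrupt them, and train a small convolutional network to undo the damage. It uses PyTorch and random tensors as a stand-in for a real image database; the actual architecture and training details of the UMD/Bern algorithm are not given in this article, so none of the names or parameters below come from the paper itself.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """A deliberately small stand-in for the kind of network described."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict the corruption as a residual and subtract it.
        return x - self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    clean = torch.rand(8, 3, 64, 64)               # stand-in for clean photos
    noisy = clean + 0.1 * torch.randn_like(clean)  # purposely degrade them
    loss = loss_fn(model(noisy), clean)            # learn what "ideal" looks like
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Trained this way on a large, varied image collection, a network implicitly learns the statistics of uncorrupted images, which is what lets it recognize and correct deviations in new inputs.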
Zwicker noted that several other research groups are working along the same lines and have designed algorithms that achieve similar results. Many of the research groups noticed that if their algorithms were tasked with only removing noise (or graininess) from an image, the algorithm would automatically address many of the other imperfections as well. But Zwicker’s group proposed a new theoretical explanation for this effect that leads to a very simple and effective algorithm.
Zwicker also said that the new algorithm, while powerful, still has room for improvement. Currently, the algorithm works well for fixing easily recognizable “low-level” structures in images, such as sharp edges. The researchers hope to push the algorithm to recognize and repair “high-level” features, including complex textures such as hair and water. | https://jpralves.net/post/2017/12/11/new-algorithm-repairs-corrupted-digital-images-in-one-step.html |
We have a new obsession around here. It’s Monopoly.
At any given opportunity, my children will pull it out and begin playing. It’s surprising to me, really, how often they will beg me to play it with them and I find myself saying, “Are you kidding? We have to leave in thirty minutes!”
“Please?” they will whine then, “please, please, please?” with the blithe carelessness children have for time.
I usually cave and we end up forgetting lunch and extend bedtime. I play with them, but not only because I usually win. (And they still want to play. I’m in awe.) And not even because I’ve waited a long time to find anyone else as in love with it as I was as a kid.
I agree to play it so much because Monopoly has some fantastic lessons. (And before you roll your eyes, let me say, of course it’s okay to play it just for fun. Not everything has to have a lesson.)
However, if you're an overthinker like me and you appreciate myriad reminders of frugality, budgeting and cash reserves, you'll know where I'm coming from. Otherwise, maybe it's best to go read about how to win at Monopoly each and every time.
Here are three lessons “the world’s most popular game” has taught me.
Pay attention
Children (did I say that out loud? I meant people – in general, but let’s stay focused) tend to have tunnel vision, especially when something looks fun. I find that Monopoly is a fantastic reminder to get them to be aware of their surroundings.
When a property goes to auction, my children almost always pass on it if they're not actively seeking it to complete a monopoly, or if they think it's unimportant for whatever reason. (The light blue properties, for instance, are treated like trash and sold back to the bank with the least hesitation.) Here's where I remind them.
“Look, I’m picking it up for a song.”
Shrug.
“No, look!” I insist, as I turn back around and resell the property to the bank and make some extra cash or hold it until it becomes obvious that it’s valuable to someone else wanting a monopoly. It’s been a hard lesson for my children to learn that even if they’re not interested in a property and it isn’t as expensive or high rent as Park Place or Boardwalk, it’s still a great way to make some money by what we now call “flipping.”
Also related to the auction is keeping an eye on what the other players have in terms of money and / or properties. Many a time, it is a good idea to let a property go to auction and not buy it for asking price if the other players don’t have ready cash available. My children rarely notice this and happily pay asking price if they’re excited about landing on a past favorite.
In terms of developing the art of paying attention, Monopoly is as good a game as the Where's Waldo puzzle books or playing Spot It and Spot It Jr. with younger children.
It teaches them that gathering information at all stages of the game – not just when it’s your turn – is a fantastic skill to develop.
Currency is not Value
My children never, ever want to part with their hundred dollar notes. Never. Ever. And this is not an exaggeration.
If there is ever a time that they have to pay fifty dollars, they would rather gather up all their change in five and one dollar notes than break the hundred dollar notes.
Also, once they own a specific property, even if they owe another player rent, they will burn through all their cash and refuse to liquidate the property, claiming they have "no money."
Indeed, they will make all kinds of arrangements to simply keep playing. It’s fascinating to watch the odd combinations and permutations they come up with – including debts, forgiveness of said debts, even paying each others’ rents!
At some point, my husband declares, they’re not even playing Monopoly; they’re playing “rotten economy,” if such a game exists.
“So what is money?” my daughter finally asked at the dinner table the other day after a long conversation with my husband trying to explain the concepts of money, price, value and currency.
She may not have got it all, but at least the conversation had begun. And I understood that based on the classical model of education, they are still in the grammar stage and money versus currency is definitely a logic stage conversation, but there had been a hint in that direction.
“What is money, then?” she asked. I wanted to applaud. She’s only eight. It took me until I was in my mid-twenties to ask that question.
Fortunes change, be kind
This is one we all stumble on, but one specific child (I won’t mention who) really, really likes to win. I mean, really. And this specific child likes to rub our noses in the dirt when such a victory is about to take place, takes place and after it takes place.
I’m all for celebrating, but learning to be kind has been one of the best lessons from this game. And yes, while I will say that there is a tipping point after which fortunes certainly can not change, we have had some very interesting reversals.
Helping my children to manage their emotions and temper both their wins and losses has been challenging, to say the least. What are the chances that I would get one of each: a child who loves to win and one who hates to lose? (That sounds redundant, but I assure you, it's not.)
So we have to learn, I guess, in one word, humility. Me too.
This is one subject with no lesson plan. I can’t put “kindness” in our daily planner. So we practice when we play. And when the winner loses, we remember the quote I had glued above my desk when I was much, much younger, a quote from Kipling’s poem If that I still recall with fondness. | http://purvabrown.com/three-lessons-monopoly/ |
Behavioral economics studies the effects of psychological, cognitive, emotional, cultural and social factors on the economic decisions of individuals and institutions and how those decisions vary from those implied by classical theory.
| https://www.spectroom.com/10254212-behavioral-economics |
The US Centers for Disease Control and Prevention (CDC) today announced a $77 million investment in efforts to track and fight antibiotic resistance.
The money will be distributed to public health departments in all 50 states and Puerto Rico to help them combat antibiotic resistant bacteria in food, healthcare facilities, and communities, with a particular focus on enhancing testing capabilities in the agency's regional antibiotic resistance labs. It will also help fund a new surveillance center for tuberculosis (TB).
The money comes from the CDC's Epidemiology and Laboratory Capacity for Infectious Diseases (ELC) Cooperative Agreement, which is awarding more than $200 million to help state and local health departments respond to infectious disease threats. ELC money will also support improved surveillance and response to outbreaks of foodborne infections caused by antibiotic-resistant bacteria.
Enhanced testing for Candida auris, gonorrhea
With the investments, the seven regional labs—part of the CDC's Antibiotic Resistance Lab Network (AR Lab Network)—will be able to expand antibiotic susceptibility testing for Candida auris, the multidrug-resistant fungal infection that has emerged in US healthcare facilities over the past year. C auris has shown resistance to all three classes of drugs typically used to treat Candida infections and can cause serious invasive infections, with mortality rates as high as 50%.
"We've asked this year that all of the seven regional laboratories be able to do Candida antibiotic susceptibility testing, and that means we do the kind of testing necessary to find the right drug for treatment, and to look for new types of resistance," Jean Patel, PhD, the CDC's science lead for antibiotic resistance, told CIDRAP News. Patel said the role of the regional labs in monitoring C auris is critical, since many smaller and clinical laboratories lack the capability to properly identify the fungus and perform susceptibility testing.
The seven regional labs will also be able to do more advanced surveillance, using whole-genome sequencing (WGS), to identify emerging strains of drug-resistant gonorrhea, another major concern for public health officials. Last month, World Health Organization officials said that resistance to the only remaining treatment for the sexually transmitted infection is on the rise, and that widespread treatment failure is on the horizon unless action is taken. The first US cases of highly resistant gonorrhea were identified in Hawaii in September.
"We really think whole-genome sequencing of Neisseria gonorrhea is going to be critical for early detection of new resistance," Patel said. "We anticipate seeing very drug-resistant forms of Neisseria gonorrhea in the United States, and we want to make sure when that occurs, we detect it early and respond early."
WGS is a form of testing that looks at the entire genetic blueprint of an organism. It can help epidemiologists track development of antibiotic resistance in gonorrhea, identify the mechanisms of resistance, understand how the infection spreads from one person to another, and trace the origins of an outbreak.
New TB surveillance center
WGS will also be a core capability of the new National TB Molecular Surveillance Center, based in Michigan. The lab will perform WGS on all TB isolates in the United States, which sees roughly 9,000 cases of TB a year. Patel said the sequencing will provide critical information on transmission dynamics and antibiotic resistance in Mycobacterium tuberculosis, and that information will help guide treatment and prevention efforts.
Although the number of TB cases in the United States has steadily declined since 1992, the disease remains a major global health problem, and Patel says the rise in multidrug-resistant TB in other parts of the world is a cause for concern.
"This increasing threat internationally is the reason why we're investing in TB testing here at home," Patel said.
| https://www.cidrap.umn.edu/antimicrobial-stewardship/cdc-invest-millions-enhanced-antibiotic-resistance-testing |
“Sorrow is better than fear. Fear is a journey, a terrible journey, but sorrow is, at least, an arriving.” So says Father Vincent in “Cry, the Beloved Country,” Alan Paton’s celebrated novel about South Africa.

Now, research published in Nature Communications suggests that knowing that something bad is going to happen is better than not knowing whether it will happen or not. Findings show that a small possibility of receiving a painful electric shock causes people more stress than knowing for sure that a shock was on the way.

Researchers from University College London (UCL), in the UK, enlisted 45 volunteers to play a computer game, which involved turning over rocks under which snakes might lurk. The aim was to guess whether or not there would be a snake. Turning over a rock with a snake underneath led to a small electric shock on the hand. As the participants became more familiar with the game, the chance of a particular rock harboring a snake changed, resulting in fluctuating levels of uncertainty.
Stress levels match levels of uncertainty

An elaborate computer model measured participants’ uncertainty that a snake would be hiding under any specific rock. To measure stress, the researchers looked at pupil dilation, perspiration and reports by participants.

The higher the levels of uncertainty, the findings say, the more stress people experienced. The most stressful moments were when subjects had a 50% chance of receiving a shock, while a 0% or 100% chance produced the least stress. People whose stress levels correlated closely with their uncertainty levels were better at guessing whether or not they would receive a shock, suggesting that stress may help us to judge how risky something is.

Lead author Archy de Berker comments: “It turns out that it’s much worse not knowing you are going to get a shock than knowing you definitely will or won’t. We saw exactly the same effects in our physiological measures: people sweat more, and their pupils get bigger when they are more uncertain.”

While many people will find the concept familiar, this is the first time for research to quantify the effect of uncertainty on stress. Coauthor Dr. Robb Rutledge notes that people who are applying for a job will normally be more relaxed if they know they either will or will not get the job. “The most stressful scenario,” he says, “is when you really don’t know. It’s the uncertainty that makes us anxious.”
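The article does not spell out how the computer model quantified uncertainty, but a standard way to measure the irreducible uncertainty of a yes/no outcome is the entropy of its probability, which peaks at exactly the 50% point the study highlights. A small illustrative sketch (not the authors' actual model):

```python
import math

def bernoulli_entropy(p):
    """Uncertainty, in bits, of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # outcome fully known: no uncertainty
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"P(shock) = {p:.2f} -> uncertainty = {bernoulli_entropy(p):.2f} bits")

# Uncertainty peaks at p = 0.5 and vanishes at p = 0 or p = 1,
# mirroring the reported stress pattern.
```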
Outcome Questionnaire: item sensitivity to change.
Although high levels of reliability are emphasized in the construction of many measures of psychological traits, tests that are intended to measure patient change following psychotherapy need to emphasize sensitivity to change as a central and primary property. This study proposes 2 criteria for evaluating the degree to which an item on a test is sensitive to change: (a) that an item changes in the theoretically proposed direction following an intervention and (b) that the change measured on an item is significantly greater in treated than in untreated individuals. Outcome Questionnaire (Lambert et al., 1996) items were subjected to item analysis by examining change rates in 284 untreated control participants and in 1,176 individuals undergoing psychotherapy. Results analyzed through multilevel or hierarchical linear modeling suggest the majority of items on this frequently used measure of psychotherapy outcome meet both criteria. Implications for test development and future research are discussed.
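As a rough illustration of the analysis strategy the abstract names (multilevel/hierarchical linear modeling), one could test both sensitivity-to-change criteria with a mixed-effects model. The sketch below uses statsmodels; the file and column names are invented for illustration and are not taken from the study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per person per occasion, with
# columns: score (item score), time (0 = intake, 1 = retest),
# treated (0 = untreated control, 1 = psychotherapy), pid (participant id).
df = pd.read_csv("oq_item_scores.csv")

# Random intercept per participant. The `time` coefficient tests whether
# scores change in the expected direction (criterion a); the
# `time:treated` interaction tests whether treated individuals change
# more than untreated controls (criterion b).
model = smf.mixedlm("score ~ time * treated", df, groups=df["pid"])
result = model.fit()
print(result.summary())
```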
This study employs a randomized controlled trial of an established intervention, Mindfulness Based Cognitive Therapy (MBCT) adapted for pregnancy, to examine effects on various aspects of maternal psychological stress during pregnancy (magnitude and trajectories of stress) and offspring brain systems integral to healthy and maladaptive emotion regulation. This study considers other potential influences on maternal stress and psychiatric symptomatology, and infant behavior and brain development. The study population is pregnant women aged 21-45, and their infants.
The study will involve an online screen of potentially eligible pregnant women. Women who are eligible after the online screen will be invited in for an in-person assessment, including cognitive testing and a diagnostic interview, to further determine eligibility. After the assessment, they will be informed of their eligibility status and, if applicable, randomized to a Mindfulness Based Cognitive Therapy (MBCT) group involving an 8-session group-based intervention, or to treatment as usual (TAU) during pregnancy followed by one mindfulness psychoeducation session postpartum.

Eligible participants will then be invited in for a study visit during which they will give blood, urine and saliva samples. Participants in the MBCT group will complete questionnaires prior to the 1st group session, after the 4th session and after the 8th/final session. Participants in the TAU group will complete the same questionnaires at equivalent time points. All participants will come in for an in-person session at 34 weeks GA, during which they will complete questionnaires and a brief clinical interview and provide blood, urine and saliva samples again.

Participants will then come in with their infant for the infant MRI scan within one month of giving birth. Study staff will collect a hair and saliva sample from the infant at this time. Participants will have a remote visit at 6 weeks postpartum, during which time they will complete questionnaires and a clinical interview. At 6 months postpartum, participants will return for their final visit, during which they will complete questionnaires and a clinical interview. Mothers and infants will also provide a hair sample at this time.
Masking Description: Given the use of a psychotherapeutic intervention involving participation in group sessions, participants and interventionists cannot be blinded to group assignment. The study is designed to minimize the extent to which study staff, aside from interventionists and the primary Study Coordinator(s), are aware of group assignment. The Study Coordinator(s) needs to know group assignment because they will inform participants of their group assignment and will communicate with participants regarding group attendance. The Study Coordinator(s) will be the main point of contact for participants throughout the study to reduce risk of participants revealing their group assignment to other study staff. Participants will be informed that only the Study Coordinator(s) and interventionists, if applicable, are aware of their group assignment. Therefore, in order to ensure the integrity of the research study, they should refrain from discussing their group assignment with other staff members.
MBCT adapted for pregnancy includes 8 sequential, weekly 2-hour group sessions co-led by two master's level therapists. Sessions include: 1) introducing new mindfulness skills through in-session practice, 2) reviewing mindfulness practices and troubleshooting barriers to practice, 3) reinforcing mindfulness skills through in-session practice and debriefing, 4) learning about how thoughts influence feelings and behaviors (not all sessions), 5) providing psychoeducational information to support skills, and 6) encouraging the establishment of social support. The intervention is focused on skill development through active engagement in mindfulness practices and exercises to increase awareness of thoughts, feelings and behavior in session, and assignment and review of daily home practices.
Maternal psychological stress will be a composite of the Perceived Stress Scale (PSS), Beck Anxiety Inventory (BAI), and Center for Epidemiologic Studies Depression Scale - Revised (CESD-R). This composite measure includes the degree to which situation's in one's life are appraised as stressful, perception of one's capacity to manage potentially stressful situations, as well as anxiety and depressive symptomatology. The magnitude and trajectory of maternal psychological stress will be examined.
Levels of IL-6 will be assayed from maternal blood samples; the outcome is the difference in magnitude of inflammation between groups at T5, after adjusting for inflammation prior to the intervention.
Cortisol levels obtained from participants' hair samples will be used to assess cumulative cortisol from pre- to post-intervention.
MRI scans with neonates will occur during natural sleep. Resting state functional connectivity will be the outcome of interest.
MRI scans with neonates will occur during natural sleep. Neonatal subcortical brain structure volume will be the outcome of interest.
The Prenatal Distress Questionnaire (PDQ) is a short measure designed to assess specific worries and concerns related to pregnancy.
Inclusion and exclusion criteria for pregnant women/mothers will be determined by a combination of the initial screen and the intake assessment. Eligible women must:

- have a history of a mood or anxiety disorder;
- not meet any of the exclusion criteria below.

We are using GA equivalent rather than postnatal age because infants born pre-term will not be scanned prior to term equivalent (37 weeks GA). Therefore, infants who are born preterm may be older in terms of postnatal age, but will be similar to infants born at term with regard to time since conception. The time since conception is more pertinent to our measures of brain development than postnatal age.

Exclusion criteria include medical complications following birth requiring ongoing hospitalization.

Therapists will be clinical psychology graduate students or psychiatry trainees (residents or fellows), supervised by the PI or another licensed psychologist, psychiatrist or clinical social worker with expertise in MBCT-PD. Two therapists will participate in/lead each group.

(1) If the therapists meet inclusion criteria, there are no other exclusion criteria.
Plan Description: We propose to share individual participant data in accord with the NIMH Data Archive Data Submission Agreement.
Time Frame: In accord with the NIMH Data Archive Data Submission Agreement policies we will share data on a semi-annual basis beginning 6-months after initiating data collection and through the end of the study.
Access Criteria: Data will be available through the NIMH Data Archive by following the procedures for Data Use Certification. | https://www.clinicaltrials.gov/ct2/show/NCT03809598?rank=80 |
Let’s be honest, Episode 11: Businesses and sustainability
Let's be honest is a Food Circle project with the aim of opening up the conversation about the challenges of being or becoming a member of the SC (Sustainability Club). This series will shine a light on different approaches to making life more sustainable, as well as the setbacks and difficulties that arise. Being kind and understanding, instead of critical, will hopefully encourage us to keep trying, instead of giving up when facing a setback or failure. This is made possible thanks to Sapient, the mother company of Food Circle, which every year offers internships to students from all around the world, creating a uniquely multicultural environment.
Let’s celebrate the achievements and give room for honesty and struggles!
Having an enterprise nowadays comes along with a large variety of responsibilities for the owners. Entering the market requires hard work and determination on the entrepreneurs’ behalf. It is also of paramount importance that the right resources are obtained: labour, time, capital, etc. “The problem is that it isn’t hard to start a business, but it’s very hard to make it work.”(Matthews, 2019)
Given all these challenges that entrepreneurs have to overcome while climbing the ladder of success, governments still encourage them to shift towards an eco-friendly approach in their business. That adds more responsibilities to their schedules. Is it, though, really so complicated to minimize the negative impact your business could have on the environment? Does it consume a significant amount of the firm's resources? The purpose of this article is to try to answer these questions and to present the challenges of implementing eco-friendly solutions for your business.
I find it relevant, to begin with, the definition of sustainability in a business context to be able to analyze the concept further.
“In business, sustainability refers to doing business without negatively impacting the environment, community, or society as a whole.” (Spiliakos, 2018)
That is equivalent to aiming to make a positive impact on the environment and the society in which your business operates. With their focus on both increasing revenues and being eco-friendly, businesses need to take a large variety of factors and solutions into consideration.
The first challenge - time
Even though it might be quite hard to explain and understand, time is familiar to everyone. According to Gallup's December 2-6 Lifestyle poll, around 48% of the Americans surveyed consider that they do not have enough time to do what they want. That means that time is a limited resource, and the lack of it can be an impediment, especially for those who need to get more tasks done.
When it comes to businesses, time is considered even more precious. Nevertheless, does the desire to protect the environment pose a threat to it?
The answer is usually yes. In order to shift towards a more sustainable business, efforts need to be made and decisions taken. Choosing the best alternatives for a business requires planning, budgeting and monitoring of results. Especially at the beginning, it is fair to assume that time is one of the essential resources to be invested. It is not possible, for example, to minimize the usage of natural resources and the emission of waste in a very short time. Everything takes time to plan in advance.
Nevertheless, once the business embraces the concept of sustainability, it will be less time consuming to identify solutions. All it takes is bringing sustainability among the core values of the business.
The second challenge – Employees and their habits
According to Ian Hutchinson, a Life & Work engagement strategist, “Employee engagement is an investment we make for the privilege of staying in business.” (Hutchinson, n.d.) In other words, the labour force has a say whenever change needs to happen, and resistance from employees can be an impediment. Whether the resistance has an organisational or a personal source, there are many ways to overcome it.
Keeping employees motivated with incentives, or helping them understand the necessity of change, are two possible solutions. Often it is precisely because employees are unaware of the reasoning behind the changes that resistance arises. Individuals have habits and personal beliefs, and change should begin with the workforce.
The third challenge – Money
One of the most obvious questions businesses face when they try to “go green” is whether they can afford it. The answer depends on many factors.
As discussed in previous articles, some eco-friendly products are more expensive than their regular alternatives. When it comes to businesses, however, sustainability can turn out to be a good investment. The beauty of environmental protection measures is that they can actually cut costs instead of increasing them, provided that research is done thoroughly before such solutions are implemented.
Take, for example, the initiative some restaurants have taken of not offering straws. This reduces the amount of plastic discharged into the environment and also saves the business some money. At the same time, becoming eco-friendly might lead to increased revenues for some businesses, since customers may be more inclined to buy something if they know they are protecting the environment at the same time.
Still, sometimes it might appear that “going green” is a roller coaster of endless spending.
In conclusion, we can be honest with ourselves and say that implementing sustainable solutions is not all milk and honey. Nevertheless, going green could prove to be a valuable investment for the business.
Author and Editor: Anda Codreanu, Henry Mitchell
References
Hutchinson, I. (n.d.). | https://www.foodcirclenl.org/post/let-s-be-honest-episode-11-businesses-and-sustainability
RGD advocates best practices for both graphic designers and their clients. Our primary focus is on fighting spec work and crowdsourcing of design; promoting accessible design best practices; and providing best practices in the areas of salaries and billing practices, pro bono work and internships. If you have questions about RGD's advocacy efforts or would like the Association's assistance responding to a request for spec, please email Executive Director, Hilary Ashworth, at
RGD writes to the Mayor and key officials to explain the negatives of spec work for both the design community and clients. News Talk 610 CKTB and Niagara This Week reported on the Association's position.
- 06/04/2016
The comprehensive report includes how design impacts productivity, turnover, employment and the financial performance of businesses. | https://www.rgd.ca/about/advocacy/main/18/news_post/2016-04.php |
Critical Thinking in Social Sciences: 6 TIPS

The social sciences are about thinking critically. Life is about thinking critically. Many issues we cover in the social sciences can be confusing and controversial, and they garner a wide variety of opinions, which people try to support or discredit with facts. Critical thinking is the process of investigating an issue while simultaneously avoiding one's preconceptions.

Why is critical thinking important?
1. You must be able to think critically to be well-educated:
- higher-order thinking (exploring ideas deeply)
- working in jobs that don't yet exist (new challenges)
2. We live in the information age:
- you need to determine what is "junk information" (information that is not correct, beneficial, or useful)
3. It helps with practical problems related to the social sciences:
- e.g. coping with stress, dealing with relationships, understanding a person's motives

If you enjoy mysteries and complexities, you have one of the most important qualities of a critical thinker! Sherlock Holmes doesn't jump to conclusions.

1. Critical thinkers are flexible
There is no "my side" or "your side". We must always remain open and flexible. The fields of social science are constantly changing and adapting with the world around us. What you learn in this class may be discarded in ten years, or even in a few months. What's important is that we always pursue truth aggressively and that we don't allow our own preconceptions to blind us to real knowledge.

2. Critical thinkers can identify biases and assumptions (including their own!)
"All people on welfare are lazy and cheating the government..." Have you ever heard someone say something like this? The speaker may work hard, resent rising taxes, or know a person who has abused the system. Important: biases don't make an opinion wrong, but we need to be aware of the possibility that they can cause people to avoid evidence that contradicts their opinions.

3. Critical thinkers are skeptical
As a kid you may have believed everything you were told: "vegetables will make you grow up big and strong." But as you get older: "yeah right!", "prove it!" Mr. Walker's mother told him all the vitamins were trapped at the bottom of his bowl of soup! We must question all information and ideas, not just those that go against our own, even when they come from someone we like and respect (like a teacher or our parents).

4. Critical thinkers separate facts from opinions
As social scientists we require evidence when making a decision. We need to be objective and remain emotionally detached when dealing with controversial issues.

5. Critical thinkers don't oversimplify
Social scientists view the world as a complicated place of cause and effect. We need to look beyond simple and easy answers when pursuing truth. For example: why would building a wall across the Mexican border not solve all of the U.S./Mexico immigration issues?

6. Critical thinkers use logical inference
We use logical inference when we draw logical conclusions from our understanding of evidence. For example: if your friend tells you that he is going to bed at 9:00pm, when his usual bedtime is midnight, what logical inferences could you draw?
by Jermaine Walker, 11 January 2017 | https://prezi.com/aqnddmry9-dz/critical-thinking-in-social-sciences/
Recently, Rob co-authored three popularising papers related to our research topics. In the latest issue of Živa, a popularising journal of the Czech Academy of Sciences, there are two papers written together with Jana Jersáková. The main paper focuses on the mutual relationships of flowering plants and their pollinators, including their evolution. The second paper describes and discusses pollination syndromes. A week ago, a brief description of our projects on the dynamics of biodiversity in the Kruger National Park in relation to water stress and herbivores was published in Botanika, a popularising journal of the Institute of Botany CAS. All three papers are freely accessible; click the links above. Although they are in Czech, they include some nice pictures. | http://www.insect-communities.cz/popularising-articles-on-our-work/
Ownership of mobile phones, especially smartphones, is spreading rapidly across the globe.
Yet there are still many people in emerging economies who do not own a mobile phone, or who share one with others. According to the Pew Research Center, in 2019 mobile divides were most pronounced in Venezuela, India, and the Philippines, countries where three in ten adults do not own a mobile phone.
Of course, when a crisis strikes, such digital technologies can make all the difference. As Samira Sadeque points out in “COVID-19: The Digital Divide Grows Wider Amid Global Lockdown”: “The digital divide has become more pronounced than ever amid the global coronavirus lockdown, but experts are concerned that in the current circumstances this divide, where over 46 per cent of the world’s population remain without technology or internet access, could grow wider — particularly among women.”
Health care is one of the arenas where digital technologies have taken on a vital role. There is a growing need for high capacity health-care systems and related internet-based services such as telemedicine. Unequal access to technologies, as well as issues around affordability, will exclude people and might prevent them from accessing medical treatment as well as trusted online information about reducing exposure to viruses such as COVID-19.
Today, there is more and more media coverage of the global digital divide and of how the experience of the coronavirus ought to contribute to the creation of a more egalitarian world. In an editorial last month, the Financial Times commented:
“Some countries, such as Estonia, have long championed digital sovereignty, arguing that the ability to operate online is an essential part of modern life. Post-pandemic, this understanding should be more widely spread. But this will depend on upgrading digital infrastructure so that it serves all citizens … Governments should ensure they expand digital access to [include] those who only make limited use of basic services. That may require them to review pricing structures that currently exclude the most vulnerable, who could gain the most from access to digital resources.”
In Canada, 76 per cent of the population owns a smartphone, according to Statistics Canada’s 2016 data. However, an annual wireless price comparison study commissioned by the federal government since 2007 has consistently shown that Canada’s mobile phone rates rank among the highest when compared to other G7 countries.
“Prices in France, the U.K. and Italy are noticeably lower than most other countries,” according to the 2019 Price Comparison Study of Telecommunications Services in Canada and with Foreign Jurisdictions. The study showed, for example, that one gigabyte of data costs an average $64.80 in Canada, compared to $22.89 in Germany, $35.56 in the U.K., $50.17 in the U.S., and $57.82 in Japan.
As a result, Canadian households — particularly those in the low-income level — choose to subscribe either to mobile services only or to landline services only. In 2016, 32.5 per cent of Canadian households subscribed to mobile services only and 11.4 per cent of households subscribed to landline services only, according to Statistics Canada.
Both the digital divide and social inclusion need to be addressed by governments and by civil society organizations. The right to communicate is never more urgent than when lives and livelihoods are at stake because access to trustworthy information and news is blocked.
Marginalized and vulnerable communities, especially women in the Global South, deserve preferential treatment.
Philip Lee is WACC general secretary and editor of its international journal Media Development. His edited publications include The Democratization of Communication (1995), Many Voices, One Vision: The Right to Communicate in Practice (2004); Communicating Peace: Entertaining Angels Unawares (2008); and Public Memory, Public Media, and the Politics of Justice (ed. with Pradip N. Thomas) (2012). WACC Global is an international NGO that promotes communication as a basic human right, essential to people’s dignity and community. The article originally appeared on the WACC blog. | https://rabble.ca/technology/upgrading-digital-infrastructure-serve-all-people-everywhere/ |
The Municipality of North Cowichan (population 30,000) is located in the beautiful Cowichan Valley on Southern Vancouver Island, between Nanaimo and Victoria. Our communities of Chemainus, Crofton, Maple Bay, and the South End including University Village, are home to a multitude of artistic, cultural and outdoor recreational activities. The Municipality provides a stable and varied work environment, competitive pay and benefits.
We are inviting applications from candidates with the proven skills, qualifications and abilities for the position of Assistant Fire Chief of inspections and investigations. If you are self-motivated, looking to take on a new challenge where you can make a difference, enjoy both responsibility and accountability, and are ready to join one of British Columbia’s most inclusive and environmentally conscious municipalities, we look forward to receiving your application!
Reporting directly to the Manager, Fire and Bylaw Services (Fire Chief), the Assistant Fire Chief provides support with the strategic direction, leadership, and management of North Cowichan’s Fire Services. This position will be responsible for fire investigations and fire inspections; however, this may change in the future to provide for developmental opportunities. As an Assistant Fire Chief, you will fulfill requirements as a Local Assistant to the Fire Commissioner.
The ideal candidate will possess relevant post-secondary education augmented by NFPA Level 2 certifications for Inspector and Investigator, along with several years of experience as a Fire Officer in a leadership role. You possess management qualifications that will clearly demonstrate your ability to manage the strategic leadership, administrative and operational functions of a complex organization, as well as in building effective relationships with all stakeholders and the public. You are an effective communicator and a leader of innovation and change, and your values align with the Municipality’s core values of fiscal accountability, commitment, inclusion, collaboration, continuous improvement, environmental stewardship, sustainability and service excellence.
We require a candidate that can develop an inspection program from the ground up utilizing all the latest technology and resources afforded by the Municipality, along with the experience of conducting fire investigations and inspections. You must confidently organize the fire ground and related emergencies using the incident command model, incorporating strategies and tactics to ensure life safety of all first responders and the public. The successful candidate will possess practical operational experience as an Incident Commander at a wide range of events.
You have exceptional communications abilities, including presentation and report writing skills, and the ability to communicate complex information to a diverse group of people in an easy-to-understand manner. You possess an open, team-oriented leadership style and welcome the challenge of seeking continuous improvements in organizational efficiency. You work collaboratively with senior management, Council and staff across the Municipality. You are respected for your ability to gain commitment on department priorities, you bring a distinguished customer service philosophy and are recognized for your positive corporate contribution.
The ideal candidate for this position will maintain a successful police information check, including a vulnerable sector check; agree to share Chief responsibilities after regular work hours; and, possess and maintain a valid Class 3 B.C. driver’s license with an acceptable driving record.
A competitive salary and comprehensive benefit package is offered. This position is excluded from union membership.
Candidates being considered will be required to undergo a comprehensive evaluation of skills, qualifications and abilities.
To Apply:
Visit the Municipality of North Cowichan Career Portal at www.northcowichan.ca/jobs to apply for this position.
Please note that all candidates must apply via the Career Portal; we do not accept resumes via email or hard copy.
Application Deadline: | https://vancouverjobs.me/job-postings/assistant-fire-chief-municipality-of-north-cowichan-duncan-bc/ |
scatter plot
a graph with points plotted to show a possible relationship between two sets of data.
slope intercept form
y=mx+b
point slope form
y-y1 = m(x-x1), where m is the slope and (x1,y1) is the point the line is passing through.
midpoint formula
(x₁+x₂)/2, (y₁+y₂)/2
slope formula
(y₂-y₁)/(x₂-x₁)
standard form
Ax + By = C, where A, B, and C are not decimals or fractions, A and B are not both zero, and A is not negative
function notation
To write a rule in function notation, you use the symbol f(x) in place of y.
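A quick worked check of these formulas, using two points invented for illustration, (1, 2) and (5, 10):

```latex
% slope
m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{10 - 2}{5 - 1} = 2
% point-slope form with (x_1, y_1) = (1, 2), simplified to slope-intercept form
y - 2 = 2(x - 1) \implies y = 2x
% midpoint
\left( \frac{1 + 5}{2}, \frac{2 + 10}{2} \right) = (3, 6)
```
| https://quizlet.com/6116787/algebra-2-trig-vocab-flash-cards/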
You want a position in robotics. What a fantastic idea! Robotics is a field with a promising future and a wealth of untapped potential. But how do you find employment in robotics? What should you study? Which academic backgrounds best prepare you for a career?
I’ll try to answer those questions for you in this post. People with many different majors work in the field of robotics. Although we will focus on technical positions, there are several degrees that can prepare you to become a robotics developer. I’ll concentrate on the most common ones and highlight the useful skills they teach for the various fields of robotics. The degrees are not listed in any particular order, because they prepare you for many equally important and enjoyable areas of robotics. Let’s begin, then!
1. Mechanical Engineering for Robotics
The mechanical design serves as the cornerstone for all robot development. A degree in mechanical engineering is the best preparation for it. You will gain knowledge of mechanical system design and manufacturing processes. You will learn about various materials and how to evaluate the system’s characteristics. All of this will assist you in creating the ideal robot design by determining the proper robot kinematics, doing structural analyses, and much more.
But this isn’t the end of it. Control theory, which focuses on how to command the robot’s motors so that it moves in the desired manner, is one of the most crucial subjects in robot design. Because the control of a robot is closely tied to its mechanical structure, mechanical engineering is particularly well suited to this line of work. Control theory specializations are common in mechanical engineering degrees, and there are even separate degrees dedicated to it, typically offered by the mechanical engineering faculty.
2. Electrical Engineering for Robotics
The robot’s movement is actually driven by its electronics, which are in charge of establishing communication, managing power, and controlling the robot’s motors. In an electrical engineering degree you will learn how to develop electronics by designing electrical circuits and planning the layout of printed circuit boards (PCBs).
These, however, are typically just individual steps within a long development process. A robot has a large number of sensors, both for sensing the environment and for monitoring and managing the robot’s internal state. The tough job of the electrical engineer is to analyze and integrate these sensors, and this requires close cooperation with the other disciplines. Mechanical engineers are interested in the sensors found inside motors, such as current or position readings, for their control algorithms, while computer scientists care about the sensors used for perception and machine learning (e.g. cameras). Generally speaking, electrical engineering often works at the junction of other fields, so collaboration with them is crucial.
Finally, embedded programming employs a large number of electrical engineers. The software needed to combine the sensors and actuate the motors is created by embedded software developers. As a result, they require a thorough knowledge of the hardware, which is ideal for electrical engineers.
3. Computer Science for Robotics
The robot runs a great deal of software, so many computer scientists are needed to create it. Motion planning algorithms are required to program the robot to move in the desired manner. Perception is crucial to ensure that the robot is aware of its surroundings, so you must develop the computer vision algorithms that analyze the data from its cameras.
A user interface that makes it simple to control the robot is also necessary. In all larger software projects, the software infrastructure needs continuous improvement; this facilitates development and ensures that everything runs smoothly. Software architects are crucial for organizing the various software modules and ensuring that everything works well together.
The aforementioned are only a few instances of the various jobs that computer scientists are involved in when developing robots. A computer science major will teach you all about the various components of quality software and give you a thorough understanding of how the execution process operates. Even if you don’t have a lot of experience in the field of robotics yet, it will provide you with all the fundamental tools you need to work on the different difficulties in that field. Simply pick a discipline that interests you, and the rest will come to you as you go.
4. Artificial Intelligence for Robotics
It is undeniable that artificial intelligence is a hot topic right now, and there is a reason for this: these algorithms are what actually create the robot’s intelligence. AI is also probably where most of the innovation in robotics happens.
In a typical AI course, you first master a significant amount of algebra- and probability-based mathematics before moving on to the fundamentals of machine learning algorithms. You learn how to create machine learning models, how to train them, and how the performance of these models is influenced by the data you feed them. Deep learning, a subset of AI, focuses on multi-layered machine learning models with many well-known applications, such as object detection and tracking. Reinforcement learning is a distinct area of artificial intelligence with a significant impact on robotics. The basic idea of this approach is to train the model through trial and error, letting it learn from its mistakes. This aims to emulate how people learn, and many see it as a promising strategy for solving challenging robotics problems.
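To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning in Python. It is purely illustrative: the toy two-state environment, reward values and hyperparameters are all invented for the example, not taken from any particular robotics course or system.

```python
import random

# Toy environment: 2 states, 2 actions; reaching state 1 yields a reward.
N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Return (next_state, reward). Action 1 moves toward the goal state."""
    next_state = 1 if action == 1 else 0
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-value table

for episode in range(500):
    state = 0
    for _ in range(10):
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: adjust the estimate toward the observed outcome
        q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # action 1 should end up with the higher value in both states
```

Real robotics problems replace the toy table with function approximators and a physics simulator, but the learn-from-mistakes loop is the same.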
AI is crucial for human-robot interactions as well. It is employed in both voice recognition and the analysis of human face expressions. All of this makes the interaction more natural and aids the robot in understanding what the user intends.
There are numerous unsolved challenges in the area of artificial intelligence in robotics. More and more universities are establishing separate degrees in AI, and a specialization within broader degrees such as computer science is also common.
5. Mathematics/Physics for Robotics
A degree in mathematics or physics can open many doors. Graduates with a solid mathematical background and experience in approaching and solving challenging problems are in high demand, even when they lack domain-specific knowledge.
Graduates in mathematics and physics who are interested in programming can work on any of the algorithms we already covered for the computer science degree (motion planning, computer vision, etc.). Many go on to work on machine learning problems, where a solid mathematical foundation becomes essential once you start studying the techniques in depth.
If mathematics or physics is your passion, you will certainly be able to enter the robotics industry after your studies.
Degrees in Robotics
Universities offering dedicated robotics degrees are popping up everywhere. That is a fantastic chance to learn about the industry before beginning your career, but it is crucial to pay close attention to the design and specific focus of these degrees.
Some programs place a greater emphasis on mechanical engineering, and their students often study advanced control theory for robotics. Others concentrate on the (artificial) intelligence component, including machine learning, perception, and so on.
A degree in robotics can be a terrific way to get started on your future profession. As different robotics degrees have different emphases, make sure the courses offered match your interests.
Conclusion
This list demonstrates the diversity of degrees that can lead to a career in robotics. Which one should you ultimately choose? I have tried to describe the slightly different emphasis each of these degrees places on the various branches of robotics, so ultimately you should decide which field interests you most.
How do you go about that? By putting a bunch of different things to the test! If you believe you might be interested in a topic, investigate it. Look for projects to work on at home (you can draw some inspiration from the article on projects to learn robotics). Start with tutorials on a topic and decide whether you want to learn more about it; the tutorials overview, for example, introduces various robotics themes. If you are a newbie, you should read my comprehensive post on Where to Begin With Robotics.
Nothing is certain in the end. Even though the descriptions above give some indication of the directions a particular degree path can take you, there are just as many counterexamples. Robotics is such an interdisciplinary field that it is easy to move between disciplines.
Another consideration, just as significant as your choice of degree, is the setting in which you study. A university that lets its students gain experience in a good robotics lab is extremely valuable. If you have that opportunity, use it!
No matter what degree you are currently pursuing, if you decide to work in robotics in the future, you will inevitably end up doing it, and you will love it! I swear.
| https://www.awerobotics.com/home/where-to-begin-with-robotics/top-courses-degrees-to-get-into-the-field-of-robotics/
Kaizen Private Equity Fund
Kaizen Private Equity will invest in opportunities along the education market value chain in India: from school management companies in the core education segment, to vocational training providers in the parallel education segment, to education-specific technology providers in the ancillary education segment. Besides direct development effects, such as job creation and increased tax revenue, investing in the Indian education sector is expected to result in substantial indirect development effects, such as raising literacy levels and improving the employability of India’s youth.
Kaizen Private Equity is the first private equity fund with a sole focus on the Indian education sector. It is advised by Kaizen Management Advisors, based in Mumbai. The management team consists of both private equity professionals and education sector specialists, allowing them to add substantial value to investee companies with their knowledge of the education sector. | https://sifem.ch/investments/portfolio/detail/kaizen-private-equity-fund |
A drought is caused by prolonged dry conditions that threaten to deplete a country's water resources. It results from a lack of rain, snow, or sleet, which affects the ability of water reservoirs to replenish. A drought is effectively a prolonged period of hot, dry weather: moisture evaporates from the soil, rivers and streams dry up, and plants and livestock begin to die. Droughts are a natural phenomenon, but human activity, such as water misuse and mismanagement, deepens their impact, as do the greenhouse gas emissions that cause global warming.
Today, Cape Town's six major water dams are full. This, however, was also the case in 2014. Then came three consecutive years of little to no winter rain, unchanged water demand and use, and a failure to heed advice about preparing for an impending disaster, and the inevitable happened: Day Zero knocked at the door. There is never a time to be complacent about water supply, especially in South Africa. This is what the Eastern Cape, Limpopo and the Northern Cape are learning.
After a drought, an equally prolonged period of rainfall is required for the soil to soak up and absorb enough water to become productive and fruitful again. The Western Cape is currently enjoying regular and sufficient winter rainfall, dams are overflowing and the soil is moist. The Western Cape is the prime example that it takes enough regular rain to end a drought.
What causes droughts?
Droughts do not happen overnight; only a prolonged dry and hot season qualifies as a drought. High temperatures affect ocean temperatures, which generally dictate inland weather patterns, so drastic changes in oceanic temperatures directly correlate with weather changes inland. The drier the land, the less moisture evaporates into the atmosphere, and the less likely it is that evaporation comes back as rain.
Drought, of course, is a natural phenomenon, but things like greenhouse gas emissions are affecting the likelihood of drought and its intensity. This is climate change and global warming. Rising temperatures make dry regions even drier. Climate change also shifts weather patterns from their typical paths, so that the expectation of winter rains in the Western Cape and summer rains in the rest of the country is no longer guaranteed.
Humanity's role in deepening the drought and water crisis is very clear. Population growth and intensive agricultural water use contribute to the imbalance between water supply and demand. Studies in countries like the United States have shown that between 1960 and 2010, water consumption increased by 25%. That means more pumping of groundwater and more extraction from rivers and reservoirs, matched with lower-than-expected rainfall, which inevitably results in water stress in many areas. Irrigation and hydroelectric dams have also dried up lakes, rivers and downstream water sources.
Deforestation and extensive farming can destroy the quality of land and its ability to absorb and retain water, causing less water to feed into the water cycle and raising the risk of drought.
How do we monitor droughts? One way is through weather satellites in space. For example, satellite data has been used to develop a tool that alerts farmers to upcoming flash droughts. Such data can be used to estimate evapotranspiration, a measure of how much water is being transferred from the land to the atmosphere through the soil and plants.
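As a rough illustration of the kind of calculation involved, the sketch below estimates reference evapotranspiration with the Hargreaves equation, a common temperature-based approximation. The input values are invented for the example; real drought-monitoring tools combine satellite observations with far more sophisticated models.

```python
import math

def hargreaves_et0(t_min_c, t_max_c, ra_mm_per_day):
    """Reference evapotranspiration (mm/day) via the Hargreaves equation.

    t_min_c, t_max_c: daily minimum/maximum air temperature (deg C)
    ra_mm_per_day: extraterrestrial radiation in mm/day of evaporation
                   equivalent (it depends on latitude and day of year)
    """
    t_mean = (t_min_c + t_max_c) / 2.0
    return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max_c - t_min_c)

# Invented values for a hot, dry summer day in the interior
print(round(hargreaves_et0(t_min_c=18.0, t_max_c=34.0, ra_mm_per_day=16.5), 1))
# Hotter days with wider temperature ranges push evapotranspiration up,
# drying out the soil faster between rainfall events.
```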
Solutions
The natural phenomenon of drought is beyond our control, but the human actions that deepen it are within our reach. For starters, we can use water wisely and more efficiently, helping us stretch the litres of water we have and prepare for no-rain days.
The most important thing for South Africa is to fix our aging and crumbling water infrastructure. We continue to lose drinkable water to aging infrastructure, faulty meters, crumbling pipes and leaky water mains; the Water Research Commission has estimated this loss at above 25%.
Secondly, businesses can be smart and use water- and energy-efficient technologies. Farmers can plant drought-tolerant crops and apply water-efficient irrigation techniques. The biggest consumer of water, of course, is agriculture, which is said to withdraw 70% of any nation's water supply. This means water efficiency in the agricultural sector holds the key to sustainable water for everyone. There need to be drastic improvements in irrigation techniques, moving from flood irrigation to drip irrigation, and better irrigation scheduling for the different stages of crops.
Much industrial activity can use recycled water, helping us protect drinkable, fresh water from wastage. Recycled water comes from sinks, shower drains and washing machines and could be reclaimed and reused if we had properly functioning treatment works.
However, with so many water treatment works in a dysfunctional state in many provinces, treated water that could be used for cooling at power stations and oil refineries, for watering public parks and golf courses, and for replenishing groundwater supplies is not sufficiently available.
Thirdly, we need strong water governance, which is largely the responsibility of local government. Strong water governance at the local level is critical if water management is to succeed: local governments must be able to manage water supply and demand in order to protect water resources and make the available litres stretch.
At the current level of water management, which is loose and poorly regulated, South Africa is said to face a water deficit of 17% by 2030. It is therefore important to engage all stakeholders, from the public and private sectors to schools, businesses, communities, scientists and other water experts, so that water consciousness becomes our daily lived reality and water conservation becomes everyone's responsibility.
South Africa has low rainfall and low per capita water availability compared to other countries, with an average annual rainfall of 500mm. Meanwhile, water demand has been increasing at a high rate, driven by the main economic sectors. In South Africa, agriculture accounts for 63% of water consumption, followed by municipal use at 26% and industrial use at 11%.
We therefore need effective water management, but as Vogel et al. (2000) put it: "Effective drought management strategies have been impeded by coordination problems and a lack of ability of government."
We need our water authorities to be much stronger on water governance. Otherwise, when summer rains finally come, we will not store and keep as much water as we should.
Yonela Diko is spokesperson to Minister of Human Settlements, Water and Sanitation Lindiwe Sisulu. You can follow him on Twitter on @yonela_diko. | https://ewn.co.za/2020/09/28/yonela-diko-the-creeping-disaster-how-to-prepare-for-country-s-droughts |
Genetic resources are global assets of inestimable value to humankind, holding the key to increasing food security. The loss of variation in crops due to the modernization of agriculture has been described as genetic erosion. This chapter discusses the current status of genetic diversity and erosion in spice crops. Human intervention in the natural habitats of wild and related species in centers of diversity, together with diseases and pests, plays an important role in the loss of older species and varieties. This is further complicated by climate change and the reproductive behavior of crop species. The genetic erosion of cultivated diversity is reflected in a modernization bottleneck in the diversity levels that occurred during the history of the crop. Two stages in this bottleneck are recognized: the initial replacement of landraces by modern cultivars, and further trends in diversity as a consequence of modern breeding practices. Population growth, deforestation, soil erosion, changing land use, and climate factors are major threats to the enormous diversity in cultivated plants and to the existing biodiversity of the region. Urbanization is increasing, and agriculture is changing from subsistence-based to highly market-driven farming. Although these changes have to a certain extent increased the incomes of local populations, not all of them are for the good. In particular, biodiversity is declining as a result of some of these changes. It is imperative to conserve the vanishing plant genetic resources and to understand better the linkages between the agricultural and economic systems that affect diversity and sustainable production. Genetic erosion may occur at three levels of integration: crop, variety, and allele. Thus, genetic erosion is reflected in the reduction of allelic richness in conjunction with events at the variety level. Immediate efforts are required to understand and implement effective multiplication and conservation strategies, using both conventional and modern technologies, to prevent the loss of these valuable genetic resources and preserve them for posterity. An important aspect is also to make genetic resource conservation an important part of our social life. | https://rd.springer.com/chapter/10.1007/978-3-319-25637-5_9
- The Wallace's giant bee has been found in the wild 38 years since its last sighting.
- It's the largest bee in the world, at about the size of a human thumb. The bee boasts a wingspan of 2.5 inches and has two large pincers.
- In January, a group of scientists and nature photographers successfully located the bee in Indonesia.
- The species is listed as vulnerable to extinction by the International Union for Conservation of Nature.
Wallace's giant bee had been flying under the radar since its last known sighting in 1981. Until a few months ago.
At 1.5 inches long, the bee is the world's largest. It has a wingspan of 2.5 inches and enormous jaws, making it a fearsome sight.
But Wallace's giant bee — officially called Megachile pluto — had been rather camera shy for the last 38 years. The scientific community feared it had disappeared altogether, but an expedition in the Indonesian jungle caught the elusive insect on camera in January.
"I dreamed of seeing this bee—the sound of its wings, its nest, a living individual," natural history photographer Clay Bolt told Business Insider. "When we achieved our goal, we simply couldn't believe it."
The bee was on the 'most wanted' list
Four times larger than a European honey bee, Wallace's giant bee is about the size of a human thumb.
It boasts a set of intimidating mandibles, but these jaws aren't for eating other bugs: the bee is vegetarian, preferring nectar and pollen. The giant bees use their mandibles to scrape sticky resin off of trees. They then use that resin to construct burrows inside termite nests, and female bees use those shelters to raise their babies.
The bee was last seen alive by entomologist Adam Messer in 1981 on North Moluccas, part of an Indonesian archipelago west of Papua New Guinea. (Another French scientist named Roch Desmier de Chenon collected a specimen seven years later, but failed to photograph or document the animal, according to National Geographic.)
Before Messer's sighting, the previous documented encounter happened more than 100 years earlier, when scientist Alfred Russel Wallace discovered and named the insect on the Indonesian island of Bacan in 1859.
That elusive history landed Wallace's giant bee on Global Wildlife Conservation’s list of the planet's top 25 "most wanted" species. The list is part of the environmental organization's Search for Lost Species program, which partners with locals around the world to find and protect species that have not been seen in decades. The full list includes 1,200 missing animals and plants.
Now, Wallace's giant bee is no longer on there.
'An animal that had only lived in my imagination'
Bolt said his fascination with the bee started after his colleague Eli Wyman, a biologist from Princeton University, showed him a rare specimen of the bee at the American Museum of Natural History. Bolt and Wyman then spent years researching the most promising habitat in which to search for the insect.
In January, Bolt and Wyman joined a team of biologists and photographers who traveled to the forests of North Moluccas in an attempt to find and photograph the bee alive in the wild.
The group spent five days in hot, humid conditions, braving occasional torrential downpours as they searched for termite nests suspended from tree trunks. They examined dozens of termite mounds, but the expedition kept coming up empty.
Then on the last day, the team struck gold.
In a termite nest 8 feet off the ground, they found a single female giant bee.
"It was such a humbling experience, and to have the distinct honor of being the first person to photograph this creature in the wild is something that I'll likely never top," Bolt said.
Bolt and his colleagues placed the female in a small rectangular flight box in order to photograph and observe her, then released the bee back into the nest.
According to National Geographic, Bolt and Wyman said the sound of the bee's 2.5-inch wings fluttering was striking: a “deep, slow thrum that you could almost feel as well as hear,” Bolt said.
Wyman told National Geographic that the finding was “an incredible, tangible experience from an animal that had only lived in my imagination for years.”
Why the Wallace's giant bee is under threat
The bee’s fearsome size and rare status make it especially interesting to wildlife collectors and traders.
In fact, while Bolt, Wyman, and their colleagues were preparing for their expedition to Indonesia last year, an Indonesian seller sold two dead specimens of Wallace's giant bee on eBay for thousands of dollars.
Though the species is listed as vulnerable to extinction by the International Union for Conservation of Nature, there are no legal protections in place regulating how the bee is sold and traded online or otherwise.
“We know that putting the news out about this rediscovery could seem like a big risk given the demand, but the reality is that unscrupulous collectors already know that the bee is out there,” Robin Moore, who leads the Lost Species program, said in a press release.
But overzealous collectors aren't the bees' only threats.
Because the giant bees rely on termite nests in forests, they are particularly vulnerable to deforestation and habitat loss. Between 2001 and 2017, Indonesia lost 15% of its tree cover as forests were destroyed to make room for agricultural land, according to Global Forest Watch.
Read More: Meet the first species to go extinct because of climate change — it was tiny, cute, and fluffy
The members of the latest expedition are hoping to use their rediscovery of the bee to push the Indonesian government to institute conservation measures that would protect its habitat.
"I hope that Wallace's giant bee's newfound fame will lead to its protection by the Indonesian government, scientific institutions, and the local communities in which it is found," Bolt said."It should be a symbol, like the beautiful standard wing bird of paradise, of the life that thrives in this miraculous corner of the world."
Scientists hope it might even stave off collectors, too.
"By making the bee a world-famous flagship for conservation, we are confident that the species has a brighter future than if we just let them quietly be collected into oblivion,” Moore said. | https://www.insider.com/worlds-largest-bee-found-38-years-after-last-sighting-2019-3 |
Jurisdictional approaches have become popular in international forums as promising strategies to reduce greenhouse gas emissions caused by deforestation and to guarantee sustainable commodity supply. Yet, despite their growing popularity, there is as yet little consensus on how such approaches should move forward in specific jurisdictions. In this paper we examine two contrasting municipal-level case studies in the eastern Amazonian state of Pará where jurisdiction-wide efforts are underway to reduce deforestation. By developing detailed forest governance intervention timelines since 2005, conducting semi-structured interviews with key informants, analyzing municipal deforestation trends, and extensively examining project reports, governmental documents and other secondary sources, this paper performs two main analyses. First, it characterizes the processes in each municipality by linking context and forest governance intervention timelines to deforestation trends. Second, it provides a systematic comparison of processes based on (1) the role of the government, (2) multi-stakeholder participation and inclusiveness, (3) adaptive management, (4) horizontal and vertical coordination, and (5) alignment of public and private (supply-chain) initiatives. In so doing, this article answers some of the imperative questions on how to implement and improve jurisdictional approaches aimed at halting deforestation in the tropics.
Introduction
Progress toward more sustainable land use in ways that contribute to economic development and social equity, has long been a priority in tropical landscapes (Jong et al., 2010). Yet, lately, much of the sustainability debate has been dominated by the urgent need to reduce deforestation given the importance of standing forests and other ‘natural climate solutions’ in helping mitigate catastrophic climate change (Griscom et al., 2017; IPCC, 2019). Policy perspectives to tackle Amazonian deforestation have multiple origins linked to wider conservation and development agendas. While conservationists have argued in favor of expanding protected areas or securing indigenous and local community tenure rights to deter commercial agricultural expansion and to preserve mature forests exposed to encroachment (Nepstad et al., 2006; Soares-Filho et al., 2010), developmentalists have favored incentives for farmers to improve their production practices while complying with land use regulations (Börner et al., 2014; Cunha et al., 2016). In addition, growing demand for agricultural commodities, along with growing competitiveness of agriculture in frontier lands, calls for sustainable interventions by supply chains (Gibbs et al., 2016; Lambin et al., 2018) to complement state regulations and policies for forest conservation.
The Brazilian Amazon is a key landscape where these multiple approaches have been tested, making the country a laboratory of governance innovations. Through many ambitious policies, three levels of Brazilian governments (federal, state, and municipal), the private sector, and civil society organizations were able to engage in reducing Amazonian deforestation in an unprecedented way. Federal policies like the Plan of Action for the Prevention Control of Deforestation in the Amazon in 2004, and state-level initiatives like Pará’s Green Municipality Program in 2011 (Whately and Campanili, 2013), to mention only a few examples, were major developments, while private sector arrangements such as the Soy Moratorium in 2006 and the Cattle Agreement in 2009 gave a further impetus to tackle deforestation (Gibbs et al., 2015; Gibbs et al., 2016). Together, these efforts helped reduce Amazonian deforestation by more than 70% since it peaked in 2004 (Godar et al., 2014; Assunção et al., 2015) making Brazil the world’s largest contributor to reducing emissions during this period (Seymour and Busch, 2016).
However, these efforts have failed to contain persistent deforestation and have become less effective over time (Schielein and Börner, 2018; Seymour and Harris, 2019). In 2013, deforestation rates slowly started to increase again, and there is a resurgence of concern that the Amazon is close to reaching a “tipping point”, particularly in its eastern and southern portions (Lovejoy and Nobre, 2019). For some authors, the steady rise in deforestation is partly linked to the ease with which actors involved in soy, beef and timber production can circumvent government regulations and commodity agreements (Carvalho et al., 2019) and to a lack of the incentives needed to make forest conservation politically sustainable (Nepstad et al., 2014). In this context, the concept of jurisdictional approaches emerged as a way to tackle deforestation more holistically (Nepstad et al., 2013; TFA, 2017; Boyd et al., 2018). In global debates, jurisdictional approaches emerged from the recognition that international efforts, such as those framed under REDD+ and/or sustainable commodity supply-chain initiatives, were unable to overcome institutional barriers at the landscape level, and thus far failed to achieve the desired changes (Stickler et al., 2018).
Jurisdictional approaches are broadly defined as wall-to-wall frameworks that seek to align governments, businesses, NGOs, and local stakeholders in specific administrative jurisdictions around common interests in land use governance (Fishman et al., 2017; Boyd et al., 2018). They strongly resemble integrated landscape approaches, but their key distinctive feature is a high level of governmental involvement in a landscape that is defined by policy-relevant boundaries (Ros-Tonen et al., 2018). There are multiple scales where jurisdictional approaches may occur - national, subnational, and local. A major recent focus has been on the subnational level, especially in countries where subnational jurisdictions have broad authority to reduce deforestation (Busch and Amarjargal, 2020). Jurisdictional approaches also have different foci. These include jurisdictional approaches to zero deforestation commitments that are delinked from governments (WWF, 2016), multi-stakeholder jurisdictional programs (Hovani et al., 2018), and jurisdictional approaches to REDD+ and low emissions development (Boyd et al., 2018), among others.
The concept of jurisdictional approach is relatively new, and its analysis is only emerging in the literature. Yet, jurisdiction-wide efforts to reduce deforestation, in its broad sense, irrespective of the extent of government involvement or of how comprehensive the actions are, have been in place for some time. In this paper, we analyze two contrasting initiatives in the Brazilian municipalities of Paragominas and São Félix do Xingu. Municipal-level initiatives have been in place in the Brazilian Amazon at least since the late 2000s, when some municipalities were targeted by federal government strategies to reduce deforestation (Thaler et al., 2019). This was triggered by Brazil’s highest deforesters list that defined priority municipalities in order to tackle deforestation more effectively, through command-and-control actions such as credit restrictions and field-based law enforcement (Cisneros et al., 2015). Such strategies included municipal government-led programs and NGO interventions ranging from promoting environmental capacity building of local actors to pilot testing sustainable agricultural practices (Piketty et al., 2015; Gebara et al., 2019).
By analyzing the two cases, our aim is to contribute to ongoing debates, analyses and implementation of jurisdictional approaches to reduce deforestation. Further, we answer the following questions: (1) who should be involved in the design of jurisdictional approaches? (2) how should tradeoffs between inclusiveness and effectiveness be addressed? (3) how can the effectiveness of jurisdictional approaches be measured? (4) how should local jurisdictional approaches align or be nested in higher level approaches? (5) how can such approaches combine public and private actions?
We focus on jurisdictional approaches at the local scale, as these have received considerably less attention in the literature. We do not assume that our case studies are necessarily perfect illustrations of jurisdictional approaches, but rather that they are insightful examples of the complexity of real-world interventions involving local governments in reducing deforestation. The municipalities of Paragominas and São Félix do Xingu were selected because they are emblematic cases of contrasting pathways of forest governance in the Brazilian Amazon, where multiple state and non-state efforts to curb deforestation have been undertaken, including governmental programs, NGO projects, and supply-chain initiatives. On the one hand, Paragominas became known as a “success story” as the first municipality to be taken off the list of highest deforesters, through an alliance involving the municipal government, NGOs, ranchers and soy farmers (Sills et al., 2015; Viana et al., 2016). On the other hand, despite many efforts and an overall reduction in deforestation rates, São Félix do Xingu is still among the top deforestation sites in the Brazilian Amazon (Schneider et al., 2015; Schmink et al., 2017). An analysis of the processes to curb deforestation in these two distinct municipalities provides lessons for both scholars and practitioners on how to support jurisdictional approaches moving forward.
The paper proceeds as follows. In Section “Data Collection and Analysis” we present the methodological approach, including the analytical framework and the data collection methods. In Section “Context” we provide a short background of the Brazilian Amazon policy context and the socio-ecological context. In Section “Input and Output Analysis” we present a summary of the forest governance intervention timelines and the deforestation trends observed. In Section “Characterizing Processes in PGM and SFX” we present a categorization of the processes in the two case study municipalities, and in Section “Comparing Processes to Reduce Deforestation Across Five Indicators” we compare the processes through the lens of five key indicators identified from the literature on jurisdictional approaches. In Section “Lessons for Jurisdictional Approaches” we conclude the paper with a summary of lessons learned for jurisdictional approaches.
Data Collection and Analysis
This paper performs a two-case (“cross-case”) or comparative case analysis (Yin, 2014). Case study analysis is the most suitable method to address “how” and “why” questions and to investigate a contemporary complex social phenomenon in depth and in its real-world context, particularly when the boundaries between the phenomenon and the context are not clearly defined (Yin, 2014). This paper adopts the notion of context-inputs-process-outputs (CIPO) that has been widely used in the literature of educational impact evaluations and extends it to the land use sector (Scheerens, 1990). The context (C) is understood as the socio-economic and biophysical factors that shape outcomes (Börner and Vosti, 2013; Wehkamp et al., 2018). The inputs (I) are the interventions, including policies and initiatives, designed to reduce deforestation and enhance land-use governance (Howlett, 2005). The process (P) is the way local actors implement specific instruments and develop interventions in that particular context (Birkland, 2011). Outputs (O) are deforestation trends in the municipalities over time.
The analysis in this paper is broadly divided in two main parts. In the first part (see Sections “Context” and “Input and Output Analysis”) we briefly present the context (C), the inputs (I) and the outputs (O), while in the second part of the paper (see sections “Characterizing Processes in PGM and SFX” and “Comparing Processes to Reduce Deforestation Across Five Indicators”) we focus on the process (P). For the first CIPO element, the context (C), we summarize the policy and socio-ecological contexts that affect interventions in the two study municipalities by drawing on peer-reviewed and gray literature. We understand forest governance as a “set of regulatory processes, mechanisms and organizations” through which state and non-state actors at multiple levels shape forest-related actions and outcomes (Lemos and Agrawal, 2006, p. 298).
To capture the inputs (I), we reviewed project reports, governmental documents and other secondary sources for each municipality to build a timeline of interventions since the late 1970s/early 1980s (complete timelines are presented as Supplementary Information). Outputs (O) were measured using Brazil’s official forest monitoring data to assess deforestation dynamics in the two municipalities through changes in the extent of municipal deforestation between 2005 and 2018 (INPE, 2019).
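For readers who want to reproduce this kind of output measure, the sketch below shows one plausible way to compute annual and cumulative deforestation per municipality from a flat table of PRODES-style increments. The file name and column names are hypothetical; the actual INPE data layout differs and is distributed as spatial data.

```python
import pandas as pd

# Hypothetical flat export of PRODES deforestation increments:
# one row per municipality-year, area in square kilometers.
df = pd.read_csv("prodes_increments.csv")  # columns: municipality, year, deforested_km2

study = df[df["municipality"].isin(["Paragominas", "São Félix do Xingu"])]
study = study[(study["year"] >= 2005) & (study["year"] <= 2018)]

# Annual increments per municipality, plus cumulative totals over the period
trends = (
    study.pivot_table(index="year", columns="municipality",
                      values="deforested_km2", aggfunc="sum")
    .sort_index()
)
print(trends)            # year-by-year deforestation trends
print(trends.cumsum())   # cumulative deforestation since 2005
```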
To understand the process, we focus on the period since 2005 and perform two different analyses. First, we characterize the processes in each municipality by linking context and the timing of forest governance interventions (inputs) to deforestation trends (outputs). Second, we provide a systematic comparison of processes based on five indicators from the literature on jurisdictional approaches: (1) the role of the government; (2) multi-stakeholder participation and inclusiveness; (3) adaptive management [as defined by Williams (2011)]; (4) horizontal and vertical coordination; and (5) alignment of public and private (supply-chain) initiatives. Along with data gathered during the timeline construction, both analyses drew on data collected in semi-structured interviews with a total of 102 key stakeholders in the two municipalities (Paragominas n = 70 in 2013 and 2014; São Félix do Xingu n = 32 between 2017 and 2019). Although the interviews did not follow exactly the same format in the two municipalities, they document the main public and private initiatives implemented since 2005, including their outcomes and limitations, the role of different actors, and future outlook and expectations. We gave more attention to farmers in the sampling effort, given that they are the direct agents of land use change in both municipalities. Both methodological and data triangulation were used to ensure that the information collected was reliable and to avoid bias (Arksey and Knight, 1999).
All data collected were then leveraged to assess the role of each respondent and organization in the process, the relevance of specific initiatives, the role of local governments, the effectiveness of multi-stakeholder processes, political coordination, and overall perceptions of changes observed in each municipality. The main changes that had occurred in both municipalities were also captured through direct participation in local meetings and discussions with municipal staff. Table 1 above lists the number of interviews with each type of actor.
Context
Forestry-Related Policy Context in the Brazilian Amazon and Its Multiple Levels
Today, 44.1% of the Amazon is covered by specific legislation for forest protection. Indigenous Territories account for half the area that is formally recognized as protected under federal laws (Santos et al., 2013). Conservation units, protected areas created by the National System for Protected Areas in 2000, make up the other half. In addition to protected areas, another 6.2% of the Amazon is under other special tenure regimes, which includes colonization settlements governed by the Brazilian Agency for Agrarian Reform (INCRA), i.e., federal areas designated for agrarian reform purposes.1 These settlements may be either federal land ruled by INCRA or state land, in the case of Pará, ruled by the State Land Agency (ITERPA). The remaining territory is privately held (22.7%) or unclaimed/with no clear status (27%) (Santos et al., 2013).
Most forestry-related issues in the Amazon are governed through the 2012 Brazilian Forest Code, which requires private properties to maintain 80% forest cover as legal reserve, with some exceptions. The Forest Code also instituted the Rural Environmental Registry (CAR) system, which has been in force in Pará since 2006 and mandates the registration of all rural properties to facilitate social and economic planning and the monitoring of deforestation (Soares-Filho et al., 2014). State governments may reduce the size of legal reserves in private lands outside protected areas from 80 to 50% for the purpose of compliance (but not as a permission to deforest legal reserves above 50%), by designating certain areas as agricultural production zones through Ecological–Economic Zoning plans (Brito, 2019). This is the case of Paragominas and São Félix do Xingu, where the 50% rule applies in private areas. For owners with environmental debts, the Forest Code also tasked state governments with creating an Environmental Regularization Program to regulate how properties with illegal deforestation after 2008 comply with the minimum forest area required. Smallholders are excluded from having to restore legal reserves deforested before 2008 (Brito, 2017). Some of the state regulatory competences such as CAR have been decentralized to certain municipalities in recent years, but most of the responsibility remains at state level.
Socio-Ecological Context of the Case Studies
PGM and SFX are located in the eastern Amazonian state of Pará (Figure 1). Although their demographic, temporal and economic dynamics involve different processes as detailed below, both municipalities were profoundly shaped by frontier expansion dynamics associated with road building and colonization policies during the military regime (1964–1985) (Tritsch and Le Tourneau, 2016; Schmink et al., 2017). This period was marked by intense conflicts over access to land between newcomers and indigenous and traditional riverside dwellers, among the newcomers themselves, and between newcomers and external investors such as mining companies (Schmink and Wood, 2012). As in many other Amazonian frontiers, the predominant economic model was based on environmentally degrading activities such as logging, extensive cattle ranching and slash-and-burn agriculture (Margulis, 2004).
Although frontier expansion started earlier in PGM (1960s) than in SFX (1980s), both municipalities experienced high rates of forest loss in their territories throughout the 1990s and 2000s. By the mid-2000s, when Brazil started to plan ambitious environmental policies that led to impressive progress in forest governance (Hecht, 2012), PGM and SFX were among the municipalities with the highest deforestation in the Amazon. Consequently, when Brazil’s Federal Government intensified actions to reduce deforestation and launched a list of critical municipalities in 2008, both SFX and PGM were on it. The list of highest deforesters identified the municipalities to be subsequently targeted by command-and-control actions, such as credit restrictions and field-based law enforcement (Cisneros et al., 2015).2 This instrument ended up triggering the emergence of local processes to curb deforestation in both municipalities (Thaler et al., 2019).
Despite both being highly deforested municipalities in absolute terms by the mid-2000s, SFX and PGM have had their own occupation dynamics and differ significantly in size, tenure, share of deforested area, and agrarian structure (Table 2). PGM witnessed a land-use intensification and diversification process involving the rapid expansion of mechanized agriculture and an increase in timber plantations (Tritsch et al., 2016). This intensification occurred largely because there were few unclaimed areas left into which to expand. At the same time, mining became an important source of municipal revenue, particularly since the late 2000s. In contrast, in SFX, livestock continued to expand, increasing the size of herds and extending pastureland. This was associated with the existence of large portions of unclaimed lands, particularly in the APA Triunfo do Xingu. According to the IBGE agricultural census (IBGE, 2017), nearly 90% of the municipal landholdings are used for livestock activities, not only by large-scale ranchers but also by a substantial number of smallholders. In general, smallholders tend to focus on breeding while larger actors tend to specialize in raising and fattening cattle (Garcia et al., 2017). In contrast to PGM, mechanization and grain crop production have remained relatively low. Soybean is not yet produced in SFX, and there are no records of timber plantations. Still, the number of landholdings growing permanent crops is increasing, mostly due to the expansion of cocoa, a promising new crop among smallholders which, in 2017, involved approximately 1,355 families (IBGE, 2017).
Input and Output Analysis
Inputs: Forest Governance Interventions
Figure 2 provides a brief visual summary of the timelines of forest governance interventions. The main similarities and differences in efforts to reduce deforestation and promote sustainable land use are listed in Table 3.
Outputs: Deforestation Trends
Both SFX and PGM mirror the general deforestation trends in the Brazilian Amazon and witnessed a rapid reduction in deforestation rates starting in 2005. In SFX, an initial period of abrupt reduction, which reached its lowest level in 2011, was followed by a period of stabilization at low rates and then by a slight increase in recent years. In PGM, deforestation stabilized at residual levels in 2012. Figure 3 depicts the trends. However, while reductions were similar, the overall trajectories differ. PGM is an old frontier where deforestation started in the 1960s, mostly linked to the construction of the Belém-Brasília road. By 2005, 42% of the municipal area was deforested and a significant proportion of the remaining forests was undergoing degradation (Hasan et al., 2019). By contrast, by 2005, SFX represented a new frontier with only 16% of accumulated deforestation.
Figure 3. Deforestation trends in SFX and PGM between 2005 and 2018. Source: INPE (2019).
Characterizing Processes in PGM and SFX
Based on the context and the analyses of inputs and outputs, along with interview data on actor perceptions, we identified three distinct moments in time in each jurisdiction.
Categorization of PGM
Command and Control (2005–2008)
PGM was subject to an initial phase of command and control (2005–2008). In 2005, PGM was impacted by several federal field-based law enforcement operations such as Curupira and Ouro Verde. Since 2006, the municipality has also been monitored by the Soy Moratorium, the main Amazon-level non-state sustainability instrument in the soy sector (Piketty et al., 2015). Yet being added to the list of highest deforesters in 2008, which led to credit restrictions and the launch of the Arc of Fire operation, was the decisive moment. PGM faced heavy pressure to reverse a situation that had severe negative social and economic impacts, for example due to the closure of illegal sawmills and charcoal ovens, as interviewees consistently mentioned. This led the municipal government, with support from the main local actors including timber entrepreneurs, soybean growers and ranchers, to start negotiations with the Ministry of Environment to produce a roadmap to get PGM off the list. The first step was the announcement of a local zero-deforestation pact in February 2008. In March, with support from NGOs, PGM started to advance on CAR implementation and deforestation monitoring (Coudel et al., 2013). Later in the same year, the federal operation Rastro Negro targeted illegal charcoal production among smallholders. That was the last and decisive law enforcement operation. Contrary to previous operations, Rastro Negro was carried out in close collaboration with the municipal government, which was already committed to reducing deforestation as a necessary step to get off the list. These interventions became known as the Green Municipality initiative.
Green Municipality (2009–2014)
This phase corresponded to a period in which the Green Municipality Initiative focused on the municipal government’s legal and operational capacity. That was particularly visible in environmental matters, for example with the Charcoal Law in 2009 (Coudel et al., 2013). In 2010, PGM was the first municipality to be taken off the list, and the criteria negotiated with the federal government (an annual deforestation rate of less than 40 km2 and 80% of private properties under CAR) were adopted as a federal regulation for other municipalities in the Amazon. Simultaneously, the government of Pará incorporated the Green Municipality guidelines and established a state-level program using the same name, while local politicians took on state-level roles. In parallel, the Pecuária Verde project targeting livestock intensification and adoption of best management practices also provided international visibility to local ranchers (Silva and Barreto, 2014). During this period, PGM became a symbol of sustainability in the Amazon. Smallholders and indigenous groups were relatively absent from this political success (Viana et al., 2016). The role of NGOs and external actors was significantly reduced, particularly once the goal of getting off the list was achieved. After the new municipal government took over in 2013, the term Green Municipality initiative became obsolete and was no longer used. This phase ended in 2014 when the Soy Moratorium was replaced by the Grain Protocol in Pará. The new agreement has similar aims (to forbid the sale of soybean produced in deforested areas) and took over some of the Cattle Agreement conditions (Piketty et al., 2017).
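As a side note, the exit criteria described above lend themselves to a simple rule check; the sketch below encodes them directly, with illustrative numbers only.

```python
def meets_exit_criteria(annual_deforestation_km2, car_coverage):
    """Criteria negotiated for leaving the list of highest deforesters:
    annual deforestation below 40 km2 and at least 80% of private
    properties registered in the CAR."""
    return annual_deforestation_km2 < 40.0 and car_coverage >= 0.80

# Illustrative values roughly matching PGM around 2010
print(meets_exit_criteria(25.0, 0.85))  # True
print(meets_exit_criteria(55.0, 0.85))  # False: deforestation still too high
```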
Moving Beyond Zero Deforestation (2015–2019)
In the third stage, deforestation rates remained very low (below 25 km2 per year) and PGM’s aim moved to improving local economic dynamics. Moreover, other ecological challenges emerged, especially fires and forest degradation (Hasan et al., 2019), which led to the need to combine more efficient production systems, incentive mechanisms and forest restoration initiatives (Osis et al., 2019). In 2015, 28 properties with legal reserve deficits were allowed to become regularized through civil law contracts with landowners with forest surpluses in the same municipality (Brito, 2019). This was possible because the municipality introduced a local law in 2014 regulating compensation for deforested legal reserves (Piketty et al., 2015) and became a pioneer in authorizing such a procedure in Pará. Because the push to expand intensified cattle ranching mostly concerned a small group of ranchers linked to the political elite, and efforts to access premium markets failed (Silva and Barreto, 2014), the focus shifted to landscape-level strategies. In this period, PGM launched an Integrated Municipal Development Plan based on land use suitability and targeting jurisdictional certification as a strategy to obtain funding and market incentives. It also achieved Verified Sourcing Area status through the Sustainable Trade Initiative.3 Smallholder participation in the local agenda is on the increase through training efforts, institutional consolidation carried out jointly with the smallholder union, and their involvement in the design of the Integrated Municipal Development Plan.
Categorization of SFX
Command and Control (2005–2009)
The first stage, command and control (2005–2009), was characterized by external initiatives that attempted to reduce deforestation. These included the formal creation of federal and state conservation units, the inclusion of SFX on the list of highest deforesters in 2008, and two federal command-and-control operations: Operation Boi Pirata and the Cattle Embargo. These operations caused local tensions and revolt (Sousa et al., 2016). They were nevertheless key moments that triggered a change in local perception: deforestation was no longer acceptable and there was a need to look for alternative models delinked from deforestation. Ranchers and slaughterhouses were highly active in this period, particularly since the main target of command and control was livestock production. The municipal government and several smallholder organizations also took an active part in local discussions. During this period, NGOs played a leading role in promoting local negotiations and agreements, capturing political attention, and fundraising. The Federal Public Prosecutor’s (MPF) Office also took on a major role (throughout the region), pressuring slaughterhouses and, indirectly, ranchers to stop deforestation. MPF’s actions led to the legally binding Terms of Adjustment of Conduct (referred to here as the Cattle Agreement) in which the main slaughterhouses agreed not to buy cattle from deforested areas.4 This stage ended with two local meetings between all stakeholders that paved the way for a broad local agreement focused on reducing deforestation in SFX (Neto and Silva, 2014).
Municipal Pact and Local Enthusiasm (2010–2013)
After the previous period of apprehension and revolt, optimism and enthusiasm became the dominant mood in the municipality. Several projects were implemented, or their main activities peaked, in this period, with efforts to get all the different stakeholders on board. Many organizations opened local offices, hired staff and received countless visitors. The Pacto Municipal project became the structural intervention in the municipality (Sousa et al., 2016). A local agreement on reducing deforestation, a multi-stakeholder forum and a post-pact agenda were the main outputs of the project. This period was characterized by a strong focus on CAR implementation. To get off the list of highest deforesters, municipalities were required to have at least 80% of their territory registered with CAR. Moreover, the Cattle Agreement required slaughterhouses to only buy animals from registered landholdings.
Building the capacity of both the municipal government and civil society organizations was also important. Some of these actions were linked to REDD+ efforts, as SFX was selected as the site of an NGO-led pilot project. However, the REDD+ orientation did not last long due to the changing priorities of local organizations (Gebara et al., 2019) and a lack of donor funds. For instance, the Municipal Green Fund created to support the development of sustainable economic activities failed to attract funding. At the end of this period, the limits of this strategy became apparent: too much participation and focus on institutional capacity, and too little effort to promote economic alternatives to deforestation, started to cause disappointment. Despite the positive results of CAR implementation (SFX achieved 80% of CAR coverage in November 2011), deforestation, which had reached its minimum level in 2011, slowly started to increase again, particularly among smallholders in INCRA settlements and in the APA Triunfo do Xingu. Nonetheless, deforestation rates in private landholdings outside the APA remained low (below 10 km2 per year), suggesting a positive effect of the Cattle Agreement, CAR implementation and credit restrictions in these territories.
Disappointment and Value Chain Initiatives (2014–2019)
With the end of the Pacto Municipal project in 2014, local actors, in particular smallholders, faced a slump in expectations, according to interviews with their representatives. Several organizations stopped their field activities in the municipality and, at the same time, multi-stakeholder forums became less relevant. Most of the work on CAR implementation ended, and the focus on capacity building moved toward improving land use practices through value chain related projects. Among large landholders, intensification and the development of transparency and traceability became priorities in the cattle sector. An important attempt to solve the traceability problem was the Rebanho do Xingu Seal. The seal guarantees zero deforestation throughout the three production stages (breeding, raising, and fattening) through the analysis of CAR and GTA (Portuguese acronym for the health inspection document provided by the state agency ADEPARA). The pilot initiative was able to identify around 500 beef cattle raised on deforestation-free properties, whose meat was sold at Walmart stores. However, this initiative stagnated as it was unable to overcome problems including high implementation costs and the lack of market incentives for zero-deforestation beef. Among smallholders, the most important land use strategy became the restoration of degraded pastures with cocoa-led agroforestry systems and the production of certified cocoa. Despite some dynamism in the cocoa sector, up to now, initiatives in both the beef and cocoa sectors have shown limited capacity to be game changers. In 2016, a municipal ABC plan was adopted as the main development strategy and inherited a significant part of the post-pact agenda. However, successive changes in the municipal government reduced local ownership of these agendas. In recent years, the focus has switched to themes such as credit, technical assistance and clarifying land tenure, which are considered to be the main structural constraints to broader adoption of improved land use practices.
Comparing Processes to Reduce Deforestation Across Five Indicators
Table 4 below summarizes the main differences between the processes at the two locations.
Government Role
Interestingly, government engagement in PGM and SFX differed considerably. The PGM case was marked by strong municipal government leadership in all phases, particularly the first. The mayor of PGM reacted quickly when federal command and control intensified and local actors were apprehensive and, in some cases, willing to respond to federal officials with violence. It was a risky decision, as the political dividends from opposing local interest groups who profited from continued deforestation were not clear at the time. Yet, given that the mayor’s leadership was accepted by the local elite, the municipal government managed to find the local social support required to achieve its primary goals. In contrast to PGM, governmental involvement was more intermittent across the three phases in SFX. Local responses to SFX being on the list of highest deforesters, for example, were mainly led by third parties, such as NGOs outside the municipality with donor support. Based on our analysis of CIPO elements, we classified the process in PGM as bottom up, i.e., led by actors at the municipal level, whereas the process in SFX was more top down, i.e., led by external actors.
Multi-Stakeholder Participation and Inclusiveness
SFX is clearly an example where the presence of external actors and externally funded projects required the engagement of a broad base of local actors through participatory processes. Particularly in the second phase, many efforts were made to strengthen the capacities of more marginalized groups, such as smallholders and indigenous groups, and there was a strong emphasis on building multi-stakeholder platforms.5 While the rationale of these initiatives was to promote wide participation as a strategy to strengthen ownership of governance processes and, in this way, to achieve more effective results, too much participation turned out to be counterproductive. According to interviewees, too many multi-stakeholder platforms, countless meetings, and speeches that encouraged participation raised high hopes among participants that were eventually not fulfilled, leading to general demobilization and disenchantment. Moreover, important players behind deforestation, such as land speculators, were rarely targeted by participatory processes.
Conversely, the example of PGM was more selective and elitist, as discussed by Viana et al. (2016). Despite the broad-based pact signed by virtually all stakeholders, some groups, including smallholders and indigenous groups, did not participate in or even influence the PGM strategy. Since most deforestation was taking place on medium and large landholdings, and smallholders accounted for only a small part of the territory, it was possible to achieve deforestation targets without involving all stakeholders. Despite their initially tense reaction, the local elites were ready to take steps toward agricultural intensification and economic diversification as pathways to curb deforestation. This attitude was facilitated by PGM’s old frontier status.
Adaptive Management
As the process advanced in PGM and SFX (second and third phases), the difference in governmental leadership between the two municipalities was also reflected in their ability to manage stakeholder expectations and take new steps. PGM responded faster and quickly mobilized local actors. This pioneer status and political capacity enabled the municipality to define the rules for getting off the list of highest deforesters. Once that goal was achieved, the local Green Municipality initiative became obsolete, which led to a shift to new targets, such as the new Integrated Municipal Development Plan and the Verified Sourcing Area status described earlier. By contrast, SFX took nearly 2 years longer to reach a minimum agreement and had to accept the rules previously defined by PGM. Moreover, as SFX is much larger and more complex, despite tremendous effort and a significant reduction in deforestation, it was not able to bring the annual deforestation rate below 40 km2 and get off the list of highest deforesters. This led to pessimism, as the expected benefits and satisfaction from the efforts already made did not materialize. Some interviewees claim that this target is impossible for SFX given its size and, hence, argue that the success of PGM was the reason for the failure of SFX. Nevertheless, the changes in targets and activities introduced in 2016 by the new Municipal ABC Plan did not differ significantly from the previous arrangement and, in the end, were not substantially implemented.
Horizontal and Vertical Coordination
Both case studies revealed some efforts to promote cross-sectoral policy alignment, but the processes mainly focused on specific commodities and actors. Yet, a few differences were apparent. In SFX, there has been, since the second phase, a huge effort and investment to build sectoral policies, particularly by NGOs. For example, several projects and activities focused on indigenous livelihoods, economic alternatives for smallholders, cattle intensification for medium and large-scale landholders, and capacity building for local institutions. Yet, despite the many efforts to align sectoral demands and transform them into programs, their operationalization remains difficult. In PGM, sectoral strategies targeting medium to large-scale production of commodities have long played a central role (for example, the Pecuária Verde project). Recent instruments such as the Verified Sourcing Area status and the Integrated Municipal Development Plan were important steps toward promoting more coherent strategies across the jurisdiction, although it is still too early to judge whether this will be achieved.
The level of vertical coordination in the two cases differs remarkably. On the one hand, PGM achieved high levels of coordination with the federal government, and even more intense coordination with the state government, in the first and second phases. The operation Rastro Negro is one example of municipal and federal collaboration. The adoption of Green Municipalities as a state-level program and the spread of the PGM model throughout the state is an example of effective collaboration between the municipality and the state. Additionally, political stability was stronger in PGM, linked to the central role that local elites played in maintaining the political configuration. Conversely, in SFX there was a serious lack of vertical coordination. Interviewees pointed to difficult articulation with both the state government (opposition party) and the federal government (lack of contact). In SFX, distinct political groups have been in power across the three phases, and nearly every local election resulted in significant strategic changes in municipal politics. The political setting is also very problematic in SFX, since two of the last four elected mayors were charged with corruption and one environmental secretary was murdered in the same period. In most cases, articulation across governance levels was led by NGOs, which tend to have more permanent structures. As many of the structural problems were related to the lack of operational capacity of state and federal agencies (for example, in relation to the APA Triunfo do Xingu and tenure regularization in general), these problems remain largely unresolved, which has limited the capacity of SFX to progress.
Alignment of Public and Private Initiatives
Both the Soy Moratorium and the Cattle Agreement, as initiatives involving private commitments to remove commodity-driven deforestation from supply chains, played an important initial role in both municipalities, as confirmed by interviews with private sector representatives and farmers. PGM was particularly targeted by the Soy Moratorium in the first phase, while in SFX the Cattle Embargo, and later the Cattle Agreement, played a determining role in engaging local ranchers in the first and second phases. In many cases, efforts to implement CAR were directly financed by meatpackers and slaughterhouses. However, the cases we analyzed point to a clear mismatch between public and private efforts. On the one hand, corporate actors focus on reassuring investors and buyers that their products are deforestation-free, but do the minimum with respect to environmental and social commitments, and even some legal requirements, as discussed elsewhere (Tonneau et al., 2017). On the other hand, municipal actors target economic benefits and long-term development. Since the private sector failed to compensate farmers and local government for improved sustainability through premiums or other market incentives, these actors have yet to see the benefits of aligning with corporate aims. This was particularly sensitive in the third stage, for example in the attempt to promote traceability and certified beef through the Rebanho do Xingu Seal, which failed to create a viable system to compensate ranchers. Partly due to this lack of incentives in the beef chain, the Cattle Agreement lost effectiveness in SFX over time.
Lessons for Jurisdictional Approaches
Jurisdictional approaches appear in current global agendas as promising strategies to address deforestation, yet critical analysis of existing experiences is lacking. The two contrasting municipal-level efforts to reduce deforestation in the Brazilian Amazon highlighted in this study provide a broader understanding of whether, where and how local jurisdictional approaches can help reduce deforestation. The case studies also help identify common principles that could strengthen processes across diverging geographic, social, economic and political contexts. In the following sub-sections, we answer the five questions we posed in the introduction.
Who Should Be Involved in the Design of Jurisdictional Approaches?
By definition, governments are meant to be at the core of jurisdictional approaches, as their competence is required to address the structural constraints driving deforestation. As seen in PGM, strong government leadership was essential for progress. Yet, in many forest frontiers, poor domestic policy and legal frameworks, along with weak state monitoring and enforcement capacity, predominate. This leads us to question to what extent jurisdictional approaches to reduce deforestation are possible where state capacity and local authority to tackle deforestation are weak. In such situations, the role of non-state actors should not be underestimated, given their longer-term commitment to supporting key interventions in certain municipalities even in periods when local governments play a less active role.
How Should Tradeoffs Between Inclusiveness and Effectiveness Be Addressed?
Promoting equitable participation and mitigating risks of unequal benefit sharing are important aspects of any strategy to reduce deforestation. In that sense, multi-stakeholder platforms and local participation more broadly have been highlighted as key to preventing global agendas from capturing local processes (Hovani et al., 2018) and promoting greater equity and legitimacy in policy design and implementation (Loft et al., 2017). However, multi-stakeholder platforms and participation in general should be carefully addressed and fine-tuned to local realities as they are difficult to implement in practice and to maintain in the medium/long run. Our findings confirm that not all problems can be solved through the participation of diverse stakeholders (Larson et al., 2019). Overvaluing participation as a box-ticking requirement may also have counterproductive effects in the long run, such as demotivation, if those responsible are incapable of bringing about the necessary changes. In that sense, understanding participation as a medium/long-term target and accepting a certain level of tradeoff between inclusiveness and effectiveness would be a more pragmatic approach. This is particularly relevant in cases where deforestation drivers are associated with specific local groups or where unequal power relations between actors with conflicting priorities may jeopardize processes (Rodriguez-Ward et al., 2018; Sarmiento-Barletti et al., 2020).
How Can the Effectiveness of Local Jurisdictional Approaches Be Measured?
Based on the experience gained in PGM and SFX trying to get off the list, it is clear that it is not possible to impose the same targets or expect the same rate and level of deforestation reduction in all cases. Each jurisdiction is unique in terms of features (e.g., spatial configuration, agrarian structure, land use activities, or deforestation drivers), and is shaped by exogenous factors (e.g., market trends, value chain configurations, and different interventions that interact in distinct ways in each jurisdiction). As a result, jurisdictions may be more or less ready to halt deforestation, and reach net, gross, legal or illegal zero-deforestation targets. While the final objective remains important, it is at least as important to recognize the progress made. This avoids a sense of failure that may wrongly delegitimize the efforts invested and may call the leadership of the initiatives taken into question. If such progress is not recognized, local efforts might not be sufficiently valued by external observers, donors or higher-level governments, which might lead to contradictory actions and/or demotivate local stakeholders.
The problem of unrealistic expectations about achievements or limited time frames to promote structural change has also been mentioned elsewhere (Boyd et al., 2018). In that sense, developing a transparent and participatory monitoring system to highlight progress and identify gaps is a viable option. It is not only a question of having a system that allows comparison between jurisdictions using general indicators. Such monitoring should focus on what is progressing, what is not, and how local actors perceive that progress and those shortcomings. This reinforces other claims that metrics need to be developed to establish values, track progress and enable adaptive management in ways that inform stakeholders’ understanding of the impacts of their actions and what else needs to be done (Sayer et al., 2015; Reed et al., 2016).
How Should Local Jurisdictional Approaches Be Aligned With or Nested in Higher-Level Approaches?
Coordination between levels of government is key to matching the scale associated with different challenges, including environmental regularization and land tenure (Reydon et al., 2019). The authority of subnational governments to address deforestation varies from country to country; Brazil is one of the countries where second-tier subnational governments (i.e., states) have the greatest authority to reduce deforestation (Busch and Amarjargal, 2020). Interestingly, the local initiatives in PGM and SFX emerged in direct response to the absence of state-level action in reply to federal command-and-control actions. While, in theory, local governments can better understand and target local drivers, they require institutional support at higher levels to solve critical issues. In some cases, decentralizing state capacities may suffice to address those structural constraints. But in cases where decentralization is not possible or feasible, finding the right mix of local action to promote ownership of processes and subnational action is key to solving critical problems. This argues against one-size-fits-all models or universal recipes that may work in one place but not in others.
How Can Such Approaches Combine Public and Private Actions?
Despite calls to strengthen synergies with jurisdictional initiatives (Lambin et al., 2018), supply chain initiatives and private efforts have, in general, hardly dialogued with governmental efforts at the local or even subnational level. Although they can provide an initial impetus in cases where value chain actors are not sufficiently engaged, meaningful market incentives have yet to make their way into the Amazon, and corporations mostly do the minimum required to avoid criticism, as has been the case at least in the soybean and beef chains. Private action is still very modest and far below what would be needed to promote and sustain change at the local level, particularly as market incentives are key to maintaining local engagement and guaranteeing progress. Since pay-for-performance incentives have been “too low and too slow” to reach the ground (Seymour and Busch, 2016), few options remain other than governmental incentives and ad hoc non-governmental support to encourage actors to pursue positive agendas, and to continue pursuing them in areas where progress is being made, at least until significant external investments are available. In that sense, a transparent and participatory monitoring system would also help local actors to communicate externally and to attract private investment that is truly engaged in promoting sustainability.
While our case studies suggest that there is still a long way to go to build robust and sustainable long-term strategies at the local level, new opportunities are emerging. Major corporations recognize that they will miss their 2020 global zero-deforestation targets and are looking for new models and strategies to rapidly implement their commitments. At the same time, the global community is calling for enhanced ambition to achieve the Paris Agreement goals, and new donor- and market-based opportunities are developing with promises of increased funding for governments responsible for tropical forests. The extent to which local jurisdictions will be able to design attractive strategies for such investment, and to which finance will actually reach the ground, is uncertain, but surely both are required for success. In that sense, it would be wise to start closing that gap as rapidly as possible.
Data Availability Statement
The datasets generated for this study are available on request to the corresponding author.
Author Contributions
FB and M-GP conceived the ideas, planned the manuscript, performed the data analysis, and led the writing. FB collected the data for SFX. RP-C, M-GP, and JP collected the data for PGM. PP, BB, AD, RP-C, EG, and ID contributed to the drafts. All authors approved the final manuscript.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Funding
This research was part of the Priority 18 of the CGIAR (Consultative Group on International Agricultural Research) Research Program on Forests, Trees and Agroforestry (FTA, http://foreststreesagroforestry.org) and of CIFOR’s Global Comparative Study on REDD+. The funding partners that have supported this research include the Norwegian Agency for Development Cooperation (NORAD) and the CGIAR Trust Fund (www.cgiar.org/funders).
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/ffgc.2020.00096/full#supplementary-material
Footnotes
- ^ The other two special tenure regimes are quilombola territories, which are collective titles given to communities with proven African ancestry; and military areas.
- ^ The list of highest deforesters was one of the main instruments designed under the Plan for the Protection and Control of Deforestation in the Amazon (PPCDAm), the umbrella program that concentrated federal efforts after 2004.
- ^ A Verified Sourcing Area is a concept based on a local pact between private and public institutions to achieve certain sustainability targets. Responsible investors or buyers are connected with these areas, thereby valorizing local efforts for sustainability.
- ^ It implies not buying from areas on IBAMA’s embargo list, from areas deforested after 2008, from areas overlapping Conservation Units (UC) or Indigenous Territories (IT), or from properties on the slave labor list. The first step was to join the CAR.
- ^ Conselho Municipal do Meio Ambiente, Conselho Municipal de Desenvolvimento Rural, Conselho Gestor da APA Triunfo do Xingu, Comitê Gestor do Plano ABC, Comissão da Agenda do Pacto.
References
Arksey, H., and Knight, P. (1999). Interviewing for Social Scientists: An Introductory Resource with Examples. London: Sage. doi: 10.4135/9781849209335
Assunção, J., Gandour, C., and Rocha, R. (2015). Deforestation slowdown in the Brazilian amazon: prices or policies? Environ. Dev. Econ. 20, 697–722. doi: 10.1017/S1355770X15000078
Birkland, T. (2011). An Introduction to the Policy Process. New York, NY: Routledge.
Börner, J., and Vosti, S. A. (2013). “Managing tropical forest ecosystem services: an overview of options,” in Governing the Provision of Ecosystem Services, eds R. Muradian and L. Rival (Dordrecht: Springer Netherlands).
Börner, J., Wunder, S., Wertz-Kanounnikoff, S., Hyman, G., and Nascimento, N. (2014). Forest law enforcement in the Brazilian amazon: costs and income effects. Glob. Environ. Chang. 29, 294–305. doi: 10.1016/j.gloenvcha.2014.04.021
Boyd, W., Stickler, C., Duchelle, A. E., Seymour, F., Nepstad, D., Bahar, N. H. A., et al. (2018). Jurisdictional Approaches to REDD+ and Low Emissions Development: Progress and Prospects. Washington, DC: World Resources Institute.
Brito, B. (2017). Potential trajectories of the upcoming forest trading mechanism in Pará state, Brazilian amazon. PLoS One 12:e0174154. doi: 10.1371/journal.pone.0174154
Brito, B. (2019). The pioneer market for forest law compliance in Paragominas, Eastern Brazilian amazon. Land Use Policy 94:104310. doi: 10.1016/j.landusepol.2019.104310
Busch, J., and Amarjargal, O. (2020). Authority of second-tier governments to reduce deforestation in 30 tropical countries. Front. Forests Glob. Change 3:1. doi: 10.3389/ffgc.2020.00001
Carvalho, W. D., Mustin, K., Hilário, R. R., Vasconcelos, I. M., Eilers, V., and Fearnside, P. M. (2019). Deforestation control in the Brazilian amazon: a conservation struggle being lost as agreements and regulations are subverted and bypassed. Perspect. Ecol. Conserv. 17, 122–130. doi: 10.1016/j.pecon.2019.06.002
Cisneros, E., Zhou, S. L., and Börner, J. (2015). Naming and shaming for conservation: evidence from the Brazilian amazon. PLoS One 10:e0136402. doi: 10.1371/journal.pone.0136402
Coudel, E., Viana, C., Piketty, M. G., and Poccard-Chapuis, R. (2013). Conditions and Impacts of the Green Municipality Process in Paragominas (and Para State). Research Report of the CRP6 Project Emerging Countries in Transition to a Green Economy. Montpellier: CIRAD.
Cunha, F. A. F., Börner, J., Wunder, S., Cosenza, C. A. N., and Lucena, A. F. P. (2016). The implementation costs of forest conservation policies in Brazil. Ecol. Econ. 130, 209–220. doi: 10.1016/j.ecolecon.2016.07.007
Fishman, A., Oliveira, E., and Gamble, L. (2017). Tackling Deforestation Through a Jurisdictional Approach: Lessons From The Field. Gland: WWF.
Garcia, E., Ramos Filho, F., Mallmann, G., and Fonseca, F. (2017). Costs, benefits and challenges of sustainable livestock intensification in a major deforestation frontier in the Brazilian amazon. Sustainability 9:158. doi: 10.3390/su9010158
Gebara, M. F., Sills, E., May, P., and Forsyth, T. (2019). Deconstructing the policyscape for reducing deforestation in the Eastern amazon: practical insights for a landscape approach. Environ. Policy Govern. 29, 185–197. doi: 10.1002/eet.1846
Gibbs, H. K., Munger, J., L’roe, J., Barreto, P., Pereira, R., Christie, M., et al. (2016). Did ranchers and slaughterhouses respond to zero-deforestation agreements in the Brazilian amazon? Conserv. Lett. 9, 32–42. doi: 10.1111/conl.12175
Gibbs, H. K., Rausch, L., Munger, J., Schelly, I., Morton, D. C., Noojipady, P., et al. (2015). Brazil’s soy moratorium. Science 347, 377–378. doi: 10.1126/science.aaa0181
Godar, J., Gardner, T. A., Tizado, E. J., and Pacheco, P. (2014). Actor-specific contributions to the deforestation slowdown in the Brazilian amazon. Proc. Natl. Acad. Sci. U.S.A. 111, 15591–15596. doi: 10.1073/pnas.1322825111
Griscom, B. W., Adams, J., Ellis, P. W., Houghton, R. A., Lomax, G., Miteva, D. A., et al. (2017). Natural climate solutions. Proc. Natl. Acad. Sci. U.S.A. 114, 11645–11650. doi: 10.1073/pnas.1710465114
Hasan, A. F., Laurent, F., Messner, F., Bourgoin, C., and Blanc, L. (2019). Cumulative disturbances to assess forest degradation using spectral unmixing in the northeastern Amazon. Appl. Veget. Sci. 22, 394–408. doi: 10.1111/avsc.12441
Hecht, S. B. (2012). From eco-catastrophe to zero deforestation? Interdisciplinarities, politics, environmentalisms and reduced clearing in Amazonia. Environ. Conserv. 39, 4–19. doi: 10.1017/S0376892911000452
Hovani, L., Cortez, R., Hartanto, H., Thompson, I., Fishbein, G., Adams, J., et al. (2018). The Role of JURISDICTIONAL Programs in Catalyzing Sustainability Transitions in Tropical Forest Landscapes. Arlington, VA: The Nature Conservancy.
Howlett, M. (2005). “What is a policy instrument? Tools, mixes, and implementation styles,” in Designing Government From Instruments to Governance, eds M. Hills, M. Howlett, and P. Eliadis (Montreal: McGill-Queen’s University Press).
IBGE (2006). Agriculture and Livestock Census Brazilian Institute of Geography and Statistics. Available online at: https://sidra.ibge.gov.br/pesquisa/censo-agropecuario/censo-agropecuario-2006/segunda-apuracao (accessed August 15, 2019).
IBGE (2015). Instituto Brasileiro de Geografia e Estatística. Malhas Territoriais. Available online at: https://geoftp.ibge.gov.br (accessed June 15, 2018).
IBGE (2017). Agriculture and Livestock Census Brazilian Institute of Geography and Statistics. Available online at: https://censos.ibge.gov.br/agro/2017/ (accessed August 15, 2019).
INPE (2019). Projeto PRODES: Monitoramento da Floresta Amazônica Brasileira por Satélite (Instituto Nacional De Pesquisas Espaciais). Available online at: http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/prodes (accessed June 20, 2019).
IPCC (2019). Climate Change and Land: an IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems. Geneva: IPCC.
Jong, W. D., Borner, J., Pacheco, P., Pokorny, B., Sabogal, C., Benneker, C., et al. (2010). “Amazon forests at the crossroads: pressures, responses and challenges,” in Forests and Society – Responding to Global Drivers of Change, eds G. Mery, P. Katila, G. Galloway, R. I. Alfaro, M. Kanninen, M. Lobovikov, et al. (Vienna: IUFRO).
Lambin, E. F., Gibbs, H. K., Heilmayr, R., Carlson, K. M., Fleck, L. C., Garrett, R. D., et al. (2018). The role of supply-chain initiatives in reducing deforestation. Nat. Clim. Chang. 8, 109–116. doi: 10.1038/s41558-017-0061-1
Larson, A. M., Sarmiento-Barletti, J. P., Ravikumar, A., and Korhonen-Kurki, K. (2019). “Multi-level governance: some coordination problems cannot be solved through coordination,” in Transforming REDD+: Lessons and New Directions, eds A. Angelsen, C. Martius, V. De Sy, A. E. Duchelle, A. M. Larson, and T. T. Pham (Bogor: CIFOR).
Lemos, M. C., and Agrawal, A. (2006). Environmental governance. Annu. Rev. Environ. Resour. 31, 297–325. doi: 10.1146/annurev.energy.31.042605.135621
Loft, L., Pham, T. T., Wong, G. Y., Brockhaus, M., Le, D. N., Tjajadi, J. S., et al. (2017). Risks to REDD+: potential pitfalls for policy design and implementation. Environ. Conserv. 44, 44–55. doi: 10.1017/S0376892916000412
Lovejoy, T. E., and Nobre, C. (2019). Amazon tipping point: last chance for action. Sci. Adv. 5:eaba2949. doi: 10.1126/sciadv.aba2949
Margulis, S. (2004). Causes of Deforestation of the Brazilian Amazon. World Bank Working Paper. Washington, DC: World Bank.
Nepstad, D., Irawan, S., Bezerra, T., Boyd, W., Stickler, C., Shimada, J., et al. (2013). More food, more forests, fewer emissions, better livelihoods: linking REDD+, sustainable supply chains and domestic policy in Brazil, Indonesia and Colombia. Carbon Manage. 4, 639–658. doi: 10.4155/cmt.13.65
Nepstad, D., McGrath, D., Stickler, C., Alencar, A., Azevedo, A., Swette, B., et al. (2014). Slowing Amazon deforestation through public policy and interventions in beef and soy supply chains. Science 344:1118. doi: 10.1126/science.1248525
Nepstad, D., Schwartzman, S., Bamberger, B., Santilli, M., Ray, D., Schlesinger, P., et al. (2006). Inhibition of amazon deforestation and fire by parks and indigenous lands. Conserv. Biol. 20, 65–73. doi: 10.1111/j.1523-1739.2006.00351.x
Neto, P., and Silva, R. (2014). Processo de Construção da Sustentabilidade em São Félix do Xingu-PA. Projeto Xingu Ambiente Sustentável. Belém: Instituto Internacional de Educação do Brasil [IEB].
Osis, R., Laurent, F., and Poccard-Chapuis, R. (2019). Spatial determinants and future land use scenarios of Paragominas municipality, an old agricultural frontier in Amazonia. J. Land Use Sci. 14, 258–279. doi: 10.1080/1747423X.2019.1643422
Piketty, M. G., Piraux, M., Blanc, L., Laurent, F., Cialdella, N., Ferreira, J., et al. (2017). “‘Municípios verdes’: from zero-deforestation to the sustainable management of natural resources in the Brazilian Amazon,” in Living Territories to Transform the World, eds P. Caron, E. Valette, T. Wassenaar, D. E. G. Coppens, and V. Papazian (Versailles: Ed. Quae).
Piketty, M.-G., Poccard-Chapuis, R., Drigo, I., Coudel, E., Plassin, S., Laurent, F., et al. (2015). Multi-level governance of land use changes in the Brazilian amazon: lessons from Paragominas, State of Pará. Forests 6, 1516–1536. doi: 10.3390/f6051516
Reed, J., Van Vianen, J., Deakin, E. L., Barlow, J., and Sunderland, T. (2016). Integrated landscape approaches to managing social and environmental issues in the tropics: learning from the past to guide the future. Glob. Change Biol. 22, 2540–2554. doi: 10.1111/gcb.13284
Reydon, B. P., Fernandes, V. B., and Telles, T. S. (2019). Land governance as a precondition for decreasing deforestation in the Brazilian Amazon. Land Use Policy 94, 104313. doi: 10.1016/j.landusepol.2019.104313
Rodriguez-Ward, D., Larson, A. M., and Ruesta, H. G. (2018). Top-down, bottom-up and sideways: the multilayered complexities of multi-level actors shaping forest governance and REDD+ arrangements in Madre de Dios, Peru. Environ. Manag. 62, 98–116. doi: 10.1007/s00267-017-0982-5
Ros-Tonen, M., Reed, J., and Sunderland, T. (2018). From synergy to complexity: the trend toward integrated value chain and landscape governance. Environ. Manage. 62, 1–14. doi: 10.1007/s00267-018-1055-0
Santos, D., Pereira, D., and Veríssimo, A. (2013). Uso da Terra. O Estado da Amazônia. Belém: Imazon.
Sarmiento-Barletti, J. P., Larson, A. M., Hewlett, C., and Delgado, D. (2020). Designing for engagement: a realist synthesis review of how context affects the outcomes of multi-stakeholder forums on land use and/or land-use change. World Dev. 127:104753. doi: 10.1016/j.worlddev.2019.104753
Sayer, J., Margules, C., Boedhihartono, A. K., Dale, A., Sunderland, T., Supriatna, J., et al. (2015). Landscape approaches; what are the pre-conditions for success? Sustain. Sci. 10, 345–355. doi: 10.1007/s11625-014-0281-5
Scheerens, J. (1990). School effectiveness research and the development of process indicators of school functioning. Sch. Eff. Sch. Improv. 1, 61–80. doi: 10.1080/0924345900010106
Schielein, J., and Börner, J. (2018). Recent transformations of land-use and land-cover dynamics across different deforestation frontiers in the Brazilian Amazon. Land Use Policy 76, 81–94. doi: 10.1016/j.landusepol.2018.04.052
Schmink, M., Hoelle, J., Gomes, C. V. A., and Thaler, G. M. (2017). From contested to ‘green’ frontiers in the Amazon? A long-term analysis of São Félix do Xingu, Brazil. J. Peasant Stud. 46, 377–399. doi: 10.1080/03066150.2017.1381841
Schmink, M., and Wood, C. H. (2012). Conflitos Sociais e a Formação da Amazônia. Belém: Ed.UFPA.
Schneider, C., Coudel, E., Cammelli, F., and Sablayrolles, P. (2015). Small-scale Farmers’ needs to end deforestation: insights for REDD+ in São Felix do Xingu (Pará, Brazil). Int. Forestry Rev. 17, 124–142. doi: 10.1505/146554815814668963
Seymour, F., and Busch, J. (2016). Why Forests? Why Now? The Science, Economics, and Politics of Tropical Forests and Climate Change. Washington, DC: Center for Global Development.
Seymour, F., and Harris, N. L. (2019). Reducing tropical deforestation. Science 365, 756–757. doi: 10.1126/science.aax8546
Sills, E. O., Herrera, D., Kirkpatrick, A. J., Brandão, A. Jr., Dickson, R., Hall, S., et al. (2015). Estimating the impacts of local policy innovation: the synthetic control method applied to tropical deforestation. PLoS One 10:e0132590. doi: 10.1371/journal.pone.0132590
Silva, D., and Barreto, P. (2014). O aumento da Produtividade e Lucratividade da Pecuária Bovina na Amazônia: o Caso do Projeto Pecuária Verde em Paragominas. Belém: IMAZON.
Soares-Filho, B., Moutinho, P., Nepstad, D., Anderson, A., Rodrigues, H., Garcia, R., et al. (2010). Role of Brazilian Amazon protected areas in climate change mitigation. Proc. Natl. Acad. Sci. U.S.A. 107, 10821–10826. doi: 10.1073/pnas.0913048107
Soares-Filho, B., Rajão, R., Macedo, M., Carneiro, A., Costa, W., Coe, M., et al. (2014). Cracking Brazil’s forest code. Science 344:363.
Sousa, R., Silva, R., Miranda, K., and Neto, M. (2016). Governança Socioambiental na Amazônia: Agricultura Familiar e os Desafios para a Sustentabilidade em São Félix do Xingu – Pará. Belém: Instituto Internacional de Educação do Brasil - IEB.
Stickler, C., Duchelle, A., Nepstad, D., and Ardila, J. (2018). “Policy innovation and partnerships for change,” in Transforming REDD+: Lessons and New Directions, eds A. Angelsen, C. Martius, V. De Sy, A. Duchelle, A. Larson, and T. Pham (Bogor: CIFOR).
TFA (2017). Tropical Forest Alliance 2020 Annual Report 2016-2017. Cologny: WEF.
Thaler, G. M., Viana, C., and Toni, F. (2019). From frontier governance to governance frontier: the political geography of Brazil’s Amazon transition. World Dev. 114, 59–72. doi: 10.1016/j.worlddev.2018.09.022
Tonneau, J. P., Gueneau, S., Piketty, M. G., Drigo, I. G., and Poccard-Chapuis, R. (2017). “Agroindustrial strategies and voluntary mechanisms for the sustainability of tropical value-chains: the place of territories,” in Sustainable Development and Tropical Agri-chains, eds E. Bienabe, A. Rival, and D. Loeillet (Dordrecht: Springer).
Tritsch, I., and Le Tourneau, F. (2016). Population densities and deforestation in the Brazilian Amazon: new insights on the current human settlement patterns. Appl. Geogr. 76, 163–172. doi: 10.1016/j.apgeog.2016.09.022
Tritsch, I., Sist, P., Narvaes, I., Mazzei, L., Blanc, L., Bourgoin, C., et al. (2016). Multiple patterns of forest disturbance and logging shape forest landscapes in Paragominas, Brazil. Forests 7:315. doi: 10.3390/f7120315
Viana, C., Coudel, E., Barlow, J., Ferreira, J., Gardner, T., and Parry, L. (2016). How does hybrid governance emerge? Role of the elite in building a green municipality in the Eastern Brazilian Amazon. Environ. Policy Govern. 26, 337–350. doi: 10.1002/eet.1720
Wehkamp, J., Koch, N., Lübbers, S., and Fuss, S. (2018). Governance and deforestation — a meta-analysis in economics. Ecol. Econ. 144, 214–227. doi: 10.1016/j.ecolecon.2017.07.030
Whately, M., and Campanili, M. (2013). Programa Municípios Verdes: Lições Aprendidas e Desafios Para 2013/2014. Belém, PA: Governo do Estado do Pará.
Williams, B. K. (2011). Adaptive management of natural resources—framework and issues. J. Environ. Manag. 92, 1346–1353. doi: 10.1016/j.jenvman.2010.10.041
WWF (2016). Jurisdictional Approaches to Zero Deforestation Commodities. Gland: WWF.
Yin, R. K. (2014). Case Study Research: Design and Methods. Thousand Oaks, CA: Sage.
Keywords: REDD+, supply-chain initiatives, forest governance, multi-stakeholder participation, adaptive management, third-tier jurisdictions
Citation: Brandão F, Piketty M-G, Poccard-Chapuis R, Brito B, Pacheco P, Garcia E, Duchelle AE, Drigo I and Peçanha JC (2020) Lessons for Jurisdictional Approaches From Municipal-Level Initiatives to Halt Deforestation in the Brazilian Amazon. Front. For. Glob. Change 3:96. doi: 10.3389/ffgc.2020.00096
Received: 18 September 2019; Accepted: 15 July 2020;
Published: 14 August 2020.
Edited by: Priya Shyamsundar, The Nature Conservancy, United States
Reviewed by: Charles Palmer, The London School of Economics and Political Science, United Kingdom
Tim Cadman, Griffith University, Australia
Copyright © 2020 Brandão, Piketty, Poccard-Chapuis, Brito, Pacheco, Garcia, Duchelle, Drigo and Peçanha. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | https://www.frontiersin.org/articles/10.3389/ffgc.2020.00096/full |
Ericsson has joined the O-RAN Alliance, which focuses on evolving the radio access network (RAN) architecture and orchestration toward open-source, rather than proprietary, implementations.
Ericsson said that joining the O-RAN Alliance "reinforces [its] commitment to network evolution, openness, and industry collaboration" and that it will "focus on the open interworking between RAN and network orchestration and automation, with emphasis on AI-enabled closed-loop automation and end-to-end optimization, with the aim of lowering operating cost and improve end-user performance."
Its engagement with the O-RAN Alliance "is based on the future needs of mobile network service providers, and how networks must evolve to enable broad range of services with strong focus on quality, performance and security," the equipment vendor added.
Ericsson said that it plans to focus on the upper-layer function, as specified in 3GPP, to provide interoperable multivendor profiles for specified interfaces between central RAN functions, which it said would result in faster deployment of 5G networks on a global scale.
The O-RAN Alliance believes that it will be "impossible to bring service agility and cloud scale economics to the RAN without openness."
O-RAN Alliance was founded in February of last year by mobile operators AT&T, China Mobile, Deutsche Telekom, NTT DoCoMo and Orange. The management structure consists of an operating board made up of 15 operators and a Technical Steering Committee, and it currently has six technical workgroups, a TSC workgroup and an operator workgroup.
O-RAN members include AT&T, China Mobile, Orange, NTT DoCoMo, T-Mobile, China Telecom, Airtel, Jio, KT Corp, Singtel, SK Telecom, TIM, Telefonica, Telstra, Verizon, Dish, KDDI, SoftBank and Sprint. Its contributors include Amdocs, Aricent, Broadcom, Ciena, Cisco, Commscope, Ericsson, Fujitsu, Intel, JMA Wireless, Keysight Technologies, NEC, Nokia, Red Hat, Samsung, Viavi and ZTE.
In December 2018, O-RAN said it had started collaboration arrangements with the Linux Foundation to establish an open source software community for the creation of open source RAN software. Collaboration with the Linux Foundation would enable the creation of open source software supporting the O-RAN architecture and interfaces, the entity said. | http://news.xgnlab.com/2019/02/increasing-horizon-of-o-ran-as-ericsson.html |
I just love The Postpartum Stress Center site. One great tool on the site is a PPD risk assessment for women who are pregnant or planning to be pregnant. It's a good way to become educated on various factors that could predispose you to experiencing a postpartum mood disorder. For example, the following are some of the factors listed in the assessment:
- I have had a previous episode of postpartum depression and/or anxiety that was successfully treated with therapy and/or medication.
- I might have experienced symptoms of postpartum depression following previous births, but I never sought professional help.
- I have had one or more pregnancy losses.
- I have a history of depression/anxiety that was not related to childbirth.
- I have lost a child.
- I have been a victim of the following:
Childhood sexual abuse
Childhood physical abuse
Physical assault by someone I know
Physical assault by a stranger
Physical assault during this pregnancy
Sexual assault by someone I know
Sexual assault by a stranger
- There is a family history of depression/anxiety, treated or untreated.
- I have a history of severe PMS.
- I do not have a strong support system to help me if I need it.
- I have a history of drug or alcohol abuse.
- People have told me I'm a perfectionist.
- During the past year, I have experienced an unusual amount of stress (e.g., a move, job loss, divorce, or the loss of a loved one)
I find this list so interesting and wish I'd had it back in the day. For example, the perfectionist issue -- who would have thought that being a perfectionist could raise your risk for having PPD? But I can totally see it: that overwhelming feeling that you're not doing everything you should be doing for your newborn, the household, and the other kids who need your attention is devastating to a perfectionist who is used to having everything put together perfectly.
And what about a history of severe PMS? That's such a huge and common issue. According to the American College of Obstetricians and Gynecologists, approximately 40% of women experience PMS on a consistent basis. And nearly 85% of women will experience one or more of the symptoms over the course of their reproductive life. But do these women realize that having PMS may be a factor that puts them at higher risk for PPD?
And of course the support system issue is a biggy and I've written about it before because it's something I've experienced personally -- both the lack of support and having support. For me, not having a strong support system was the overriding factor when I suffered from PPD. I felt like I was screaming out for help but no one was hearing me. It was horrible feeling so utterly alone and it nearly did me in. But with my subsequent pregnancy here in Arizona, when I had an overwhelmingly strong support system in place, my postpartum was wonderful. When pregnant women are busy filling a nursery with furniture, bedding, diapers, and other essentials, what they really need to be doing is filling up their support system with friends and family who are willing to pitch in with meals, household help, supportive phone calls, shopping assistance, birth announcements and more.
I could write all day about the above list. Most importantly, I want to applaud The Postpartum Stress Center for creating its eye-opening PPD Risk Assessment During Pregnancy. I think every pregnant woman should take a look at it. If you're not familiar with The Postpartum Stress Center, it was founded in 1988 by the wonderful Karen Kleiman, MSW, and received Postpartum Support International's Jane Honikman Award in 2003.
It’s been a year since schools nationwide closed in response to COVID-19, and the shift to virtual learning has been extremely difficult for both teachers and students. Before 2020, most teachers didn’t have a lot of experience with teaching in virtual settings, so they were forced to learn as they went. In many cases, they had to learn new tools and develop new strategies in real time, all while helping their students make the same transition.
Despite these challenges, many teachers found ways to succeed and thrive in their new environments—so we set out to learn more about what’s working well for them and what they’ve learned after a year of virtual learning. We surveyed educators and asked them about their experiences, and they gave us keen insights into their biggest wins, their hardest obstacles to overcome, and the unique advantages of having to pivot to virtual learning.
“Virtual teaching during a pandemic: Lessons learned” covers the four key findings from our educator survey in detail. Here’s a summary.
1. Many educators are rising to the challenges of virtual learning by focusing on finding new, creative solutions to best support student learning.
Notably, there was no single tool or practice that stood out as ideal for all students; rather, two key messages emerged: Teachers took strength from their own resilience as they’ve found new ways to build teaching momentum, and they’ve been propelled by a growth mindset and willingness to try new things. Teachers told us that they miss teaching in person and the emotional energy that comes from their interactions with students. Those who felt most successful have worked to find new ways to foster similar or analogous interactions virtually. Teachers also shared with us that their schools and districts are supporting their willingness to try new approaches and solutions by loosening local requirements—so they’re playing a stronger role in planning what to teach and assess in their classrooms. That’s had positive outcomes for both student and teacher morale.
2. Many educators are finding a silver lining: They can use digital tools and virtual learning environments to have a positive impact on teaching and learning in unexpected ways.
The overwhelming majority of teachers we surveyed agreed on one key thing—the COVID-19 pandemic changed their mindset around using digital tools for learning. They described how they quickly went from using technology as a way to replicate in-person learning to using it to create learning opportunities that can only happen in virtual environments. By leveraging an ecosystem of different tools to organize teaching and learning, engage students, check for understanding, communicate with students and families, and address diverse learning needs, many teachers are seeing new benefits they might not have realized so easily in a physical classroom. In fact, the majority of educators are more comfortable with digital tools now compared with before the pandemic and anticipate that they will use the ones they’ve grown to value more frequently when consistent face-to-face learning is possible again.
3. Most educators are informally advancing their digital literacy skills by using district-provided resources or examples from their professional learning.
In our survey, teachers told us that, rather than finding their own tools, they typically rely on the applications and systems their districts provide—and that they spend a lot of time learning how to make the best use of them. That can create a burden, and simultaneously, they’ve discovered an opportunity to support one another: As they share and adapt their most effective techniques, they’re able to try new teaching strategies in their own virtual classrooms. In the same way that we’ve seen teacher resilience and adaptability create momentum for action, shared teaching experiences have created new peer-to-peer learning opportunities.
4. Educators are reconsidering how they approach and implement assessment in the context of virtual learning.
Our teacher respondents were clear: Assessment has been particularly difficult but also especially important in the context of the pandemic and virtual learning. While interim assessments have been helpful to spotlight where their students are and to identify unfinished learning, teachers have highlighted the even greater need for formative assessment practices. They’ve expressed a desire to have actionable data so they can respond to their students and differentiate instruction. They told us that in the context of virtual learning, they’re relying on formative assessment practices because they’re the most effective way to evaluate student learning and social-emotional well-being at any given time. Assessment data has also been key for helping students set learning goals that are clear and aligned with both student needs and local standards.
Read more
Across all the educators we surveyed, we saw a wide variety of responses that reflect the different experiences educators are having. Yet one thing was clear from all of our respondents: They’re committed to their students, and they’re finding ways to make learning happen, even in the most challenging situations. To learn about our findings in more detail, download “Virtual teaching during a pandemic: Lessons learned.”
For additional information on professional learning opportunities through NWEA, read “Why investing in professional learning is essential for educators—and students, too” and “Classroom ready: Be there for your teachers with assessment support, curriculum connections, and professional learning.”
Steve Underwood, manager on the Professional Learning Design team at NWEA, contributed to this post.
Beijing accused G7 members of bias, of ignoring the facts and of irresponsibility on Tuesday as it rejected a statement made by the group targeting, but not naming, China over maritime tensions.
The accusations were leveled by Hong Lei, spokesman for the Chinese Foreign Ministry, at a news conference.
"What the G7 members have said and done are too far from the facts," Hong said.
"China strongly urges the G7 members to respect the facts, discard bias, stop making irresponsible remarks and focus on things that can really help to properly handle and resolve the disputes and contribute to regional peace and stability."
On Monday, leaders of the Group of Seven countries expressed concerns over tensions in the East and South China seas and called for nations to abide by international law. Their comments marked the end of a two-day summit in southern Germany.
"We strongly oppose the use of intimidation, coercion or force, as well as any unilateral actions that seek to change the status quo, such as large-scale land reclamation," the G7 leaders said, without naming countries. Many observers interpreted the statement as targeting China.
In his reply, Hong stressed that construction work by China on the Nansha Islands in the South China Sea is an act within its sovereignty with which no other countries have the right to interfere.
He also said the facilities are mostly for civilian use to better fulfill international obligations such as maritime navigation and rescue work.
Zhou Yongsheng, a professor of Japanese studies at China Foreign Affairs University, said the fact that the statement did not name China proves "inner conflict and struggle" within G7 is continuing. It was also a disappointment for Japanese Prime Minister Shinzo Abe, he said.
"Some G7 members, such as Germany and France, obviously do not want to be 'kidnapped' by Japan to sacrifice their friendship with China over something that really does not affect their interests," Zhou said. "As hard as Abe has tried to make this a big issue on the international stage, he has failed."
Abe had been widely reported as making lobbying efforts to put the maritime issues on the G7 summit agenda.
Jia Xiudong, a senior international affairs researcher at the China Institute of International Studies, said none of the G7 members have the right to meddle in the situation in the South China Sea, as they are not directly involved.
"Making statements like this and ignoring facts and justice will not enhance the voice of the G7 on the global political stage. Rather, it will diminish its image and weaken the group's influence," Jia said.
"No one within the group really cares about the South China Sea-not even Japan. This purely political move will not help the G7 to regain the reputation and influence it has lost to emerging organizations like the G20."
Jia said the G7 statement could make the South China Sea situation more complicated, as some parties that are directly involved may take it as a sign of an endorsement of their activities within Chinese territory. | https://africa.chinadaily.com.cn/world/2015-06/10/content_20955440.htm |
Correctional theories: currently, the United States correctional system forms an important part of the criminal justice system. The system's conception of justice, punishment and correction is made up of a combination of retributive, denunciation and utilitarian theories. Corrections • What are the causes and consequences of America's policy of mass imprisonment? • What is the American prison/industrial complex, and how can it be explained? Welcome to the companion website: this site is intended to enhance your use of Correctional Theory, Second Edition, by Francis T. Cullen and Cheryl Lero Jonson. The materials on this site are geared toward increasing your effectiveness with this material and maximizing the potential for your students to learn. Encyclopedia of Community Corrections, Shannon M. Barton-Bellessa, May 1, 2012, social science, 520 pages: in response to recognition in the late 1960s and early 1970s that traditional …
In criminal justice, particularly in North America, correction, corrections, and correctional are umbrella terms describing a variety of functions typically carried out by government agencies and involving the punishment, treatment, and supervision of persons who have been convicted of crimes. Correctional theory - introduction: rehabilitation is firmly entrenched in the history of corrections in the United States; penitentiaries, for example, were formed in 1820 with the belief that offenders could be morally reformed (Cullen & Jonson, 2012, pp. 27-28). Substantively, this is a good survey of correctional theory; stylistically, I tired of the authors' informal style and personal anecdotes (the tangent on Cullen's tennis-inspired dog names was particularly perplexing).
In this presentation, Professor Robert M. Worley provides an overview of the various correctional theories. He also provides a discussion of evidence-based practices and explains how these have … Get this from a library: Correctional Theory: Context and Consequences [Francis T. Cullen; Cheryl Lero Jonson] -- this accessible book identifies and evaluates the major competing theories used to guide the goals, policies, and practices of the correctional system. Students who are interested in studying corrections can consider the criminal justice BS (institutional theory and practice) and/or the criminal justice management degree programs; the college will continue to offer a minor in correctional studies.
You will gain an understanding of the evolution of correctional philosophies and the correctional system in the United States; the corrections process is a result of society responding to deviance. Click on the titles below for an up-to-date list of corrections and revisions; each title links to an index in PDF which lists the correction, the page number and a link to the corrected page with the revision highlighted. Correctional theory studies how and why we punish people in society; these theories look at the institutions and structures of punishment, how they are justified, and how well they accomplish what …
Correctional Administration: Integrating Theory and Practice provides students with a practical understanding of correctional operations. Touching briefly on the history and background of corrections, its focus lies in teaching students the purpose and practice of working in a corrections facility, along with the challenges that face its staff and … Introduction: criminological theory and community corrections practice. The purpose of this chapter is to provide students with a brief overview of the major theories of crime causation, focusing on the implications of current criminological theories (of …).
Psychodynamic trait theory was born out of Sigmund Freud's desire to explain the forces behind all of human behavior, and thus became a tool with which to gauge destructive behavior and its appropriate punishment; according to Freud, there are three forces at play in the human psyche. One problematic element of the social-contract theory of punishment is the … perhaps even less than human; under this view, correctional treatment is infi… A prison, also known as a correctional facility, jail, gaol (dated, British and Australian English), penitentiary (American English), detention center (American English), or remand center, is a facility in which inmates are forcibly confined and denied a variety of freedoms under the authority of the state.
FIELD OF THE INVENTION
The present invention is related to a mattress, and more particularly related to an inflatable mattress.
BACKGROUND
The human history of the invention of beds is long; nevertheless, new techniques are still constantly being developed. After all, the proportion of time human beings spend sleeping is very large. In addition, there are many different kinds of mattresses, such as spring beds, memory mattresses and tatamis. Other kinds of mattresses further include water beds and inflatable mattresses.
For inflatable mattresses, by inflating interior inflatable spaces, certain predetermined shapes can be maintained, so as to provide users with experiences similar to those of conventional mattresses. Since inflatable mattresses can greatly reduce their volume by deflating, they offer considerable convenience in storage and delivery. In addition, since relatively less material is required for inflatable mattresses, they have strong advantages over conventional mattresses in manufacturing cost.
Nevertheless, conventional inflatable mattresses still pay more attention to convenience, and there is considerable room for improvement in the overall experience of using them.
SUMMARY OF THE INVENTION
According to the first embodiment of the invention, there is an inflatable mattress. The inflatable mattress has an upper layer structure, an inflatable middle layer structure and a bottom layer structure.
The upper layer structure comprises a basic fabric layer and a covering layer. The basic fabric layer is woven from a plurality of fabric fibers; the number of weft and warp fibers per square inch is higher than 100. The covering layer and the basic fabric layer stick to each other by glues, wherein the weight of the glues is higher than 20 grams per square yard and lower than 80 grams per square yard. The basic fabric layer orients toward the covering layer, pressing and embedding into it and constituting a fabric surface laminating material.
The so-called pressing and embedding of the basic fabric layer into the covering layer means that, by pressing the basic fabric layer and the covering layer together, the uneven, fiber-covered surface of the basic fabric layer directly or indirectly presses the covering layer via the glues, so that the surface of the covering layer develops a certain degree of concavity, thus producing a more stable pressing result.
A first surface of the inflatable middle layer is fixedly connected to the upper layer structure. Also, the inflatable middle layer comprises at least one air chamber configured to provide a predetermined shape after expansion during the inflation. The outer side of the inflatable middle layer comprises the fabric surface laminating materials.
A second surface of the inflatable middle layer is fixedly connected to the bottom layer structure, and the bottom layer structure comprises the fabric surface laminating materials.
The outward-facing portions of the upper layer structure, the inflatable middle layer and the bottom layer structure are all covered with the fabric surface laminating materials.
In one case, the outer surfaces of the upper layer structure, the inflatable middle layer and the bottom layer structure are all covered with the fabric surface laminating materials.
The inflatable middle layer can have at least one drawstring. The drawstring connects respectively to the upper layer structure and the bottom layer structure, so as to ensure the predetermined shape of the inflatable mattress after expansion during inflation.
The upper layer structure can comprise a top surface portion. The top surface portion covers more than 70% of the area of the upper layer structure facing up, and the top surface portion can connect to the bottom layer structure by a plurality of drawstrings; the drawstrings are Y-type drawstrings or round hole drawstrings.
The fabric fiber of the basic fabric layer can comprise polyester materials, and the weight per unit length is different for the fabric fiber in the weft direction and the fabric fiber in the warp direction.
The basic fabric layer has more than 50 and less than 300 fibers per square inch in the weft and warp directions. For example, the basic fabric layer can be constituted by polyester fibers of 75D×75D or 110D×80D. "D" (denier) is the standard industry unit for fiber fineness, equal to 1 gram per 9 kilometers of fiber.
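As a quick worked illustration of the denier unit just defined (this conversion is a sketch of ours based on the stated 1 gram per 9 kilometers definition, not part of the original disclosure):

```python
# Illustrative only: convert denier (mass in grams per 9000 m of fiber)
# to linear density in grams per meter.
def denier_to_g_per_m(denier: float) -> float:
    """Convert a denier value to grams of fiber per meter."""
    return denier / 9000.0

# Fiber finenesses mentioned in the text above:
for d in (75, 80, 110):
    print(f"{d}D = {denier_to_g_per_m(d):.4f} g/m")
# 75D = 0.0083 g/m, 80D = 0.0089 g/m, 110D = 0.0122 g/m
```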
The materials of the covering layer can include PVC. The glues can withstand drawing forces larger than 20 newtons per 3 centimeters; alternatively, the glues can withstand drawing forces of more than 10 newtons and less than 50 newtons per 3 centimeters. The material of the glues can be PU glue with viscosity between 100000 CPS and 350000 CPS.
The thickness of the covering layer can be between 0.15 mm and 0.9 mm. The thickness of embedment of the basic fabric layer into the covering layer can be between 0.06 mm and 0.1 mm.
The affordability for tensile strength of the fabric surface laminating materials is larger than 200 newtons per 2.54 centimeters, the affordability for tear strength of the fabric surface laminating material is larger than 4 newtons, and the affordability for peel strength is larger than 40 newtons per 3 centimeters.
The covering layer is doped with a flame retardant material, so that the ignition point of the fabric surface laminating materials can be larger than 350 degrees Celsius.
The covering layer can have embossments. The embossments not only increase the strength of the material and reduce the manufacturing costs, but also allow the covering layer to expand more easily during treating processes.
The embossments of the covering layer can have a plurality of protruding embossment stripes cross-distributed on the surface of the covering layer. For example, the plurality of protruding embossment stripes can constitute a hexagonal honeycomb structure. In addition, a plurality of protruding embossment blocks can also be distributed on the interspace areas separated out by the protruding embossment stripes.
The embossments can be distributed on one side or on both sides of the covering layer.
The inflatable middle layer has a surrounding sheet and a plurality of surrounding side drawstrings. The surrounding side drawstrings are welded with the surrounding sheet to form a multi-layer structure in the inflatable middle layer.
According to another embodiment of the invention, there is provided a method for manufacturing an inflatable mattress. The inflatable mattress has an upper layer structure, an inflatable middle layer and a bottom layer, and the upper layer structure, the inflatable middle layer and the bottom layer have fabric surface laminating materials. The method includes the following steps. Flatten a fabric surface material. Apply glue to the heated flat-lying fabric surface material, then dry and cool it. Flatten a PVC material to conduct the preheating. Press and form the fabric surface material and the preheated PVC material into the fabric surface laminating materials by a double-fitting roller. Manufacture the outer surfaces of the upper layer, the inflatable middle layer and the bottom layer using the fabric surface laminating materials. Use drawstrings to connect the upper layer structure and the bottom layer structure.
In one embodiment, a waterproof treatment is applied to the basic fabric layer. The fabric layer can also be doped with silver materials to prevent the occurrence of unpleasant odors.
In one embodiment, the inflatable mattress can further include a buckling structure, configured to settle and fix a removable upper layer mattress above the upper layer structure. The removable upper layer mattress is made of transparent materials, and the removable mattress has a buckle-in structure corresponding to the buckling structure.
In one embodiment, the removable upper layer mattress can be a thin pad constituted of a natural material.
In one embodiment, the upper layer structure further comprises a memory buffer layer. The buffer layer is set below the basic fabric layer and is configured to memorize the stature of a user, in order to provide a surface shape more consistent with the user's body.
By combinations of the above-mentioned embodiments and corresponding characteristics, and according to numerous repeated experiments and tests, the manufacturing method and related techniques herein can produce an inflatable mattress that reduces costs while providing comfort, thus bringing substantial technical effects.
DETAILED DESCRIPTION OF THE INVENTION
Refer to FIG. 1a. FIG. 1a exemplifies an embodiment of an inflatable mattress 10 according to the invention.
In FIG. 1a, the inflatable mattress 10 has an upper layer structure 101, an inflatable middle layer 105 and a bottom layer structure 102. The upper layer structure 101 has a top surface portion, that is, the inner rectangular region of the upper layer structure 101 in FIG. 1a. The top surface portion covers more than 70% of the area of the upper layer structure facing up, and the top surface portion can connect to the bottom layer structure 102 by a plurality of Y-type drawstrings.
The upper layer structure 101 can include a basic fabric layer and a covering layer. The basic fabric layer is woven from a plurality of fabric fibers; the number of weft and warp fibers per square inch is higher than 100. The covering layer and the basic fabric layer stick to each other by glues, wherein the weight of the glues is higher than 20 grams per square yard and lower than 80 grams per square yard. The basic fabric layer orients toward the covering layer, pressing and embedding into it and constituting a fabric surface laminating material.
The so-called pressing and embedding of the basic fabric layer into the covering layer means that, by pressing the basic fabric layer and the covering layer together, the uneven, fiber-covered surface of the basic fabric layer directly or indirectly presses the covering layer via the glues, so that the surface of the covering layer develops a certain degree of concavity, thus producing a more stable pressing result.
The inflatable middle layer 105 can be fixedly connected to the upper layer structure 101 on a first surface of the inflatable middle layer by the drawstrings 106, and fixedly connected to the bottom layer structure 102 on a second surface of the inflatable middle layer.
The inflatable middle layer 105 can have one or more inflation holes, through which the inflatable middle layer is inflated by an exterior inflation pump. Another approach is to integrate the inflation pump with the inflatable middle layer 105, providing inflation power by plug-in or other means. The inflatable middle layer 105 comprises at least one air chamber configured to provide a predetermined shape after expansion during inflation. In practical manufacturing, more than two air chambers can be designed; these air chambers maintain a certain independence through corresponding mechanisms (such as a unidirectional valve), so that one air chamber can still maintain a certain supporting force when another air chamber leaks. In addition, if the mattress is designed to be used by two people, the corresponding air chambers can be designed with a buffer space between the two sets. In this way, even if a first user lying on one side turns over, a second user sleeping on the other side will not be directly affected by excessive vibrations. The buffer space can itself be constituted by vibration-isolating materials, such as an air chamber with lower pressure or sponges. Alternatively, the gases in the two sets of air chambers do not connect with each other, so as to generate an effect similar to the independent cylinders of conventional mattresses.
In addition, the periphery of the inflatable middle layer 105 has a surrounding side drawstring 103. The side surrounding sheet outstretches a surrounding three-layer structure by at least one side drawstring 103. The side drawstring can be made of plastic materials, fabric fiber materials or other connection materials which can provide a certain tension and supporting force. By way of high frequency welding, pasting, etc., the side surrounding sheet is connected to other portions of the inflatable mattress 10, so as to maintain the inflatable mattress 10 in a predetermined shape during inflation.
Refer to FIG. 1b. FIG. 1b exemplifies another embodiment of the inflatable mattress according to the invention.
In FIG. 1b, the inflatable mattress 11 has an upper layer structure 111, an inflatable middle layer 115, and a bottom layer structure 112. The upper layer structure 111 has a top surface portion, that is, the inner rectangular region of the upper layer structure 111 in FIG. 1b. The top surface portion covers more than 70% of the area of the upper layer structure facing up, and the top surface portion connects to the bottom layer structure 112 by a plurality of round hole drawstrings 116.
A first surface of the inflatable middle layer 115 is fixedly connected to the upper layer structure 111. A second surface of the inflatable middle layer 115 is fixedly connected to the bottom layer structure 112.
In addition, the periphery of the inflatable middle layer 115 has a surrounding side drawstring 113. The side surrounding sheet 114 outstretches a surrounding three-layer structure by at least one side drawstring 113. The side drawstring 113 can be made of plastic materials, fabric fiber materials or other connection materials which can provide a certain tension and supporting force. By way of high frequency welding, pasting, etc., the side surrounding sheet 114 is connected to other portions of the inflatable mattress 11, so as to maintain the inflatable mattress 11 in a predetermined shape during inflation.
In other words, the inflatable middle layers 105, 115 can be constituted by various kinds of different structures. Besides, since the top surface portions of the upper layer structures 101, 111 are the main parts in touch with users, the top surface portion can include structures constituted by the fabric surface laminating materials, so that the surfaces oriented toward users are fabric surfaces. Of course, based on considerations of design and convenience of manufacturing, this kind of fabric surface laminating material can also extend to the whole upper layer structures 101, 111, or even extend to the side surrounding sheets 103, 113. If one hopes the mattresses can be designed to be used on two sides, the bottom layer structures 102, 112 can be directly designed to be the same as the upper layer structures 101, 111.
Refer to FIG. 2a. FIG. 2a exemplifies a schematic drawing of one embodied way of the fabric surface laminating materials according to the invention. In FIG. 2a, this kind of fabric surface laminating material 20 has a basic fabric layer 201 and a covering layer 203. The basic fabric layer 201 and the covering layer 203 are fixedly connected by the glues 202.
The basic fabric layer 201 can be woven from a plurality of fabric fibers in weft (or warp) or other geometrical structural arrangements. Different materials and weaving methods can provide different surface characteristics. For example, by different materials and weaving methods, the basic fabric layer 201 can be given different colors, patterns, heat dissipation, heat preservation, and senses of touch.
These fabric fibers generate surfaces of different flatness according to their characteristics. In principle, during the process of pressing, the basic fabric layer 201 presses and embeds into the covering layer 203. Please note that even for a covering layer 203 which looks flat in FIG. 2a after the processes of sticking by the glues and press laminating on the surface, even tiny or invisible laminations belong to the "embedment into the uneven surface of the fabric layer" herein.
FIG. 2b exemplifies a more obvious squeezing and embedment phenomenon. The surface of the basic fabric layer 211 of the fabric laminating material 21 has obvious uneven ups and downs. Likewise, one conducts the lamination between the basic fabric layer 211 and the covering layer 213 by the glues. From FIG. 2b, one can clearly see the basic fabric layer 211 pressing and embedding into the surface of the covering layer 213, so that the two (211, 213) join more closely together.
In one embodiment, the covering layer can have a polarity, and by this characteristic, the covering layer can be fixed to the inflatable middle layer by way of high frequency welding. For example, the covering layer can use PVC materials. Compared to similar materials, PVC is easily processed, recyclable for continued use, high in strength, and superior in geometrical stability.
In one embodiment, the basic fabric layer has more than 50 and less than 300 fibers per square inch in the weft and warp directions. For example, the basic fabric layer can be constituted by polyester fibers of 75D×75D or 110D×80D, wherein "D" (denier) is the standard industry unit for fiber fineness, equal to 1 gram per 9 kilometers of fiber.
In one embodiment, the glues can withstand drawing forces larger than 20 newtons per 3 centimeters. In one embodiment, the material of the glues is PU glue with viscosity between 100000 CPS and 350000 CPS. For example, PU glues with viscosity between 100000 CPS and 350000 CPS, or between 40000 CPS and 60000 CPS, can be adopted. PU is polyurethane, and CPS (centipoise) is a measurement unit for viscosity in the industry.
Different amounts of glue are set according to different standards of laminating firmness. The higher the standard of laminating firmness, the greater the amount of glue, and the higher the cost. Preferred ranges are, for example, between 20 N/3cm and 100 N/3cm, or between 10 N/3cm and 50 N/3cm, with the amount of glue between 10 g/Y2 and 100 g/Y2. "N" represents newtons, and g/Y2 represents grams of glue per square yard.
In one embodiment, the thickness of the covering layer is between 0.15 mm to 0.9 mm, the thickness of embedment of the basic fabric layer into the covering layer is between 0.06 mm to 0.1 mm, and the total thickness of the upper layer structure is between 0.2 mm to 1 mm. It is an option as long as the thickness of the covering layer satisfies the high frequency welding and no leakage. The second lamination can all be under processing.
In one embodiment, the affordability for tensile strength of the basic fabric layer is larger than 200 newtons per 2.54 centimeters, the affordability for tear strength of the fabric surface laminating material is larger than 4 newtons, and the affordability for peel strength of the fabric surface laminating material is larger than 40 newtons per 3 centimeters. There can be different settings according to different specifications, materials, and glue formulas of the basic fabric layer. Also, the basic fabric layer can be doped with flame retardant material. For example, by doping with flame retardant material or applying a fireproof treatment, the ignition point of the basic fabric layer becomes larger than 300 degrees Celsius.
In FIG. 3a, the covering layer can have embossed patterns. The embossed patterns of the covering layer can be pressed out by one side of an embossing roller, or by rolling pressing on both sides, thus generating the corresponding embossed patterns. In FIG. 3a, one can press out horizontal and vertical parallel embossed stripes 311 on the covering layer 31, manufactured from materials such as PVC, by the roller. Compared to the other portions 312, the embossed stripes 311 can have a certain degree of protrusion. By the design of the embossed stripes 311, one not only creates higher stability and structural drawing forces, but also generates better effects with respect to the cementing of the basic fabric layer.
In FIG. 3b, the covering layer 32 has protrusive embossed patterns 321. Different from the design of the embossed stripes with staggered arrangements in the horizontal and vertical directions in FIG. 3a, the embossed stripes in FIG. 3b constitute hexagonal shapes similar to honeycombs, bringing better structural stability. From the point of view of manufacturing, this means one can manufacture a thin layer having the same strength using thinner materials. This approach can further reduce the manufacturing costs.
In FIG. 3c, the covering layer 33 also has protrusive embossed blocks 331 distributed between the embossed stripes arranged in the horizontal and vertical directions (i.e., the black geometrical block patterns as marked). Please note that the embossed patterns can be manufactured on both sides or on only one side of these covering layers. The embossed patterns can be protrusive, concave, or locally protrusive and locally concave.
Refer to FIG. 4. FIG. 4 exemplifies a method for manufacturing the mentioned fabric surface laminating materials. First, flatten the fabric surface material (step S1). Usually, for the convenience of transportation, fabric surfaces are preserved by way of rolling up.
Next, apply glue to the heated flat-lying fabric surface, then dry and cool it (step S2). In addition, flatten the PVC material acting as the covering layer to conduct the preheating (step S3). Next, press and form the fabric surface material and the preheated PVC material into the fabric surface laminating materials by a double-fitting roller (step S4). In other words, by the pressure of the roller, the PVC material of the covering layer is brought into closer fixation with the fabric surface material acting as the basic fabric layer by the glues.
Finally, one cools the fabric surface laminating materials by a cooling wheel and conducts accumulator winding (step S5). The manufactured fabric surface laminating materials can later be welded or laminated with other elements, so as to constitute various kinds of inflatable mattress products.
In addition, the PVC covering layer can have embossed patterns on both sides, and the kinds of lines are not limited. Different patterns can be pressed out by different sleeves. The purpose is to decrease the thickness of the material and to make it more easily flattened, smoother, and more easily processed.
In one embodiment, a waterproof treatment is applied to the basic fabric layer. In addition, a brushed waterproof treatment can be applied to the surface of the basic fabric layer. The basic fabric layer can be doped with silver materials, so as to avoid the occurrence of unpleasant odors.
Refer to FIG. 5a. FIG. 5a exemplifies a schematic drawing of fixing the connection of the upper layer structure of the inflatable mattress by Y-type drawstrings 512. Since the Y-type drawstrings 512 are divided into two branches at the portion of connection with the upper layer mechanism 511, and look like the shape of the letter Y, they are named Y-type drawstrings 512. In fact, the number of branches of the drawstrings can be even more than two, depending on the design and practical requirements.
In addition, refer to FIG. 5b. FIG. 5b exemplifies a schematic drawing of fixing the upper layer structure of the inflatable mattress by round hole drawstrings 521. The round hole drawstrings 521 draw the upper layer structure 522 into a structure similar to a cylinder, generating a certain drawing force, and thus ensure the shape of the inflatable mattress, corresponding to the embodiments in FIG. 1a and FIG. 1b. Of course, it should be noted that technical staff can also use other ways to ensure the shape of the inflatable mattress.
Refer to FIG. 6. In one embodiment, the inflatable mattress 62 can further include one or more buckling structures 622, 624, configured to settle and fix a removable upper layer mattress 61 above the upper layer structure. The removable upper layer mattress 61 is made of transparent materials, and the removable mattress has a buckle-in structure 612, 614 corresponding to the buckling structures 622, 624.
In one embodiment, the removable upper layer mattress can be a thin pad constituted of a natural material, such as a straw mat or tatami materials.
In one embodiment, the upper layer structure further comprises a memory buffer layer, such as a memory buffer layer constituted by memory sponges. The buffer layer is set below the basic fabric layer and is configured to memorize the stature of a user, in order to provide a surface shape more consistent with the user's body.
By combinations of the above-mentioned embodiments and corresponding characteristics, and according to numerous repeated experiments and tests, the manufacturing method and related techniques herein can produce an inflatable mattress that reduces costs while providing comfort, thus bringing substantial technical effects.
BRIEF DESCRIPTIONS OF THE DRAWINGS
FIG. 1a is a schematic drawing of an inflatable mattress according to the first embodiment of the invention.
FIG. 1b is a schematic drawing of an inflatable mattress according to the second embodiment of the invention.
FIG. 2a is an embodiment of fabric surface laminating materials according to the invention.
FIG. 2b is an embodiment of fabric surface laminating materials according to the invention.
FIG. 3a is an embodiment of a first kind of embossments of the PVC layer according to the invention.
FIG. 3b is an embodiment of a second kind of embossments of the PVC layer according to the invention.
FIG. 3c is an embodiment of a third kind of embossments of the PVC layer according to the invention.
FIG. 4 is a flow diagram of manufacturing the fabric surface laminating materials according to the invention.
FIG. 5a is a schematic drawing of a drawstring of the inflatable mattress according to the invention.
FIG. 5b is a schematic drawing of an embodiment of a drawstring of the inflatable mattress according to the invention.
FIG. 6 is a schematic drawing of another embodiment of the inflatable mattress according to the invention.
CREATIVE ENGAGEMENT STATEMENT
I am a committed and versatile individual who has a passion for working creatively alongside cultural institutions and communities to provide art outreach projects.
Art improves wellbeing by providing an outlet for people to express themselves. As a facilitator, I feel it is important that I am able to tailor my approach to each individual. By working creatively with many audiences I can allow the arts to become easily accessible to them. I achieve this through encouraging hands on and visual learning.
I specialise in creating and using sensory resources when working with all audiences, particularly under 5's along with those with additional needs.
When working with groups, my focus is on how the making process can explore the different ways people respond to objects and materials. This approach works well in a collaborative participatory setting, because it creates an environment where ideas can be shared and evolved. I do this alongside encouraging their individual approaches and celebrating the importance of having a unique response to a material or object - embracing and honing this into a piece of art.
When delivering sculpture based workshops, I aim to highlight the changes that are happening in the making of sculpture today and allow participants to break away from the more traditional approaches they may be used to. This stems from the heart of my practice being that any object I come across can become a material in which to make a sculpture.
My drawing and mark-making workshops open drawing up to a new, and in some cases scared, audience. I aim to make drawing accessible and open, encouraging participants to try new materials, new techniques and in some cases alternative approaches, with no fear.
"You have taught me that drawing is like magic" [a quote from a 7 year old participant in a mark making session.]
ARTIST STATEMENT
Seashells, plant pots, doorbells, glasses, bottle-tops, china dolls, plastic soldiers, light bulbs, umbrellas, champagne flutes, yoghurt pots, telephones, fake flowers, car tires, teapots, hair brushes, beads, road signs, forks and spoons, stuff.
This is the stuff we collect, treasure, discard, reuse, frame or hide.
My obsession with stuff has stemmed from a fascination with the illogicality of the misunderstood condition of hoarding; motivated by the personal relationship and research I have done into the condition.
Hoarding – "a suffocating yet fascinating, illogical, misunderstood condition that is a response to the dismissive, wasteful, thoughtless society which we find ourselves living in"
I am a sculptor whose practice combines the use of everyday objects with some traditional sculpture techniques. I work with found, discarded or hoarded objects, which I then bind together using materials such as string, tape, paint, plaster, wax and fabric, creating a sense of uniformity as well as ambiguity. During the spontaneous construction process, the associations I have with the objects sometimes influence how they are placed within the sculpture. However, through the application and combination of different materials, I strip these objects of their practical connotations. This reduces them to "things", allowing them to be redefined and reinterpreted. The combination of objects, materials and colours I choose leads to a tacky, twee, kitsch aesthetic.
In my sculptures I also revisit the childhood like fascination with funfairs and the loud, garish, colourful, moving shapes. It is these spaces that trigger temporary yet highly sensuous experiences, which these new works evoke. Despite being completely opposite, a claustrophobic hoarders home and a large open funfair, both create a highly sensuous experience, as the senses are stimulated but in very different ways.
My practice is viewer oriented: I aim to create a conversation between the sculpture and its audience. So it is important to me that the process is recognisable within the sculptures and that the viewer is able to investigate and dissect the component materials. The use of used objects within my work seeks to critically reflect on the throw-away society in which we live. By appropriating and reusing discarded objects, I give them a new lease of life that is charged with fun and surreal moments that stimulate the viewer's imagination.
My most recent work focuses on the material of plastic, a highly mass produced material that consumes everyday life. I run this exploration alongside the societal response where individuals attempt to go 'plastic-free'. I'm using old discarded 'penny' toys along with other highly recognisable plastic objects which also bring a nostalgic element to the work.
My paintings and large scale ink drawings explore form and abstraction. They depict the process of painting and how different mediums and materials interact with each other on the surface. Through layering, the paintings create continuous visual stimulus, as something new is always waiting to be seen.
IoMA STATEMENT
IoMA (Institute of Miniature Art) is a project I have been running from my studio since September 2019, with an online platform on Instagram. The gallery provides a space for small-scale artworks in any medium to be exhibited. IoMA has already hosted 11 exhibitions, which have included sculpture, painting, print, poetry, sound, ceramics, artist-books and textiles.
This gallery is open to submissions of all types of artwork created on a miniature scale, from which I curate exhibitions shown publicly online. This project is still in its very early stages, but I hope to be able to develop it further, with outreach potential, in the future. In February 2020 the gallery hosted a month-long residency with the Leeds-based Creative Mothers Project, which provided five evocative exhibits about motherhood.
Telescopes had been built to look at the stars, and astronomers weren’t going to ignore the closest example — our Sun.
Usually, telescopes are built to see objects that are too faint and far away to be easily visible. They’re constructed with giant mirrors or lenses so they can collect more light than the human eye can see on its own.
Telescopes designed to see the Sun, or “solar telescopes,” have the opposite problem — their target emits too much light. The Sun is extremely bright, and astronomers need to be able to filter out much of the light to study it. This means that the telescope itself doesn’t have to be extremely powerful; instead, the instruments attached to it do the heaviest work.
Solar telescopes are ordinary reflecting telescopes with some important changes. Because the Sun is so bright, solar telescopes don’t need huge mirrors that capture as much light as possible. The mirrors only have to be large enough to provide good resolution. | https://history.amazingspace.org/resources/explorations/groundup/lesson/eras/solar/index.php |
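To put rough numbers on that contrast, here is a small sketch; the 7 mm pupil, the 200 mm aperture and the ND5 filter strength are assumed illustrative values rather than figures from this article:

```python
# A night-sky telescope multiplies collected light; a solar telescope
# deliberately discards most of it. Both effects are simple ratios.
def light_gathering_ratio(aperture_mm: float, pupil_mm: float = 7.0) -> float:
    """How many times more light an aperture collects than a dark-adapted
    human pupil (assumed ~7 mm); the ratio scales with collecting area."""
    return (aperture_mm / pupil_mm) ** 2

print(f"200 mm telescope vs. eye: {light_gathering_ratio(200):.0f}x more light")

# A typical white-light solar filter (ND5 assumed here) transmits only
# 10^-5 of the incoming sunlight before it reaches the instruments.
transmission = 10 ** -5
print(f"ND5 filter passes {transmission:.3%} of the light")
```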
Students in "The Pulse of Art" with teacher Bobbi Coller and guest artist Hope Grayson.
In the summer of 1941 a Swiss engineer returns from a walk in the woods near his home in Geneva. He is astonished at the dozens of prickly seedpods—burrs—that stick, tenaciously, to his socks, trousers, shirt and dog. Out of curiosity he looks closer and sees that each burr is covered with hundreds of miniature hooks that catch in the fur and fabric of whatever brushes up to them. He wonders, "Would it be possible to construct a man-made version of these burrs as a way of attaching materials without glue, buttons or zippers?" The answer is yes, but it would take seven years to refine this leap of imagination. Finally, in 1948, George de Mestral brought his curiosity to the market as Velcro, an invention inspired by nature, and now so ubiquitous that we take it for granted.
In 1972 a young student drops out of Reed College but then hangs around campus and audits a course on calligraphy. Many years later, in a famous commencement speech at Stanford University, Steve Jobs pointed to that class as the inspiration for the Apple Macintosh's sophisticated typography. "If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts. And since Windows just copied the Mac, it's likely that no personal computer would have them."
Aside from being distinctive and charming origin tales of products that are indispensable in our lives today, there is an important commonality here: Both George de Mestral and Steve Jobs took time to notice things, to see with real curiosity, while immersed in an area outside their discipline. These men were, obviously, individuals with exceptional talents and insight. But can one teach the kind of observation that made their breakthroughs possible? A class at the Icahn School of Medicine at Mt. Sinai in New York City aims to do just that.
On a cool Monday evening this past October I walked into the Annenberg building of the Mt. Sinai Hospital complex on Madison Avenue, took the elevator to the 13th floor, and found myself in "The Pulse of Art," a class taught by the husband and wife team of Drs. Barry and Bobbi Coller.
A dozen medical students sat around a large conference table facing a screen, on which was projected an image of a young woman holding a dog. Painted in 1782 by George Romney, "Lady Hamilton as Nature" captures the teenage Emma Lyon glancing upwards towards the viewer, her hair flowing backward as if gently ruffled in a breeze. The students stare at the image for a few minutes. Bobbi Coller then asks them what they see. They are reticent — no one says anything. Unfazed, she asks if Emma looks healthy. A few students nod their heads but remain silent. Hesitatingly, one student observes that Lyon's hair is full and brown, and there's a lot of it. "Good," says Barry Coller. "Full hair is a sign of health. Anything else?" Slowly the class comments on her eyes, lips and blushing cheeks.
The next image appears on the screen. It's also a portrait of a woman. The subject is standing but faces away from the viewer, one hand draped against the side of her black evening dress, the other resting on a table behind her. "Talk to me about the color of her skin," says Barry. A student mentions that she's pale, almost cadaverous in her coloring. "That's right," says Bobbi. "And Madame Virginie Gautreau, who posed for the painter John Singer Sargent, was not the only one who looked like this in 1883." The Collers then launch into an explanation of the dangerous beauty trend, in the late 19th century, of upper-class women ingesting arsenic to whiten their skin.
A painting entitled 'Madame X' by American artist John Singer Sargent is discussed at the National Gallery in London, 20 February 2006. Photo credit: CARL DE SOUZA/AFP/Getty Images
Barry Coller, David Rockefeller Professor of Medicine at Rockefeller University, and Bobbi Coller, an art historian and curator, created "The Pulse of Art" to encourage medical students to look more closely, to see things about their patients that might be unexpected, but critical in making a diagnosis. Benefitting from the school's location right next to New York's Museum Mile, the Collers also take their class to the Guggenheim Museum and the rare book room of the New York Academy of Medicine, so that students can interact with original works and see where medical history and art come together. An opportunity to move from viewer to creator by picking up the brush themselves is always a highlight of the semester. This year artist Hope Grayson visited the class and taught them how to match their skin tones by mixing colors from an artist's palette.
"The Pulse of Art" is not the first class at a medical school to use art to refine students' observational skills. Nearly twenty years ago the Yale School of Medicine and the Yale Center for British Art launched a program that introduced all first-year medical students to the practice (and joy) of active observation with subject matter entirely outside what they study in their anatomy or physiology classes.
The Academy for Medicine and Humanities at the Icahn School of Medicine goes even further. Formed in 2012 as part of the Department of Medical Education at Icahn, the academy incorporates music, writing, philosophy and visual art into the standard medical school curriculum. While most of the courses are elective, those students who embrace what other disciplines can teach them about medicine also advance their careers in striking ways. In a discussion about the impact of the arts on his students, Barry Coller told me that active participants in the Academy graduate in the top 20% of their class, and go on to be chief residents and leaders in their medical specialties.
Credit: Bobbi Coller
Dr. Barry Coller and students in "The Pulse of Art" matching skin tones with paint.
Towards the end of the class I attended, a haunting but riveting image flashes onto the screen. "Death in the Sickroom" by Edvard Munch is a remembrance of the painter's family reacting to the death of his sister Sophie from tuberculosis. Students point out that each member of the family seems to be very alone in confronting the death of their loved one. "As the doctor in the room," asks Barry, "who would you speak to?" It is a powerful moment.
CLASSIFICATION OF GENETIC DISORDERS
I. SINGLE GENE DISORDER
II. MITOCHONDRIAL INHERITANCE
III. POLYGENIC INHERITANCE
I. SINGLE GENE DISORDER
A single gene disorder is the result of a single mutated gene. There are estimated to be over 4000 human diseases caused by single gene defects. Single gene disorders can be passed on to subsequent generations in several ways. Genomic imprinting and uniparental disomy, however, may affect inheritance patterns. The divisions between recessive and dominant types are not "hard and fast", although the divisions between autosomal and X-linked types are (since the latter are distinguished purely based on the chromosomal location of the gene). For example, achondroplasia is typically considered a dominant disorder, but children with two genes for achondroplasia have a severe skeletal disorder of which achondroplasics could be viewed as carriers. Sickle-cell anemia is also considered a recessive condition, but heterozygous carriers have increased immunity to malaria in early childhood, which could be described as a related dominant condition. When one or both partners in a couple suffer from or carry a single gene disorder and wish to have a child, they can do so through IVF, which allows PGD (pre-implantation genetic diagnosis) to check whether the fertilized egg has inherited the genetic disorder. The main inheritance patterns are:
1. AUTOSOMAL DOMINANT
2. AUTOSOMAL RECESSIVE
3. X-LINKED DOMINANT
4. X-LINKED RECESSIVE
5. Y-LINKED
1. AUTOSOMAL DOMINANT
Only one mutated copy of the gene is necessary for a person to be affected by an autosomal dominant disorder. Each affected person usually has one affected parent. There is a 50% chance that a child will inherit the mutated gene. Conditions that are autosomal dominant often have low penetrance, which means that although only one mutated copy is needed, a relatively small proportion of those who inherit that mutation go on to develop the disease. Examples of this type of disorder are Huntington's disease, Neurofibromatosis 1, Marfan syndrome, hereditary nonpolyposis colorectal cancer, and hereditary multiple exostoses, the last of which is a highly penetrant autosomal dominant disorder. Birth defects are also called congenital anomalies.
2. AUTOSOMAL RECESSIVE
Two copies of the gene must be mutated for a person to be affected by an autosomal recessive disorder. An affected person usually has unaffected parents who each carry a single copy of the mutated gene (and are referred to as carriers). Two unaffected people who each carry one copy of the mutated gene have a 25% chance with each pregnancy of having a child affected by the disorder. Examples of this type of disorder are cystic fibrosis, sickle-cell disease (also partial sickle-cell disease), Tay-Sachs disease, Niemann-Pick disease, spinal muscular atrophy, Roberts syndrome, and dry (also known as "rice-bran") earwax.
3. X-LINKED DOMINANT
X-linked dominant disorders are caused by mutations in genes on the X chromosome. Only a few disorders have this inheritance pattern, with a prime example being X-linked hypophosphatemic rickets. Males and females are both affected in these disorders, with males typically being more severely affected than females. Some X-linked dominant conditions such as Rett syndrome, Incontinentia Pigmenti type 2 and Aicardi Syndrome are usually fatal in males either in utero or shortly after birth, and are therefore predominantly seen in females. Exceptions to this finding are extremely rare cases in which boys with Klinefelter Syndrome (47,XXY) also inherit an X-linked dominant condition and exhibit symptoms more similar to those of a female in terms of disease severity. The chance of passing on an X-linked dominant disorder differs between men and women. The sons of a man with an X-linked dominant disorder will all be unaffected (since they receive their father's Y chromosome), and his daughters will all inherit the condition. A woman with an X-linked dominant disorder has a 50% chance of having an affected fetus with each pregnancy, although it should be noted that in cases such as Incontinentia Pigmenti only female offspring are generally viable. In addition, although these conditions do not alter fertility per se, individuals with Rett syndrome or Aicardi syndrome rarely reproduce.
4. X-LINKED RECESSIVE
X-linked recessive conditions are also caused by mutations in genes on the X chromosome. Males are more frequently affected than females, and the chance of passing on the disorder differs between men and women. The sons of a man with an X-linked recessive disorder will not be affected, and his daughters will carry one copy of the mutated gene. A woman who is a carrier of an X-linked recessive disorder (XRXr) has a 50% chance of having sons who are affected and a 50% chance of having daughters who carry one copy of the mutated gene and are therefore carriers. X-linked recessive conditions include the serious diseases Hemophilia A, Duchenne muscular dystrophy, and Lesch-Nyhan syndrome as well as common and less serious conditions such as male pattern baldness and red-green color blindness. X-linked recessive conditions can sometimes manifest in females due to skewed X-inactivation or monosomy X (Turner syndrome).
5. Y-LINKED
Y-linked disorders are caused by mutations on the Y chromosome. Because males inherit a Y chromosome from their fathers, every son of an affected father will be affected. Because females inherit an X chromosome from their fathers, female offspring of affected fathers are never affected. Since the Y chromosome is relatively small and contains very few genes, there are relatively few Y-linked disorders. Often the symptoms include infertility, which may be circumvented with the help of some fertility treatments. Examples are Male Infertility and hypertrichosis pinnae.
II. MITOCHONDRIAL
This type of inheritance, also known as maternal inheritance, applies to genes in mitochondrial DNA. Because only egg cells contribute mitochondria to the developing embryo, only mothers can pass on mitochondrial conditions to their children. An example of this type of disorder is Leber's Hereditary Optic Neuropathy.
III. MULTIFACTORIAL AND POLYGENIC (COMPLEX) DISORDERS
Genetic disorders may also be complex, multifactorial, or polygenic, meaning that they are likely associated with the effects of multiple genes in combination with lifestyle and environmental factors. Multifactorial disorders include heart disease and diabetes. Although complex disorders often cluster in families, they do not have a clear-cut pattern of inheritance. This makes it difficult to determine a person's risk of inheriting or passing on these disorders. Complex disorders are also difficult to study and treat because the specific factors that cause most of these disorders have not yet been identified.
Familial hyperparathyroidism-jaw tumor syndrome (HPT-JT) is an autosomal dominant inherited condition resulting in early onset parathyroid adenomas or hyperplasia, fibro-osseous jaw tumors, uterine adenofibromas or adenosarcomas, and occasionally, parathyroid carcinoma or Wilms tumors. Discovery and understanding of the genetic basis for this disease will provide opportunities for early diagnosis and treatment in affected families, as well as an understanding of sporadic tumors of these organs. We propose to identify and characterize that genetic basis. Our central hypothesis, backed by prior research, is that mutations of a single autosomal gene (HRPT2) cause all the manifestations of HPT-JT. To date, we have mapped HRPT2 to a 22 cM region of chromosome 1q25-q31 through genetic linkage analysis of 8 affected families. We have also identified ten potential tumor genes in this region. Our specific goals are to 1) further narrow this region, 2) determine if one of the ten candidate genes is HRPT2, and if one is, 3) characterize how the identified mutations affect the functioning of the gene product. If the ten fail, we will 4) search for new candidate genes by cDNA selection. More specifically, to narrow the region we will continue to use genetic linkage analysis and will expand our existing kindreds and seek as yet undiscovered HPT-JT families. In addition, we will use LOH studies and incorporate more closely spaced markers in the region. Concurrently, we shall continue to map our ten candidate genes to specific YACs or BACs in the region, producing a map order for the given genes to approximately 1 Mb resolution. For candidate genes mapping to the refined HRPT2 region, we will determine which genes are expressed in tissues affected in HPT-JT and screen those genes for mutations using single-stranded conformation polymorphism analysis (SSCP) and sequencing. Once HRPT2 is identified, we will begin to characterize how the mutations in our families affect the functioning of the gene product. If no candidate genes actually map to the refined region, or if no apparently detrimental mutations are found in the mapped genes, we will search for HRPT2 through cDNA selection experiments. The strengths of our study include the unique clinical resource, the limited number of potential tumor-related candidate genes to evaluate, the collective experience and laboratory resources of our collaborative group, and our clearly-defined strategy for localizing this important new tumor gene.
- Ability to maintain a high-level customer service culture reinforced by effective customer interaction and follow-ups.
- Ability to engage in the day-to-day activities relating to customer complaints
- Provide accurate information on how customers can get the best out of their equipment, while providing needed training on proper use.
- Provide effective customer service to both current and potential customers by following established processes.
- Ability to handle multiple tasks like customer complaints while documenting any issues raised by customers.
- Act as a source of information for customers by answering questions, escalating issues, following up, and providing instructions as needed.
- Identify and escalate priority issues, complete call logs, maintain and update customer data in CRM.
- Apply all necessary knowledge and skills on the job regarding phone interactions with our customers and internal partners.
- Effectively executing customer follow-up to encourage adherence to payment plans.
- Exceptional knowledge of and adherence to all company policies and procedures.
- Provide adequate customer education during each interaction with clients on products based on clients’ needs.
- Demonstrate strong understanding of company products and services, guidelines, usage, and product performance.
- Create Customer Service processes that can be replicated by new, junior technical support personnel.
- Manage and train new junior technical support personnel.
Operations Management
- Provide technical support to customers by tracking and following up on new installations as well as follow up calls.
- Coordinate with Technicians and managers to compile and update installation information in the database.
- Create Operations processes in conjunction with the Engineering team for troubleshooting that can be replicated by new, junior technical support personnel.
Bitcoin couldn't exist if any of these components didn't exist. Another dominant element among the key components of the blockchain ecosystem is the consensus algorithm. Blockchain technology is based on the promise of fully verified and highly secure transactions. While many would quickly assume that decentralization alone delivers this advantage, the actual mechanisms responsible for verifying transactions are consensus algorithms.
Consensus algorithms are fundamental processes in computer science that help distributed systems reach agreement on shared state. Their role in the blockchain ecosystem is largely focused on achieving reliability in a multi-node blockchain network, ensuring that all incoming blocks on the network have been verified and that the network remains secure. Several different consensus algorithms exist, and together they define one of the crucial components of the blockchain.
The best-known consensus algorithm in a blockchain ecosystem is proof of work. Miners use special software to solve the incredibly complex mathematical problem of finding a nonce that generates an accepted hash. Because the nonce is only 32 bits and the hash is 256 bits, there are roughly four billion possible nonce-hash combinations that may need to be tried before the correct one is found. When that happens, the miners are said to have found the golden nonce, and their block is added to the chain.
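To make this concrete, here is a minimal proof-of-work sketch in Python. It is illustrative only: real systems such as Bitcoin compare the hash against a numeric difficulty target rather than counting leading zero hex digits, and the layout of a real block header is more involved.

    import hashlib

    def mine(prev_hash, data, difficulty=4):
        # Try nonces until the SHA-256 hash of (previous block hash +
        # data + nonce) starts with `difficulty` zero hex digits.
        nonce = 0
        while True:
            block = prev_hash + data + str(nonce)
            h = hashlib.sha256(block.encode()).hexdigest()
            if h.startswith("0" * difficulty):
                return nonce, h   # the "golden nonce" and the block's hash
            nonce += 1

    nonce, block_hash = mine("0000abc", "alice pays bob 5")
    print(nonce)
    print(block_hash)   # this hash becomes prev_hash of the next block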
A blockchain is a growing list of records, called blocks, that are linked together using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (usually represented as a Merkle tree). The timestamp proves that the transaction data existed when the block was published to enter its hash. Since each of the blocks contains information about the previous block, they form a chain, and each additional block reinforces the previous ones.
Therefore, blockchains are resistant to modification of their data: once recorded, the data in any given block cannot be retroactively altered without altering all subsequent blocks. The Ethereum Virtual Machine (EVM), for example, shows how valuable such components are, as it executes the instructions that manage the state of digital smart contracts. There is also flexibility in choosing where to deploy blockchain network components, whether on-premises, in public clouds, or on hybrid cloud architectures. Each participant plays a unique role in the blockchain ecosystem, thus serving as a vital component of the ecosystem.
You can find other components in the blockchain ecosystem, which relate to these components in one way or another. The final item among the logical components of the blockchain ecosystem is the virtual machine. This is where you would need to identify the logical components that lay the foundation of the blockchain ecosystem. At the same time, a detailed picture of the components of the blockchain ecosystem, together with an understanding of their functions, helps one navigate the ecosystem easily.
A detailed description of the participants and logical components in the blockchain ecosystem clearly shows the complexity of the blockchain. The idea of getting most people to agree on the ownership and balance of an asset is called consensus, and consensus is an important component of every blockchain. We will take this opportunity to discuss the four main components of a blockchain, and then elaborate on the other actors and participants. However, the most striking feature of this answer to "what are the components of the blockchain" concerns decentralization.
Conducting a faithful analysis of the exact components of a blockchain, and knowing how to tell them apart, can be overwhelming. One notable appearance of virtual machines as components of the blockchain ecosystem is in the Ethereum blockchain ecosystem. In addition, the blockchain ecosystem also includes pure information consumers, who use the network without making any real contribution to the production of the goods and services in question.
Background: COPD is an irreversible, widespread disease whose prevalence is increasing dramatically. According to previous research, the quality of life (QoL) of patients with COPD is impaired but can be improved by pulmonary rehabilitation.
Aim: The aim of this study was to evaluate whether a six-week nurse-led multidisciplinary program for pulmonary rehabilitation in primary care had an effect on quality of life in patients with COPD over a three-year period.
Method: A quasi-experimental design was used to evaluate the program. The intervention group consisted of 40 patients who had participated in the program. The control group consisted of 24 patients who received traditional care. QoL was measured at baseline, after one year, and after three years using the Clinical COPD Questionnaire (CCQ). Statistical analysis of differences within the groups over the three years was performed by means of Friedman's test. The Mann-Whitney U test was used to analyze differences between the groups.
Results: There was no statistically significant difference between the groups at baseline, and no statistically significant difference in improvement between the groups over the three-year period for CCQ total. Neither was there any statistically significant difference within the control group. Over the three-year period, however, there was a statistically significant improvement within the intervention group for CCQ total (p=0.037) and CCQ functional state (p=0.026).
Conclusion: The rehabilitation program had an improving effect on QoL in patients with COPD within the intervention group over a three-year perspective.
Dore + Whittier is looking for architectural professionals who are passionate about making a positive difference in the lives of children, teachers, and other public servants. For 30 years our firm has been a leader in educational facility design with a strong focus on public K-12 schools and public safety projects. With offices in Newburyport, MA, and Burlington, VT, D+W is an award-winning full-service design and project management firm that fosters close collaboration at all levels. We pride ourselves in our commitment to designing buildings that inspire its users with exciting, functional, sustainable, and cost-effective design.
SUMMARY
As a Job Captain/Project Architect you will be an essential team member responsible for leading the technical advancement of a project and the production of construction documents, drawings, and specifications.
QUALIFICATIONS
- Bachelor’s Degree in Architecture or equivalent combination
- Licensed or working toward architectural licensure
- 5+ years’ experience in Architectural projects, with minimum 5 years’ experience managing production teams of multiple staff
- 5 years’ production experience in design and drafting using Revit software; an in-depth working knowledge of the current version of the software is required
- Excellent communications skills and proactive interaction with others. Ability to deal with multiple personality types and mentor others on the production team
- Strong attention to detail and accuracy
- Familiarity with Massachusetts Public Bidding Laws (MCPPO certification strongly preferred)
- Familiarity with Adobe Creative Suite or related software
- LEED certification preferred, but not required
ESSENTIAL DUTIES AND RESPONSIBILITIES
- Participate in all phases of a project, from assisting with the design concept, implementing the design concept during Schematic Design, leading the production team during Design Development, maintaining Contract Documents, to spearheading the administration of the construction.
- Serve as a principal decision maker related to issues such as application of building codes to the project and assuring code compliance, selection of construction materials, technical aspects of the building design, construction technology, and constructability. Consultation with the Project Manager and Project Designer will be required when significant impacts will occur.
- In concert with the Project Manager, hold responsibility for budgeting and managing the staff hours in which the project documents will be completed. Help develop production schedules, project document scope and hours budgets, creation of strategies to help assure completion of the documents within the hours available, monitoring progress as the work progresses, and adjustments to strategies.
- Establish direction for and supervise the production team via distribution of tasks to team members, review of produced work, and creation of redlines to describe and detail the work to be done or corrected. While not anticipated to be a principal Revit operator, knowledge of current Revit version is imperative to enable interaction with the production team and enable direct contribution of production in the completion of certain tasks during times of high demand.
- Serve as the primary point of coordination, serving as liaison with all team consultants. Activities include establishing and enforcing consultant delivery schedules, review of consultant progress drawings and specifications for coordination and completeness, preparing redlines and corrections, and coordinating those with the consultant. Coordinate the architectural responses to consultant issues with in-house CAD operators.
- Supervise production of all addenda and bid question responses during the bidding process. Establish an Addendum schedule, and coordinate issues requiring consultant responses.
- Working with the Project Manager, supervise the construction administration process, leading the review of technical aspects of and responses to submittals and RFI’s. Supervise the creation of sketches, RFI Responses, Proposal Requests, and other instruments of change.
The above cited duties and responsibilities describe the general nature and level of work being performed by people assigned to this job. It is not intended to be an exhaustive list of all the duties and responsibilities that an incumbent may be expected to perform.
Dore + Whittier prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Preliminary remark
We have completely revised and expanded this chapter for Python 3, and we recommend that you work through the corresponding chapters there. Since maintaining and expanding four different Python tutorials (Python 2 in German and English, and Python 3 in both languages) means an enormous amount of work, we have decided to focus mainly on the German and English tutorials for Python 3 in the future. We hope for your understanding!
Object-oriented programming (OOP)
Object-oriented programming (OOP for short) has enjoyed great popularity since its "introduction" or "invention" with "Simula 67" by Ole-Johan Dahl and Kristen Nygaard. However, it is not undisputed. For example, the Russian computer scientist Alexander Stepanov attested that OOP offers only a limited mathematical perspective and said that OOP was almost as big a fraud as artificial intelligence.1 Alexander Stepanov played a key role in the development of the C++ Standard Template Library, so he should be very familiar with OOP and its practical problems.
The basic concept of object-oriented programming is to bundle data and the functions (methods) that operate on it, i.e. functions that can be applied to this data, into one object, and to encapsulate it from the outside so that methods of other objects cannot manipulate this data directly.
Objects are defined using classes.
A class is a formal description of what an object is like, i.e. which attributes and which methods it has.
A class must not be confused with an object. Instead of "object", one also speaks of an "instance of a class".
Analogy: cake class
When introducing object-oriented programming, examples from everyday life are often used. These are mostly examples that help to clarify object-oriented concepts, but they cannot then be converted into program code. The same goes for this example of a cake class.
Let's consider the recipe of a strawberry cake. Then you can consider this recipe as a class. That is, the recipe determines how an instance of the class must be designed. If someone bakes a cake according to this class, then he creates an instance or an object of this class. There are then various methods of processing or modifying this cake. By the way, a nice method is "eat up" in this example.
A strawberry cake belongs to a superordinate class "cake", which passes its properties on, e.g. that a cake can be used as a dessert, to subclasses such as strawberry cakes, sponge cakes, pies and so on.
Objects
The central term in object-oriented programming is that of the object. In OOP, an object denotes the mapping of a real object with its properties and behavior (methods) in a program. In other words: An object can always be described by two things:
- what it can do or what we can do with it in a program
- what we know about it
class
A class is an abstract generic term for the description of the common structure and common behavior of real objects (classification).
Real objects are abstracted to the features that are important for the software.
The class serves as a blueprint for mapping real objects into software objects, the so-called instances. The class combines the properties (attributes) required for this and the methods required for manipulating the properties.
Classes are often related to one another. For example, you have an upper class (cake) and another class is derived from this (strawberry cake). This derived class inherits certain properties and methods of the superclass.
Methods and properties using the example of the "Account holder" and "Account" classes:
Encapsulation of data
Another major advantage of OOP is the encapsulation of data.
Properties can only be accessed using access methods. These methods can contain plausibility checks, and they alone have knowledge of the actual implementation.
For example, a method for setting the date of birth can check whether the date is correct and lies within a certain range: a checking account might not be possible for children under 14, and customers over 100 years old are unlikely.
Inheritance
In our example it is easy to see that an "Account" class cannot satisfy a real bank.
There are different types of accounts: checking account, savings account, etc.
But all the different accounts have certain properties and methods in common. For example, each account will have an account number, an account holder, and an account balance. Common Methods: Depositing and Withdrawing
So there is such a thing as a basic account from which all other accounts "inherit".
Inheritance is used to create new classes based on existing classes. A new class can arise both as an extension and as a restriction of the original class.
The simplest class
The definition of a new class in Python starts with the keyword class:

    class Account(object):
        pass

The above class has neither attributes nor methods. Incidentally, "pass" is a statement that tells the interpreter that the actual instructions will be supplied later.
An object or an instance of the above (empty) class is created as follows:

    >>> Account()
    <__main__.Account object at 0x7f5feca55750>
Definition of methods
Outwardly, a method only differs from a function in two respects:
- It is a function that is defined within a class definition.
- The first parameter of a method is always a reference, self, to the instance on which it is called.
Example with method:
    class Account(object):
        def transfer(self, target, amount):
            pass
        def deposit(self, amount):
            pass
        def withdraw(self, amount):
            pass
        def account_balance(self):
            pass
Constructor
In fact, there are no explicit constructors or destructors in Python. The __init__ method is often referred to as a constructor. If it were really a constructor, it would likely be called __constr__ or __constructor__. Instead it is called __init__, because this method is used to initialize an object that has previously been created ("constructed") automatically. This method is called immediately after the construction of an object, so it looks as if the object was created by __init__. This explains the frequent mislabeling.
We now use the __init__ method to initialize the objects in our account class.
Constructors are defined like other methods:
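For example, a minimal __init__ for our account class could look like this (a sketch; the attribute names anticipate the full account example below):

    class Account(object):
        def __init__(self, owner, account_number, balance):
            # initialize the freshly constructed instance
            self.owner = owner
            self.account_number = account_number
            self.balance = balance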
Destructor
The same applies here as was said in the constructor section: there are no explicit destructors in Python either. However, a method __del__ can be defined for a class. If you delete an instance of a class with del, the __del__ method is called, but only if there are no further references to this instance. Destructors are mainly required in C++, where they are responsible for memory cleanup. Since you don't have to worry about garbage collection in Python, the __del__ method is used relatively rarely.
The following is an example with __init__ ("constructor") and __del__ (destructor):
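A minimal sketch (the class and output strings are illustrative, not from the original example):

    class Greeter(object):
        def __init__(self, name):
            self.name = name
            print(self.name + " has been created")

        def __del__(self):
            print(self.name + " is about to be destroyed")

    g = Greeter("g")        # prints: g has been created
    del g                   # prints: g is about to be destroyed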
One should, however, be careful when using the __del__ method. "del x" only triggers the __del__ method when there are no further references to the object, i.e. when the reference counter drops to 0. Problems arise, for example, when there are circular references, as in doubly linked lists.
Full example of the account class:

    class Account(object):
        def __init__(self, owner, account_number, balance, overdraft=0):
            self.owner = owner
            self.account_number = account_number
            self.balance = balance
            self.overdraft = overdraft

        def transfer(self, target, amount):
            if self.balance - amount < -self.overdraft:
                # insufficient funds
                return False
            else:
                self.balance -= amount
                target.balance += amount
                return True

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            self.balance -= amount

        def account_balance(self):
            return self.balance

If you have saved this example as account.py, you can work with it in a Python shell as follows:

    >>> from account import Account
    >>> K1 = Account("Jens", 70711, 2022.17)
    >>> K2 = Account("Uta", 70813, 879.09)
    >>> K1.account_balance()
    2022.17
    >>> K1.transfer(K2, 998.32)
    True
    >>> K1.account_balance()
    1023.85
    >>> K2.account_balance()
    1877.41
"Public" blemish
The Account class still has one small flaw: the attributes can be accessed directly from the outside:

    >>> K2.balance
    1877.4100000000001
    >>> K2.account_number
    70813
    >>> K2.overdraft
    0
    >>> K2.balance = 1000000
    >>> K2.balance
    1000000
Data encapsulation
Normally, all attributes of a class instance are public, i.e. accessible from the outside. Python provides a mechanism to prevent this. The control does not take place via special keywords but via names: a single underscore before the actual name marks a member as protected, and a double underscore marks it as private, as can be seen in the following table:
| Name | Designation | Meaning |
| --- | --- | --- |
| name | Public | Attributes without leading underscores can be read and written both within a class and from outside. |
| _name | Protected | Can also be read and written from the outside, but the developer makes it clear that these members should not be used. |
| __name | Private | Not visible from the outside and cannot be used directly. |
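The following short session sketches how these conventions behave in practice (class and attribute names are illustrative):

    class Account(object):
        def __init__(self):
            self.balance = 100     # public
            self._bank = "XY"      # protected by convention only
            self.__pin = 1234      # private (name-mangled by Python)

    >>> a = Account()
    >>> a.balance
    100
    >>> a._bank          # possible, but by convention "hands off"
    'XY'
    >>> a.__pin
    Traceback (most recent call last):
      ...
    AttributeError: 'Account' object has no attribute '__pin'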
Static members
Up until now, each object of a class had its own attributes, whose values differ from those of other objects.
This is known as "non-static" or dynamic because they are dynamically created for each object in a class.
But how can you e.g. count the number of different instances / objects of a class? In our account () class, this is the number of different accounts.
Static attributes are defined outside of the constructor directly in the class block. It is customary to position the static members directly below the class statement.
In our example of the account class, for example, the number of accounts within the program can only be counted statically:
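A sketch of how this might look (the counter handling in __init__ and __del__ is the essential part; the rest follows the account example above):

    class Account(object):
        counter = 0                      # static member, defined in the class block

        def __init__(self, owner, account_number, balance):
            self.owner = owner
            self.account_number = account_number
            self.balance = balance
            Account.counter += 1         # one more account exists

        def __del__(self):
            Account.counter -= 1         # one account fewer

    >>> K1 = Account("Jens", 70711, 2022.17)
    >>> K2 = Account("Uta", 70813, 879.09)
    >>> Account.counter
    2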
Inheritance
Just as instances are counted in the Account class, the same could be necessary or useful in other classes. But you don't want to copy the code for counting up into the constructor, and the code for counting down into the destructor, of every class.
There is the possibility of passing on the ability to count instances to other classes.
To do this, you define an "upper" class Counter that passes its capabilities on to others, such as an account class. Assume, for example, that the classes "Account", "Member" and "Employee" all require the base class "Counter".
In the following we show the complete code of such a counter class:
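A sketch of such a counter class, together with a class that inherits from it (the names are illustrative):

    class Counter(object):
        counter = 0                      # static member, inherited by subclasses

        def __init__(self):
            type(self).counter += 1      # counts instances per concrete class

        def __del__(self):
            type(self).counter -= 1

    class Account(Counter):
        def __init__(self, owner):
            Counter.__init__(self)
            self.owner = owner

    >>> K1 = Account("Jens")
    >>> K2 = Account("Uta")
    >>> Account.counter
    2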
Multiple inheritance
A class can also be a subclass of several base classes, i.e. it can inherit from several classes at once. Inheriting from more than one base class is known as multiple inheritance.
From a syntactic point of view, this is very simple: instead of just one class within the parentheses after the class name, you specify a comma-separated list of all the base classes from which you want to inherit.
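A sketch with illustrative class names:

    class Member(object):
        pass

    class Employee(object):
        pass

    class WorkingStudent(Member, Employee):   # inherits from both base classes
        pass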
1 in an interview with Graziano Lo Russo
This tiny kangaroo-like creature is the jerboa, a rodent native to desert climes across North Africa, China and Mongolia. Species of jerboa can be found from the Sahara, the hottest desert in the world, to the Gobi, one of the coldest deserts in the world. At either extreme, you can find a member of the jerboa family happily burrowing beneath the ground. By using burrowing systems, the jerboa can escape the extreme heat or cold. Its short forearms and powerful hind legs are made for digging, and it has folds of skin that can close off its nostrils to sand, as well as special hairs to keep sand from getting in its ears. Its long back legs are also made for traveling rapidly while using minimal energy. Jerboas can get all the water they need from the vegetation and insects they eat. In fact, in laboratory studies, jerboas have lived off of only dry seeds for up to three years.
How long does it take to write a CD-R or CD-RW disc?
The amount of time taken to write a disc depends upon the speed of the recorder, the writing method used by the recorder and the amount of information required to be written. Recording speed is measured the same as the reading speed of ordinary CD-ROM drives and players. At single speed (1x) a recorder writes 150 KB (153,600 bytes) of data (CD-ROM Mode 1) per second and at a multiple of that figure at each speed increment above 1x.
As the market for CD-R and CD-RW products came into its own writing speed accelerated due to rapid advances made in hardware and media technology. One breakthrough came in writing modes which permitted recorders to reliably operate beyond 20x speed. Available units now employ a variety of writing modes including Constant Linear Velocity (CLV), Zone Constant Linear Velocity (ZCLV), Partial Constant Angular Velocity (PCAV) and Constant Angular Velocity (CAV).
CDs were originally designed for consumer audio applications and initially operated using a CLV mode to maintain a constant data transfer rate across the entire disc. The CLV mode sets the disc’s rotation at 500 RPM decreasing to 200 RPM (1x CLV) as the optical head of the player or recorder reads or writes from the inner to outer diameter. Since the entire disc is written at a uniform transfer rate it takes, for example, roughly 76 minutes to complete a full 74 minute/650 MB disc at 1x CLV. As recording speed increases the transfer rate increases correspondingly so that at 8x CLV writing an entire disc takes 9 minutes and at 16x 5 minutes. Recording time as well is directly related to the amount of information to be written so partial discs are completed in proportionally less time. But writing at higher speeds requires rotating the disc faster and faster (eg. 10,000 to 4,000 RPM at 20x CLV which places escalating physical demands upon both media and hardware. Manufacturers have met this challenge by moving beyond the original CLV mode to obtain even higher performance.
In contrast to CLV which maintains a constant data transfer rate throughout the recording process, ZCLV divides the disc into regions or zones and employs progressively faster CLV writing speeds in each. For example, a 40x ZCLV recorder might write the first 10 minutes of the disc at 20x CLV, the next 15 minutes at 24x CLV, the following 30 minutes at 32x CLV and the remainder at 40x CLV speed.
Some recorders make use of the PCAV mode which spins the disc at a lower fixed RPM when the optical head is writing near the inner diameter but then shifts to CLV part way further out on the disc. As a result, the data transfer rate progressively increases until a predetermined point is reached and thereafter remains constant. For example, a 24x PCAV recorder might accelerate from 18x to 24x speed over the first 14 minutes of the disc then maintain 24x CLV writing for the remainder of the disc.
The CAV mode spins the disc at a constant RPM throughout the entire writing process. Consequently, the data transfer rate continuously increases as the optical head writes from the inner to outer diameter of the disc. For example, a 48x CAV recorder might begin writing at 22x at the inner diameter of the disc accelerating to 48x by the outer diameter of the disc.
What is the difference between low and high speed CD-RW discs?
CD-RW media present additional problems in that it is not possible for one kind of CD-RW disc to support all recording speeds. Low speed discs are compatible with all CD-RW recorders and can only be written from 1x to 4x speeds. High speed discs, on the other hand, can be written from 4x to 10x but only on recorders bearing the high speed CD-RW logo.
Can CD-R and CD-RW discs written at different speeds be read back at any speed?
The speed at which a disc is written has nothing to do with the speed at which it can be read back in a recorder, CD-ROM or DVD-ROM drive.
Do some CD-R recording speeds produce better results than others?
Recorder and media manufacturers carefully tune their products to operate with each other across a wide range of speeds. As a result, equally high quality CDs are created when recording at almost all speeds. However, 1x presents a minor exception. Generally speaking, the physics and chemistry involved in the CD recording process seem to produce more consistent and readable marks in CD-R discs at 2x and greater speeds.
Can any CD-R disc be recorded at any speed?
In order to accommodate progressively higher recording speeds CD-R disc design and manufacturing has continued to evolve. Consequently, reliable operation is best achieved by following disc manufacturers’ guidance with respect to the range of writing speeds formally supported by their respective discs, while acknowledging that this can change as recording specifications change. Additionally, new media companies and products continually enter the market and some recorder companies may test particular brands of discs more extensively than others. Thus it may be advisable to inquire of the recorder manufacturer for specific media recommendations.
Is there any way to prevent a recorder from writing a CD-R disc at too high a speed?
Along with Moses and the Pietà, the David is one of Michelangelo’s most well-known sculptures. Now housed in Florence’s Accademia Gallery, it’s a white marble masterpiece that stands 5.17 meters high. Michelangelo began sculpting the work in 1501 and it took him 18 months to complete it, a challenge for the then-26 year old Michelangelo. The Opera del Duomo commissioned him to work on this giant block of marble that had been sitting unused for 40 years, because artists like Agostino di Duccio had thought it would be too fragile to support the weight of the legs.
From that “dead” block of marble, Michelangelo managed to sculpt a powerful, magnificent sculpture. In his depiction, David is no longer a child, as he is represented in works by other Renaissance masters like Donatello and Verrocchio. Instead, he’s a young and mighty man ready to strike down the giant, with a tension in his hands that hold a stone and sling, in his contracted muscles, and in his gaze: he truly seems as if he’s about to hurl a fatal blow.
Completed in January of 1504, the statue became the symbol of the Florentine Republic: a committee of artists including Botticelli and Leonardo da Vinci decided to place it in front of the steps of Palazzo Vecchio, where David would best symbolize the values of good government and defense.
The statue stayed there until 1873, when it was moved to the Accademia Gallery for conservation purposes. It's still there today, and the one that you can see in Piazza della Signoria is a perfect copy.
DENVER, July 3, 2018 — Secretary of State Wayne Williams warned Coloradans today to beware of possible charity scams in the wake of wildfires affecting many parts of Colorado. He also offered guidance to help citizens raise funds legally or donate wisely to help affected communities recover.
There are at least ten wildfires that have burned over 170,000 acres across Colorado, according to the most recent reports on the Incident Information System.
To learn how to help those affected by the wildfires, we recommend checking www.helpcoloradonow.org, a partnership between the Colorado Division of Homeland Security and Emergency Management (DHSEM) and Colorado Voluntary Organizations Active in Disaster (COVOAD).
Volunteers who have written authorization from a charity to raise funds on the charity’s behalf are exempt from the registration requirement.
Individuals exclusively making an appeal for funds on behalf of a specific individual name in the solicitation are exempt from the registration requirement, as long as all of the proceeds of the solicitation are given to or expended for the direct benefit of the specified individual. Any money destined for a specific individual or family is considered a private gift, not a charitable donation, and they are not tax-deductible.
If you wish to establish a fund to assist those affected by a tragedy, be especially careful to respect the wishes of the individuals’ family and friends. The law requires that you have written permission to use the names or photographs of any person or organization in your fundraising appeals, so be aware that your well-intentioned efforts could be derailed by harsh criticism from affected parties’ families if you fail to obtain their permission first.
For additional assistance, you may want to contact a regional nonprofit resource center or an association, such as the Colorado Nonprofit Association, the Center for Nonprofit Excellence, Community Resource Center, Colorado Nonprofit Development Center, or Metro Volunteers. These organizations offer educational materials and advice on nonprofits, including volunteering for relief efforts and forming a 501(c)(3) charitable organization.
Ask for the caller’s registration number with the Secretary of State, and then confirm that the organization is registered and current with its filings at www.checkthecharity.com. Contact the Secretary of State’s Office, if you want to confirm whether an unregistered charity or fundraiser needs to be registered in Colorado.
If the charity is required to file the federal form 990, 990-EZ, 990-N, 990-T, or 990-PF with the IRS, ask to see it. You are also entitled to see a copy of its IRS Application for Tax-Exempt Status and Determination Letter.
If talking to a paid solicitor, ask what portion of the contribution will be paid to the charity or, if giving directly to a charity, designate your donation to a specific disaster.
Do not click on links to charities on unfamiliar websites or in texts or emails. These may take you to a lookalike website where you will be asked to provide personal financial information or to click on something that downloads harmful malware into your computer. Don’t assume that charity recommendations on Facebook, blogs or other social media have already been vetted.
Beware of newly formed charitable organizations. These may be formed with the best of intentions, but an existing charity is more likely have the sound management and experience to quickly respond to the situation, and it will have a track record which you can review.
Call the charity to see if it is aware of the solicitation and has authorized the use of its name.
Verify with local charities any claims that the soliciting charity will support local organizations.
You cannot deduct contributions earmarked for relief of a particular individual or family, even if they are made to a qualified charitable organization. When you decide to contribute to an individual or family, do not give cash. Contribute by check that is payable to the fund, not to an individual.
When considering gifts to an individual or family, ask the fundraiser whether there is a trust or deposit account established for their benefit. Contact the banking institution to verify the existence of the account, and check locally to confirm that there really is such a need.
The fact that a charity has a tax identification number does not necessarily mean your contribution is tax-deductible. Ask for a receipt showing the amount of the contribution and stating that it is tax-deductible.
When you decide to contribute to an individual or family, do not give cash. Contribute by check that is payable to the charity or fund, not to an individual, and mail directly to the charity.
Most relief organizations can deliver assistance more rapidly if they purchase goods near the location of the disaster, so consider sending a check, rather than clothing or supplies.
If you believe that you have been solicited by a fraudulent charity, please file a complaint with the Secretary of State (303-894-2200; www.sos.state.co.us – Charities and Fundraisers), or the Attorney General (800-222-4444; www.coloradoattorneygeneral.gov).
The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction which maintain good regret guarantees. In our previous paper, we presented the DMB algorithm, a generic framework to convert any serial gradient-based online prediction algorithm into a distributed algorithm. Moreover, its regret guarantee is asymptotically optimal for smooth convex loss functions and stochastic inputs. On the flip side, it is fragile to many types of failures that are common in distributed environments. In this companion paper, we present variants of the DMB algorithm which are resilient to many types of network failures, and tolerant to varying performance of the computing nodes.
1 Introduction
In online prediction problems, one needs to provide predictions over a stream of inputs, while attempting to learn from the data and improve the predictions. Unlike offline settings, where the learning phase over a training set is decoupled from the testing phase, here the two are intertwined, and we cannot afford to slow down.
The standard models of online prediction consider a serial setting, where the inputs arrive one by one, and are processed by a single processor. However, in large-scale applications, such as search engines and cloud computing, the rate at which inputs arrive may necessitate distributing the computation across multiple cores or cluster nodes. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain regret guarantees as close as possible to the serial case (that is, the ideal case where we would have been able to process all inputs using a single, sufficiently fast processor).
In our previous paper, we presented the DMB algorithm, a template that allows one to convert any serial online learning algorithm into a distributed algorithm. For a wide class of such algorithms, we showed that when the loss function is smooth and the inputs are stochastic, the DMB algorithm is asymptotically optimal. Specifically, the regret guarantee of the DMB algorithm is identical in its leading term to the regret guarantee of the serial algorithm, including the constants. Also, the algorithm can be easily adapted to stochastic optimization problems, with an asymptotically optimal speedup in the convergence rate obtained by using a distributed system as opposed to a single processor.
However, the DMB algorithm makes several assumptions that may not be realistic in all distributed settings. These assumptions are:
- All nodes work at the same rate.
- All nodes are working properly throughout the execution of the algorithm.
- The network connecting the nodes is stable during the execution of the algorithm.
These assumptions are not always realistic. Consider for example a multi-core CPU. While the last two assumptions are reasonable in this environment, the first one is invalid, since other processes running on the same CPU may cause occasional delays on some cores. In massively distributed, geographically dispersed systems, all three assumptions may fail to hold.
In this companion paper to our earlier work, we focus on adding robustness to the DMB algorithm, and present two methods to achieve this goal. In Sec. 3 we present ways in which the DMB algorithm can be made robust using a master-workers architecture, relying on the robustness of off-the-shelf methods such as leader election algorithms or databases. In Sec. 4, we present an asynchronous version of the DMB algorithm that is robust with a fully decentralized architecture.
2 Background
We begin by providing a brief background on the setting and the DMB algorithm. The background is deliberately terse, and we refer the reader to our previous paper for the full details.
We assume that we observe a stream of inputs $z_1, z_2, \ldots$, where each $z_i$ is sampled independently from a fixed unknown distribution over a sample space $\mathcal{Z}$. Before observing each $z_i$, we predict a point $w_i$ from a convex set $W$. After making the prediction $w_i$, we observe $z_i$ and suffer the loss $f(w_i, z_i)$, where $f$ is a predefined loss function, assumed to be convex in its first argument. We may now use $z_i$ to improve our prediction mechanism for the future (e.g., using a stochastic gradient method). The goal is to accumulate the smallest possible loss as we process the sequence of inputs. More specifically, we measure the quality of our predictions on $m$ inputs using the notion of regret, defined as
$$R(m) \;=\; \sum_{i=1}^{m} \big( f(w_i, z_i) - f(w^\star, z_i) \big),$$
where $w^\star = \arg\min_{w \in W} \mathbb{E}_z[f(w, z)]$. Note that the regret is a random variable, since it depends on the stochastic inputs. For simplicity, we will focus on bounding the expected regret.
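As a toy illustration of this definition, the following self-contained sketch estimates the regret of serial online gradient descent on a one-dimensional squared loss. All modeling choices here (loss, distribution, step size) are illustrative, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 10_000
    z = rng.normal(loc=1.0, scale=1.0, size=m)   # stochastic inputs
    w_star = 1.0                                  # minimizer of E[0.5 * (w - z)**2]

    w, regret = 0.0, 0.0
    for i in range(m):
        regret += 0.5 * (w - z[i]) ** 2 - 0.5 * (w_star - z[i]) ** 2
        w -= (w - z[i]) / np.sqrt(i + 1)          # gradient step, decaying rate
    print(regret)   # grows roughly like sqrt(m), far slower than m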
We model our distributed computing system as a set of $k$ nodes, each of which is an independent processor, and a network that enables the nodes to communicate with each other. Each node receives an incoming stream of examples from an outside source, such as a load balancer/splitter. As in the real world, we assume that the network has a limited bandwidth, so the nodes cannot simply share all of their information, and that messages sent over the network incur a non-negligible latency.
The ideal (but unrealistic) solution to this online prediction problem is to run a standard serial algorithm on a single "super" processor that is sufficiently fast to handle the stream of examples. This solution is optimal, simply because any distributed algorithm can be simulated on a fast-enough single processor. The optimal regret that can be achieved by such serial algorithms on $m$ inputs is $O(\sqrt{m})$. However, when we choose to distribute the computation, the regret performance might degrade, as the communication between the nodes is limited. Straightforward approaches, as well as previous approaches in the literature, all yield regret bounds which are at best $O(\sqrt{km})$, where $k$ is the number of nodes in the system. Thus, the regret degrades rapidly as more nodes are utilized.
In , we present the DMB algorithm, which has the following two important properties:
- It can use any update rule from a wide class of gradient-based serial online prediction algorithms as a black box, and convert it into a parallel or distributed online prediction algorithm. These serial online algorithms include (Euclidean) gradient descent, mirror descent, and dual averaging.
- If the loss function is smooth in $w$ (namely, its gradient is Lipschitz), then the DMB algorithm attains an asymptotically optimal regret bound of $O(\sqrt{m})$. Moreover, the coefficient of the dominant term is the same as in the serial bound, which is independent of $k$ and of the network topology.
The DMB algorithm is based on the theoretical observation that, for smooth loss functions, one can prove regret bounds for serial gradient-based algorithms that depend on the variance of the stochastic gradients. To simplify the discussion, we use $\psi(\sigma^2, m)$ to denote such a variance-based regret bound for predicting $m$ inputs, where $\sigma^2$ bounds the gradient variance, namely
$$\mathbb{E}_z\!\left[ \big\| \nabla_w f(w, z) - \mathbb{E}_z[\nabla_w f(w, z)] \big\|^2 \right] \;\le\; \sigma^2 \quad \text{for all } w \in W.$$
For example, we show in Dekel et al. (2010) that for both mirror-descent (including classical gradient descent) and dual averaging methods, the expected regret bounds take the form
$$\psi(\sigma^2, m) \;=\; 2D^2 L \;+\; 2D\sigma\sqrt{m},$$
where $L$ is the Lipschitz parameter of the loss gradient, and $D$ quantifies the size of the convex set $W$ from which the predictors are chosen. As a result, it can be shown that applying a serial gradient-based algorithm to averages of gradients, computed on independent examples with the same predictor, will reduce the variance term in the resulting regret bound.
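The variance reduction invoked here is elementary and worth stating explicitly (our restatement, in the notation introduced above): if $g_1, \ldots, g_b$ are i.i.d. stochastic gradients evaluated at the same predictor, each with variance at most $\sigma^2$, then
$$\operatorname{Var}\!\left[\frac{1}{b}\sum_{s=1}^{b} g_s\right] \;=\; \frac{1}{b^{2}}\sum_{s=1}^{b}\operatorname{Var}[g_s] \;\le\; \frac{\sigma^{2}}{b}.$$
Substituting $\sigma^{2}/b$ for $\sigma^{2}$ in $\psi$ is exactly where the mini-batch gains enter the analysis.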
In a nutshell, the DMB algorithm uses the distributed network in order to rapidly accumulate gradients with respect to the same fixed predictor. Once sufficiently many gradients have accumulated into a mini-batch (of size parameterized by $b$), the nodes collectively perform a vector-sum operation, which allows each node to obtain the average of these gradients. This average is then used to update their predictor, using some gradient-based online update rule as a black box. Note that the algorithm is inherently synchronous, as all nodes must use the same predictor and perform the averaging computations and updates at the same time. Detailed pseudo-code and additional details appear in Dekel et al. (2010).
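To make the mini-batch mechanics concrete, here is a minimal single-process sketch of the DMB idea; it is our illustration, not the paper's pseudo-code. The distributed vector-sum is replaced by a local average, and the names `SerialSGD`, `dmb`, and `grad_fn` are ours:

```python
import numpy as np

# A minimal sketch of the DMB idea: any gradient-based serial rule can be
# plugged in as a black box; here we assume projected stochastic gradient
# descent over a Euclidean ball as the serial update rule.
class SerialSGD:
    def __init__(self, dim, lr=0.1, radius=1.0):
        self.w = np.zeros(dim)          # predictor, a point in the convex set W
        self.lr = lr
        self.radius = radius            # W modeled here as a Euclidean ball

    def update(self, avg_grad):
        self.w = self.w - self.lr * avg_grad
        norm = np.linalg.norm(self.w)
        if norm > self.radius:          # project back onto W
            self.w *= self.radius / norm

def dmb(stream, grad_fn, learner, b):
    batch = []
    for z in stream:
        batch.append(grad_fn(learner.w, z))  # all gradients taken at the SAME w
        if len(batch) == b:                  # mini-batch complete:
            learner.update(np.mean(batch, axis=0))  # feed the averaged gradient
            batch = []                       # (in the real system, inputs that
                                             # arrive during the vector-sum are
                                             # dropped; their number is what the
                                             # analysis bounds)
```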
The regret analysis for this algorithm is based on a parameter $\mu$, which bounds the number of inputs processed by the system during the vector-sum operation. The gradients for these inputs are not used for updating the predictor. While $\mu$ depends on the network structure and communication latencies, it does not scale with the total number of examples processed by the system. Formally, the regret guarantee is as follows:
Theorem 1.
Let $f$ be an $L$-smooth convex loss function, and assume that the stochastic gradient has $\sigma^2$-bounded variance for all predictors in $W$. If the online update rule used by the DMB algorithm has the serial regret bound $\psi(\sigma^2, m)$, then the expected regret of the DMB algorithm over $m$ examples is at most
$$(b + \mu)\,\psi\!\left(\frac{\sigma^2}{b},\;\left\lceil \frac{m}{b + \mu} \right\rceil\right).$$
Specifically, if $\psi(\sigma^2, m) = 2D^2L + 2D\sigma\sqrt{m}$, and the batch size is chosen to be $b = m^\rho$ for any $\rho \in (0, 1/2)$, the expected regret is $2D\sigma\sqrt{m} + o(\sqrt{m})$.
Note that for serial regret bounds of the form $2D^2L + 2D\sigma\sqrt{m}$, we indeed get an identical leading term in the regret bound for the DMB algorithm, implying its asymptotic optimality.
3 Robust Learning with a Master-Workers Architecture
The DMB algorithm presented in Dekel et al. (2010) assumes that all nodes are making similar progress. However, even in homogeneous systems, which are designed to support synchronous programs, this is hard to achieve (e.g., Petrini et al. 2003), let alone in grid environments in which each node may have different capabilities. In this section, we present a variant of the DMB algorithm that adds the following properties:
- It performs well on heterogeneous clusters, whose nodes may have varying processing rates.
- It can handle dynamic network latencies.
- It supports randomized update rules.
- It can be made robust using standard fault-tolerance techniques.
To provide these properties, we convert the DMB algorithm to work with a single master and multiple workers. Each of the workers receives inputs and processes them at its own pace. Periodically, the worker sends the information it collected, i.e., the sum of gradients, to the master. Once the master has collected sufficiently many gradients, it performs an update and broadcasts the new predictor to the workers. We call this algorithm the master-worker distributed mini-batches (MaWo-DMB) algorithm. For a detailed description of the algorithm, see Algorithm 1 for the worker algorithm and Algorithm 2 for the master algorithm.
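To make the division of labor concrete, here is a schematic single-machine sketch of the two roles; it is our illustration, not the Algorithm 1/2 pseudo-code referenced above, and all names (`worker`, `master`, `latest`, `to_master`, `grad_fn`, `update_rule`) are ours. A shared dictionary stands in for the broadcast channel, and in a real deployment the two functions would run concurrently on different machines:

```python
import queue

to_master = queue.Queue()          # the worker -> master message channel
latest = {"w": 0.0, "version": 0}  # stands in for the master -> workers broadcast

def worker(stream, grad_fn, period):
    w, version = latest["w"], latest["version"]
    grad_sum, count = 0.0, 0
    for i, z in enumerate(stream):
        if latest["version"] > version:        # pick up a broadcast predictor
            w, version = latest["w"], latest["version"]
            grad_sum, count = 0.0, 0           # gradients at the old w are stale
        grad_sum += grad_fn(w, z)              # process inputs at its own pace
        count += 1
        if (i + 1) % period == 0:              # periodic report to the master
            to_master.put((version, grad_sum, count))
            grad_sum, count = 0.0, 0

def master(update_rule, b):
    grad_sum, count = 0.0, 0
    while True:
        version, g, c = to_master.get()
        if version == latest["version"]:       # only gradients at the current w
            grad_sum += g
            count += c
        if count >= b:                         # enough gradients accumulated:
            avg = grad_sum / count             # update and broadcast
            latest["w"] = update_rule(latest["w"], avg)
            latest["version"] += 1
            grad_sum, count = 0.0, 0
```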
This algorithm uses a slightly different communication protocol than the DMB algorithm. We assume that the network supports two operations:
- Broadcast (master → workers): the master sends updates to the workers.
- Message (worker → master): periodically, each worker sends a message to the master with the sum of gradients it has collected so far.
One possible method to implement these services is via a database. Using a database, each worker can add the gradients it has collected to the database, and check for updates from the master. At the same time, the master can check periodically to see whether sufficiently many gradients have accumulated in the database. When at least $b$ gradients have accumulated, the master performs an update and posts the result in a designated place in the database. This method provides nice robustness features to the algorithm, as discussed in Sec. 3.2.
3.1 Properties of the MaWo-DMB algorithm
The MaWo-DMB algorithm shares the same asymptotic behavior as the DMB algorithm (e.g., as given in Thm. 1). The proof for the DMB algorithm applies to this algorithm as well. To get the optimal rate, we only need to bound the number of inputs whose gradient is not used in the computation of the next prediction point. A coarse bound on this number can be given as follows. Let $\rho$ be the maximal number of inputs per time–unit, let $\tau$ be the time difference between messages sent from each worker to the master, let $\tau_u$ be the time it takes the master to perform an update, and let $\tau_m$ be the maximal time it takes to send a message between two points in the network. Using this notation, the number of inputs dropped in each update is at most $\rho(\tau + \tau_u + 2\tau_m)$. Specifically, let $t$ be the time when the master encounters the $b$'th gradient. Inputs that were processed before time $t - \tau - \tau_m$ have already been received by the master. Moreover, by time $t + \tau_u + \tau_m$ all of the workers have received the updated prediction point. Therefore, only inputs that were processed between $t - \tau - \tau_m$ and $t + \tau_u + \tau_m$ might be dropped. Clearly, there are at most $\rho(\tau + \tau_u + 2\tau_m)$ such inputs.
While asymptotically the MaWo-DMB algorithm exhibits the same performance as the DMB algorithm, it does have some additional features. First, it allows workers of different abilities to be used. Indeed, if some workers can process more inputs than other workers, the algorithm can compensate for that. Moreover, the algorithm does not assume that the number of inputs each worker handles is fixed in time. Furthermore, workers can be added and removed during the execution of the algorithm.
The DMB algorithm assumes that the update rule is deterministic. This is essential, since each node computes the update and it is assumed that they all reach the same result. In the MaWo-DMB algorithm, however, only the master computes the update and sends it to the rest of the nodes; therefore, all nodes share the same prediction point even if the update rule is randomized.
3.2 Adding Fault Tolerance to the MaWo-DMB algorithm
The MaWo-DMB algorithm is not sensitive to the stability of the workers. Indeed, workers may be added and removed during the execution of the algorithm. However, if the master fails, the algorithm stops making updates. This is a standard problem in master-worker environments. It can be solved using leader-election algorithms such as the algorithm of Gallager, Humblet & Spira (1983). If the workers do not receive any signal from the master for a long period of time, they start a process by which they elect a new leader (master). Malpani, Welch & Vaidya (2000) proposed a leader election algorithm for ad-hoc networks. The advantage of this kind of algorithm for our setting is that it can manage dynamic networks, where the network can be partitioned and reconnected. Therefore, if the network becomes partitioned, each connected component will have its own master.
Another way to introduce robustness to the MaWo-DMB algorithm is by selecting the master only when an update step is to be made. Assume that there is a central database that all workers update. Every $\tau$ time–units, each worker performs the following:
- lock the record in the database;
- add the gradients computed locally to the sum of gradients reported in the database;
- add the number of gradients to the count of gradients reported in the database.
At this point, the worker checks whether the count of gradients exceeds $b$. If it does not, the worker releases the lock and returns to processing inputs. If the number of gradients does exceed $b$, however, the worker performs the update and broadcasts the new prediction point (using the database) before unlocking the database and becoming a worker again.
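In code, the whole scheme reduces to a short critical section. The sketch below uses an in-process lock as a stand-in for the database's row locking; all names (`record`, `report_and_maybe_update`, `update_rule`) are ours, and a real deployment would rely on the database's own transactions:

```python
import threading

# In-memory stand-in for the shared database record described above.
db_lock = threading.Lock()
record = {"grad_sum": 0.0, "count": 0, "w": 0.0, "version": 0}

def report_and_maybe_update(local_grad_sum, local_count, b, update_rule):
    """Executed by every worker every tau time-units. Whichever worker pushes
    the count past b acts as the master for that single update."""
    with db_lock:                               # lock the record
        record["grad_sum"] += local_grad_sum    # add the collected gradients
        record["count"] += local_count          # add the gradient count
        if record["count"] >= b:                # this worker becomes the master
            avg = record["grad_sum"] / record["count"]
            record["w"] = update_rule(record["w"], avg)
            record["version"] += 1              # "broadcast": workers poll this
            record["grad_sum"], record["count"] = 0.0, 0
```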
The simple modification just described creates a distributed master: any node in the system can be removed without significantly affecting the progress of the algorithm. In a sense, we are leveraging the reliability of the database system (see, e.g., [1, 2, 3]) to convert our algorithm into a fault-tolerant algorithm.
4 Robust Learning with a Decentralized Architecture
In the previous section, we discussed asynchronous algorithms based on a master-workers paradigm. Using off-the-shelf fault tolerance methods, one can design simple and robust variants, capable of coping with dynamic and heterogeneous networks.
That being said, this kind of approach also has some limitations. First of all, access to a shared database may not be feasible, particularly in massively distributed environments. Second, utilizing leader-election algorithms is potentially wasteful, since by the time a new master is elected, some workers or local worker groups might have already accumulated more than enough gradients to perform a gradient update. Moreover, what we really need is more complex than just electing a random node as a master: electing a computationally weak or communication-constrained node will have severe repercussions. Also, unless the communication network is fully connected, we will need to form an entire DAG (directed acyclic graph) to relay gradients from the workers to the elected master. While both issues have been studied in the literature, they complicate the algorithms and increase the time required for the election process, again leading to potential waste. In terms of performance guarantees, it is hard to come up with explicit time guarantees for these algorithms, and hence their effect on the regret incurred by the system is unclear.
In this section, we describe a robust, fully decentralized and asynchronous version of DMB, which is not based on a master-worker paradigm. We call this algorithm asynchronous DMB, or ADMB for brevity. We provide a formal analysis, including an explicit regret guarantee, and show that ADMB shares the advantages of DMB in terms of dependence on network size and communication latency.
4.1 Description of the ADMB Algorithm
We assume that communication between nodes takes place along some bounded-degree acyclic graph. In addition, each node has a unique numerical index. We will generally use $i$ to denote a given node's index, and $j$ to denote the index of some neighboring node.
Informally, the algorithm works as follows: each node receives examples, accumulates gradients with respect to its current predictor, and uses batches of such gradients to update the predictor. Note that unlike the MaWo-DMB algorithm, here there is no centralized master node responsible for performing the update. Also, for technical reasons, the predictions themselves are not made with the current predictor, but rather with a running average of the predictors computed so far.
Each node occasionally sends its current predictor and accumulated gradients to its neighboring nodes. Given a message from a node $j$, the receiving node $i$ compares its state to the state of node $j$. If the two predictors are identical, then both nodes have been accumulating gradients with respect to the same predictor. Thus, node $i$ can use these gradients to update its own predictor, so it stores them. Later on, these gradients are sent in turn to node $i$'s neighbors, and so on. Each node keeps track of which gradients came from which neighboring nodes, and ensures that no gradient is ever sent back to the node from which it came. This allows the gradients to propagate throughout the network.
An additional twist is that in the ADMB algorithm, we no longer insist on all nodes sharing the exact same predictor at any given time point. Left unchecked, this could lead to each node using a different predictor, so that no node could use the gradients of any other node, and the system would behave as if the nodes all ran in isolation. To prevent this, we add a mechanism which ensures that if a node receives from a neighbor node a “better” predictor than its current one, it will switch to using the neighbor's predictor. By “better”, we mean one of two things: either the neighbor's predictor was obtained based on more predictor updates, or it was obtained with the same number of updates but at a node with higher precedence. In the former case, the neighbor's predictor should indeed be better, since it is based on more updates. In the latter case, there is no real reason to prefer one or the other, but we use an order of precedence between the nodes to determine who should synchronize with whom. With this mechanism, the predictor with the most gradient updates is propagated quickly throughout the system, so either everyone starts working with this predictor and shares gradients, or an even better predictor is obtained somewhere in the system, and is then quickly propagated in turn - a win-win situation.
We now turn to describe the algorithm formally. The algorithm has two global parameters:
- $b$: As in the DMB algorithm, $b$ is the number of gradients whose average is used to update the predictor.
- $\tau$: This parameter regulates the communication rate between the nodes. Each node sends a message to its neighbors every $\tau$ time–units.
Each node maintains the following data structures:
- A node state, consisting of:
  - the current predictor;
  - the running average of the predictors computed so far, which is the point actually used for prediction;
  - a counter of how many predictors are averaged into that running average; this is also the number of updates performed according to the online update rule in order to obtain the current predictor.
- A vector and an associated counter, which hold the sum (and number) of gradients computed from inputs serviced by the node itself.
- For each neighboring node $j$, a vector and an associated counter, which hold the sum (and number) of gradients received from node $j$.
When a node is initialized, all the variables discussed above are set to zero. The node then begins the execution of the algorithm. The protocol is composed of executing three event-driven functions: the first function (Algorithm 3 below) is executed when a new request for prediction arrives, and handles the processing of that example. The second function (Algorithm 4) is executed every $\tau$ time–units, and sends messages to the node's neighbors. The third function (Algorithm 5) is executed when a message arrives from a neighboring node. Also, the functions use a subroutine update_predictor (Algorithm 6) to update the node's predictor when needed. For simplicity, we will assume that each of those three functions is executed atomically (namely, only one of the functions runs at any given time). While this assumption can be easily relaxed, it allows us to avoid a tedious discussion of shared-resource synchronization between the functions.
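As a hedged illustration of how the node state and the precedence mechanism fit together, here is a sketch of the message-handling function only (the role of Algorithm 5). The field names and the tie-breaking convention (lower index wins) are our assumptions, not the paper's notation:

```python
class Node:
    def __init__(self, index, neighbors):
        self.index = index
        self.w = 0.0                # current predictor
        self.w_avg = 0.0            # running average of predictors (used to predict)
        self.updates = 0            # number of update-rule applications behind w
        self.own = (0.0, 0)         # (sum, count) of locally computed gradients
        self.recv = {j: (0.0, 0) for j in neighbors}  # per-neighbor gradients

    def on_message(self, j, w_j, updates_j, grad_sum_j, grad_count_j):
        # Precedence rule: switch if the neighbor's predictor has more updates,
        # or the same number of updates but higher precedence (lower index).
        if (updates_j, -j) > (self.updates, -self.index):
            self.w, self.updates = w_j, updates_j
            self.own = (0.0, 0)                        # discard stale gradients
            self.recv = {k: (0.0, 0) for k in self.recv}
        if w_j == self.w:
            # Same predictor: the neighbor's gradients are usable. They are
            # stored per sender, so they can later be relayed to all OTHER
            # neighbors and never sent back to node j.
            s, c = self.recv[j]
            self.recv[j] = (s + grad_sum_j, c + grad_count_j)
        # Once own + received counts reach the global batch size b, the node
        # averages the gradients, applies the black-box update rule,
        # increments `updates`, and resets the accumulators.
```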
It is not hard to verify that, due to the acyclic structure of the network, no single gradient is ever propagated to the same node twice. Thus, the algorithm indeed works correctly, in the sense that the updates are always performed based on independent gradients. Moreover, the algorithm is well-behaved in terms of traffic volume over the network, since any communication link between two nodes carries at most one message every $\tau$ time–units, where $\tau$ is a tunable parameter.
As with the MaWo-DMB algorithm, the ADMB algorithm has desirable robustness properties: it tolerates heterogeneous nodes, the addition and removal of nodes, and communication latencies. Moreover, it is robust to network failures: even if the network is split into two (or more) partitions, we simply end up with two (or more) networks which implement the algorithm in isolation. The system can continue to run and its output will remain valid, although the predictor update rate will become somewhat slower until the failure is repaired. Note that unlike the MaWo-DMB algorithm, there is no need to wait until a master node is elected.
4.2 Analysis
We now turn to discuss the regret performance of the algorithm. Before we begin, it is important to understand what kind of guarantees are possible in such a setting. In particular, it is not possible to provide a total regret bound over all the examples fed to the system, since we have not specified what happens to the examples which were sent to malfunctioning nodes - whether they were dropped, rerouted to a different node and so on. Moreover, even if nodes behave properly in terms of processing incoming examples, the performance of components such as interaction with neighboring nodes might vary over time in complex ways, which are hard to model precisely.
Instead, we will isolate a set of “well-behaved” nodes, and focus on the regret incurred on the examples sent to these nodes. The underlying assumption is that the system is mostly functional for most of the time, so the large majority of examples are processed by such well-behaved nodes. The analysis will focus on obtaining regret bounds over these examples.
To that end, let us focus on a particular set of nodes which form a connected component of the communication graph, with diameter $d$. We will call these nodes good if they all implement the ADMB algorithm at a reasonably fast rate. More precisely, we will require the following from each of the nodes:
- Executing each of the three functions defining the ADMB algorithm takes at most one time–unit.
- The communication latency between two adjacent nodes is at most one time–unit.
- The nodes receive at most $\rho$ examples every time–unit.
As to other nodes, we only assume that the messages they send to the good nodes reflect a correct node state, as specified earlier. In particular, they may be arbitrarily slow or even completely unresponsive.
First, we show that when the nodes are good, up-to-date predictors from any single node will be rapidly propagated to all the other nodes. This shows that the system has good recovery properties (e.g. after most nodes fail).
Lemma 1.
Assume that at some time point, the nodes are good, and at least one of them has a predictor based on at least $c$ updates. If the nodes remain good for a number of time–units on the order of $d\tau$, then all nodes will have a predictor based on at least $c$ updates.
Proof.
Let $i$ be the node with the predictor having at least $c$ updates. Counting from the time point defined in the lemma, at most $\tau$ time–units (plus the unit communication latency) will elapse until all of node $i$'s neighbors receive a message from node $i$ with its predictor, and either switch to this predictor (and then have a predictor with at least $c$ updates), or remain with their own predictor (which can only happen if it was already based on at least $c$ updates). In either case, during the next $\tau$ time–units each of those neighboring nodes will send a message to its own neighbors, and so on. Since the distance between any two nodes is at most $d$, the result follows. ∎
The next result shows that when all nodes are good and have a predictor based on at least $c$ updates, not too much time will pass until they all update their predictor.
Theorem 2.
Assume that at some time point, the nodes are good, and every one of them has a predictor with $c$ updates (not necessarily the same one). Then after the nodes process at most an additional number of examples that scales with $\rho$, $d$, $\tau$, and $b$ (but not with the total number of examples processed so far), all nodes will have a predictor based on at least $c + 1$ updates.
Proof.
Consider the time point mentioned in the theorem, where every one of the nodes, and in particular the node $i$ with the smallest index among them, has a predictor with $c$ updates. We now claim that after processing at most

(1)

examples, either some node in our set obtained a predictor with $c + 1$ updates, or every node has the same predictor based on $c$ updates. The argument is similar to Lemma 1: every node will switch to the predictor propagated from node $i$, assuming no node obtained a predictor with more updates. Therefore, at most on the order of $d\tau$ time–units will pass, during which at most $\rho$ examples are processed per time–unit.
So suppose we are now at the time point where either some node has a predictor with $c + 1$ updates, or every node has the same predictor based on $c$ updates. We now claim that after processing at most

(2)

examples, every node in our set obtains a predictor with $c + 1$ updates. To justify Eq. (2), let us consider first the case where every node has the same predictor based on $c$ updates. As shown above, the number of time–units it takes any single gradient to propagate to all nodes is at most on the order of $d\tau$. Therefore, after that many time–units have elapsed, each node will have accumulated, and acted upon, all the gradients computed by all nodes up to the starting time point. Since at most $\rho$ examples are processed each time–unit, it follows that after processing the number of examples in Eq. (2), all nodes will have updated their predictors.

We still need to consider the second case, namely that some good node already has a predictor with $c + 1$ updates, and we want to bound the number of examples processed until all nodes have a predictor with $c + 1$ updates. But this was already calculated to be at most the quantity in Eq. (1), which is smaller than that of Eq. (2). Thus, the bound in Eq. (2) covers this case as well.
With these results in hand, we can now prove a regret bound for our algorithm. To do so, define a good time period to be a period of time during which:
- All nodes are good, and were also good for the propagation interval of Lemma 1 prior to that time period.
- The nodes handled a certain number of examples overall.
As to other time periods, we will only assume that at least one of the nodes remained operational and implemented the ADMB algorithm (at an arbitrarily slow rate).
Theorem 3.
Suppose the gradient-based update rule has the serial regret bound $\psi(\sigma^2, m)$, and that the corresponding average regret per example decreases monotonically in the number of examples.
Let $n$ be the number of examples handled during a sequence of non-overlapping good time periods. Then the expected regret with respect to these examples is at most
where the constants depend on $\psi$ and the batch size $b$. Specifically, if $\psi(\sigma^2, m) = 2D^2L + 2D\sigma\sqrt{m}$, then the expected regret bound is of order $\sqrt{n}$.
When the batch size scales as $n^\rho$ for some $\rho \in (0, 1/2)$, we get an asymptotic regret bound whose leading term is virtually the same as the leading term in the serial regret bound. The only difference is an additional small constant factor, essentially due to the fact that we need to average the predictors obtained so far to make the analysis go through, rather than just using the last predictor.
Proof.
Let us number the good time periods $t = 1, 2, \ldots$, and consider a predictor used by one of the nodes at the beginning of the $t$-th good time period. From Lemma 1 and Thm. 2, we know that the predictors used by the nodes were updated at least once during each period. Thus, the predictor in use is a running average of predictors, where each was obtained from the previous one using an average of $b$ gradients, computed on examples that precede it in the stream. Since each predictor is independent of the examples used to evaluate it, we get
Based on this observation and Jensen’s inequality, we have
(3)

The online update rule was performed on the averaged gradient obtained from each mini-batch. This average gradient is equal to the gradient of the averaged loss over the mini-batch, and its variance is at most $\sigma^2/b$. Using the serial regret guarantee, we can upper bound Eq. (3) accordingly.

By the monotonicity assumed in the theorem statement, this bound can be relaxed further.

From this sequence of inequalities, we get that for any example processed by one of the nodes during the good time period $t$, it holds that

(4)

Let $n_t$ be the number of examples processed during the $t$-th good time period. Since $n$ examples are processed overall, the total regret over all these examples is at most

(5)

To get the specific regret form when $\psi(\sigma^2, m) = 2D^2L + 2D\sigma\sqrt{m}$, we substitute this form into Eq. (5) and simplify to get
∎
References
- Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.
- Dean Brock. A recommendation for high-availability options in TPC benchmarks. Transaction Processing Performance Council, http://www.tpc.org/information/other/articles/ha.asp.
- Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. Bigtable: A distributed storage system for structured data. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, 2006.
- O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Technical report, arXiv, 2010.
- R. G. Gallager, P. A. Humblet, and P. M. Spira. A distributed algorithm for minimum-weight spanning trees. ACM Transactions on Programming Languages and Systems (TOPLAS), 1983.
- Navneet Malpani, Jennifer L. Welch, and Nitin Vaidya. Leader election algorithms for mobile ad hoc networks. In Proceedings of the 4th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, 2000.
- Fabrizio Petrini, Darren J. Kerbyson, and Scott Pakin. The case of the missing supercomputer performance: Achieving optimal performance on the 8,192 processors of ASCI Q. In Proceedings of the ACM/IEEE Super-Computing Conference, 2003. | https://www.groundai.com/project/robust-distributed-online-prediction/
The calcium-sensing receptor in bone metabolism: from bench to bedside and back.
The calcium-sensing receptor (CaSR), a key player in the maintenance of calcium homeostasis, can influence bone modeling and remodeling by acting directly on bone cells, as demonstrated by in vivo and in vitro evidence, and the modulation of CaSR signaling can play a role in bone anabolism. The CaSR regulates PTH secretion and thereby calcium homeostasis, thus indirectly influencing bone metabolism. In addition to this indirect role, in vitro and in vivo evidence points to direct effects of the CaSR on bone modeling and remodeling, and activation of the CaSR is one of the anabolic mechanisms implicated in the action of strontium ranelate to reduce fracture risk. This review is based upon data from a PubMed enquiry using the terms "calcium sensing receptor," "CaSR" AND "bone remodeling," "bone modeling," "bone turnover," "osteoblast," "osteoclast," "osteocyte," "chondrocyte," "bone marrow," "calcilytics," "calcimimetics," "strontium," "osteoporosis," "skeletal homeostasis," and "bone metabolism." A fully functional CaSR is expressed in osteoblasts and osteoclasts, so these cells are able to sense changes in extracellular calcium and, as a result, modulate their behavior. CaSR agonists (calcimimetics) and antagonists (calcilytics) have the potential to indirectly influence skeletal homeostasis through the modulation of PTH secretion by the parathyroid glands. The bone anabolic effect of strontium ranelate, a source of divalent strontium cations used as a treatment for postmenopausal and male osteoporosis, might be explained, at least in part, by the activation of the CaSR in bone cells. Calcium released in the bone microenvironment during remodeling is a major factor in regulating bone cells: osteoblast and osteoclast proliferation, differentiation, and apoptosis are influenced by the local extracellular calcium concentration. Thus, the calcium-sensing properties of skeletal cells can be exploited in order to modulate bone turnover, and can explain the bone anabolic effects of agents developed and employed to reverse osteoporosis.
To foster among the Namibian youth a spirit of national identity, a sense of unity and self-respect, as well as an in-depth awareness of social, economic, political, educational and cultural prospects and adversities.
To develop the inherent abilities and capabilities of young people, both individually and collectively.
To encourage literacy and artistic activities among the youth.
To establish and maintain relations with international youth bodies and national youth structures in other countries.
To mobilize funds both locally and internationally for the cause of youth development.
To popularize and advocate the concept of gender equality among the youth.
To initiate youth development projects and activities with the aim of encouraging the active participation of the youth in the process of self-empowerment.
To facilitate the implementation, monitoring and evaluation of youth development programs.
To pursue an advocacy role with regard to the rights and opportunities of youth with physical and mental disabilities. | https://www.nyc.org.na/?page_id=508
Mobility&Transport
This article considers modern and future transportation modes and their developing technology.
Contents
- 1 Origins
- 2 History of mobility
- 3 How vehicles changed the world
- 4 Transportation Modes
- 5 Oil powered
- 6 Wind powered
- 7 Electric Cars
- 8 Air
- 9 Space
- 10 Projects
- 10.1 Traffic In Urban Areas
- 10.2 Sustainability
- 10.3 Autonomous Driving
- 10.4 Other
- 11 See also
Origins
Why did people need to be mobile?
History of mobility
How vehicles changed the world
Transportation Modes
Land
Human-powered
Animal-powered
Road
Car
The modern car was invented in 1886 by Karl Benz and became widely available at the beginning of the 20th century. It is mainly used to transport people rather than goods.
Truck
Bus
In 2013, 5.3 billion passengers were transported by bus in Germany, making it the most popular mode of public transport by passenger count. 37 billion passenger-km were traversed, second only to rail (88 billion passenger-km). Buses are mostly used for short-distance trips like inner-city commuting. On average, an inner-city commuting bus emits 0.18 kg of CO2 per passenger-km, and a long-distance (>32 km) bus emits 0.05 kg of CO2 per passenger-km. Buses first appeared when businessmen in the 1820s started to use horse-drawn stagecoaches to transport the public on fixed routes, and they gained in popularity after steam (and later petrol or diesel) engines enabled an increase in passenger count and a decrease in fares.
Rail
Railroads are the second most used mode of transport for freight and for local passenger traffic.
Personal transport
Every year, 7.8 billion train journeys are made in Europe; based on that figure, that is roughly 14,800 travellers per minute. The most important trains are:
- InterCityExpress (Germany, Netherlands, Belgium, France, Denmark, Switzerland and Austria)
- Thalys (France, Germany, Belgium and Netherlands)
- Enterprise (UK)
- EuroCity (conventional trains in nearly every Country)
- TGV (France, Belgium, Italy, Switzerland and Germany)
- Oresundtrain (Denmark and Sweden)
On average, travelling by train costs 13-25 cents per kilometre, at a speed of 100-120 km/h. An average long-distance (>32 km) train emits 0.12 kg of CO2 per passenger-km, while an average commuter rail or subway train emits 0.10 kg of CO2 per passenger-km.
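For a rough sense of scale, the per-passenger-km figures quoted in the bus and rail paragraphs above can be turned into a simple trip calculator. This is a back-of-envelope sketch only: it ignores occupancy variations and lifecycle or upstream emissions.

```python
# Rough trip-emissions comparison using the per-passenger-km figures quoted
# in the Bus and Rail sections above (tailpipe only, averages as given).
EMISSIONS_KG_PER_PKM = {
    "city bus": 0.18,
    "long-distance bus": 0.05,
    "long-distance train": 0.12,
    "commuter rail / subway": 0.10,
}

def trip_emissions(distance_km):
    return {mode: round(factor * distance_km, 1)
            for mode, factor in EMISSIONS_KG_PER_PKM.items()}

print(trip_emissions(300))
# {'city bus': 54.0, 'long-distance bus': 15.0,
#  'long-distance train': 36.0, 'commuter rail / subway': 30.0}
```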
Freight transport
Water
Oil powered
Ships are the most important component of global freight traffic. They are able to transport large amounts of goods at low cost. Transportation is slower than by rail, road or air. International shipping accounts for 4.5% of global CO2 emissions, about 1.12 billion tonnes annually.
While remaining absolutely vital for global trade, ships have become less and less important for passenger transport over the last century. Today, passenger transport by cruise ship is worthwhile mainly for long-distance journeys in the spirit of the motto "The journey is the destination", as it is the slowest modern transportation method.
Wind powered
There are efforts to re-invent sail transport. The Tres Hombres is a Dutch-registered sail cargo ship which has been trading since 2009.
Electric Cars
Electric cars run on electric energy only. They are propelled by one or more electric motors powered by rechargeable battery packs. They provide several benefits over cars with an oil-powered internal combustion engine:
- Energy efficient. While cars with internal combustion engines are relatively energy-inefficient (only ~15-20% of the energy stored in the fuel is used to move the vehicle), electric cars have an energy efficiency of around 80% (a rough calculation follows this list).
- Performance benefits. Electric cars provide instant torque, resulting in strong and smooth acceleration.
- Less energy dependence. Electric energy can be generated domestically, so countries need not depend on foreign oil imports.
- "Environmentally friendly". Electric cars are not completely environmentally friendly: they require a lot of energy and special materials to manufacture. They do, however, contribute to cleaner air (mostly in cities) because they produce no harmful pollution at the tailpipe, although the source of the electricity may pollute the environment (e.g., old coal-fired power plants). This is sometimes called the long tailpipe of electric cars.
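Here is the rough calculation promised in the first bullet above. Every input is an illustrative assumption (petrol energy content of about 8.9 kWh per litre, 18% engine efficiency as a midpoint of the quoted 15-20%, and 12 kWh of mechanical work per 100 km), not a measured value.

```python
# Back-of-envelope comparison of the energy drawn from the tank vs. the
# battery to deliver the same mechanical work over 100 km.
PETROL_KWH_PER_LITRE = 8.9   # approximate energy content of petrol
ICE_EFFICIENCY = 0.18        # midpoint of the 15-20% quoted above
EV_EFFICIENCY = 0.80         # quoted above

motion_kwh = 12.0            # assumed mechanical energy demand per 100 km

ice_input_kwh = motion_kwh / ICE_EFFICIENCY       # ~67 kWh of fuel energy
ev_input_kwh = motion_kwh / EV_EFFICIENCY         # ~15 kWh from the battery
litres = ice_input_kwh / PETROL_KWH_PER_LITRE     # ~7.5 L per 100 km

print(f"ICE: {ice_input_kwh:.0f} kWh ({litres:.1f} L) "
      f"vs EV: {ev_input_kwh:.0f} kWh per 100 km")
```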
There are some challenges with electric cars, that are mostly battery related:
- Driving range. Electric cars often have less maximum range than fuel-powered cars, and this is further limited by recharging infrastructure. The average American drives less than 40 miles (64 km) a day, so this is only a problem for long-distance travel. For everyday use, electric cars provide the benefit of recharging at home overnight.
- Recharging. Recharging an electric car takes much more time than refueling a conventional car; even a 'quick charge' to 80% battery capacity can take up to 30 minutes. This might improve with better battery and recharging technology. Battery swapping provides an alternative approach to this problem: with an automated system, a fully charged battery can be fitted in about 90 seconds.
- Battery cost. The large battery packs are expensive and may have to be replaced after some time.
Air
Transportation by plane is fast, but environmentally harmful and extremely expensive. For freight, it is mainly used for light and valuable goods which have to be delivered very quickly.
Space
Traveling or transporting goods through space is one of the newer and less explored types of mobility, owing to the high costs of research and of producing the spacecraft and other payloads that are launched into orbit. However, private companies with bigger budgets than governmental organisations are starting to build businesses around orbital travel.
Projects
Traffic In Urban Areas
Problems
Traffic Jams
Pollution
Transportation is now a dominant source of emission of pollutants, some of whose impacts are :
- climate change: traditional petrol- or diesel-powered transport releases pollutants such as lead, carbon monoxide, methane, and many more into the atmosphere.
- decrease in air quality : pollution caused by transport affects air quality causing damage to human health. Pollutants like carbon monoxide and nitrogen dioxide are linked to increased risk of respiratory problems.
- decrease in water quality : fuel or other hazardous chemicals discarded from vehicles or port and airport operations, such as de-icing, can contaminate bodies of water.
More information about the environmental impacts of transportation can be found here.
Waste Of Resources
Urban sprawl
In order to accommodate a rising number of cars on the road, cities' traffic infrastructure had to change. Roads had to widen; impediments to traffic flow, like pedestrians, cyclists and street crossings, had to be removed; and structures like parking spaces had to be built in order to make automobile use more advantageous. These changes came at the expense of other modes of transport. In extreme cases they created a dependency on cars to access certain destinations of daily life, but at the very least they made automobile use more appealing than other modes of transport, which in turn creates the need for more of these pro-car changes in order to keep up with day-to-day traffic. This causes a spiraling effect, where more cars cause more extreme changes to cities' traffic infrastructure, and these changes in turn put more cars on the road. This is seen mainly as an issue of environmental sustainability, since cars are a major contributor to the production of greenhouse gases responsible for global warming.
Solutions
Public Transport
Car Sharing
Car Pools
Work At Home
Sustainability
ec2go
ec2go is an e-CarSharing concept, designed by the FB5 at the FH Aachen from 2010 to 2013.
Autonomous Driving
Google Driverless Cars
Security Issues
Some people simply cannot wait until the day when autonomous cars become the norm. A few innovators, including Google, have already dipped their toes into the self-driving car market. But the convenience and safety of autonomous cars may be contradicted by the security risks such vehicles pose.
Law enforcement agencies have said that autonomous cars may encourage criminal behavior. Criminals may evade police by changing the programming in self-driving cars to bypass the rules of the road. Having their hands free in an autonomous vehicle may enable criminals to fire weapons or handle other dangerous equipment while the vehicle is in motion, and cars filled with explosives also may be programmed to hit targets.
Autonomous vehicles also may be vulnerable to hackers. Hackers may be able to take over an autonomous vehicle in much the same way they would take control of a computer, tablet or smartphone.
Right now it seems unlikely that the current crop of autonomous cars will be state-of-the-art crime vehicles, seeing as Google's version cannot drive faster than 25 miles per hour. But these concerns are things manufacturers will have to consider before they can offer autonomous vehicles for sale to the general public.
Future Truck
The Future Truck concept was presented by Mercedes-Benz at the "IAA Nutzfahrzeuge 2014". It is planned to be ready for mass production in 2025.
Other
Hyperloop
Hyperloop is a concept for high-speed passenger transportation in tubes. Capsules, propelled by electricity, would travel through a partially evacuated tube at speeds of up to 1200 km/h.
See also
The EV revolution: a brilliant blog post by Peter Sinclair on Climate Crocks. | https://www.appropedia.org/Mobility%26Transport
Light exposure before or during bedtime can make it difficult to fall and stay asleep because your brain won’t make enough sleep-inducing melatonin. Even if you do manage to fall asleep with lights on in your bedroom, you may not get enough rapid eye movement (REM) sleep.
Is it healthier to sleep in the dark?
Darkness is essential to sleep. The absence of light sends a critical signal to the body that it is time to rest. Light exposure at the wrong times alters the body’s internal “sleep clock”—the biological mechanism that regulates sleep-wake cycles—in ways that interfere with both the quantity and quality of sleep.
Are nightlights bad for eyes?
Fiction: It has been thought that using a nightlight in your child's bedroom may contribute to nearsightedness; however, there is not enough evidence to support this claim. Keeping a nightlight on in your baby's room may actually help them learn to focus and develop important eye coordination skills when they are awake.
What color light is the best for sleeping?
What color light helps you sleep? Warm light is better for sleep because the eyes are less sensitive to the longer wavelengths in warm light. Light bulbs with a yellow or red hue and are best for bedside lamps. Blue light, on the other hand, is the worst for sleep.
Why is it bad to sleep with a bra on?
Sleeping in a bra will not make a girl’s breasts perkier or prevent them from getting saggy. And it will not cause a girl to develop cancer or stunt her breast growth. (It’s also not true that underwire bras cause breast cancer.) Some women want to wear a bra to bed because they think it feels more comfortable.
Should I sleep in total darkness?
Exposure to light during nighttime can mess up the naturally programmed increase of melatonin levels, which slows down the body’s natural progression to sleep. In addition to regulating our melatonin levels, sleeping in complete darkness helps lower the risk of depression.
How can I improve my eyesight in 7 days?
Blog
- Eat for your eyes. Eating carrots is good for your vision. …
- Exercise for your eyes. Since eyes have muscles, they could use some exercises to remain in good shape. …
- Full body exercise for vision. …
- Rest for your eyes. …
- Get enough sleep. …
- Create eye-friendly surroundings. …
- Avoid smoking. …
- Have regular eye exams.
10 Dec 2018
Do carrots actually improve eyesight?
From the campaign, the myth grew that carrots improved already-healthy vision in the dark — for example, during blackouts. That claim is false, according to Harvard Health Publications. “Vitamin A will [help] keep your vision healthy; it won’t improve your vision,” Taylor says.
What age should a child stop using a night light?
Gemma Caton. It depends on your toddler’s age, and why you think he needs a night-light. Night-lights can be a source of comfort for children who are afraid of the dark or scared of monsters. However, toddlers don’t generally experience this kind of night-time anxiety until they’re about two or three years old.
What color helps sleep?
Blue: the best bedroom color for sleep
Hands down, the best bedroom color for sleep is blue. Blue is a calming color and calm is conducive to sleep. More than that, your brain is especially receptive to the color blue, thanks to special receptors in your retinas called ganglion cells.
What color helps with anxiety?
Green – Quiet and restful, green is a soothing color that can invite harmony and diffuse anxiety. Blue – A highly peaceful color, blue can be especially helpful for stress management because it can encourage a powerful sense of calm.
What color LED lights should I not sleep with?
“So if you want to avoid light having a strong effect on your body clock, dim and blue would be the way to go.” Conversely, bright white or yellow light was better for staying awake and alert.
Is it bad to sleep with a fan on every night?
Circulating air from a fan can dry out your mouth, nose, and throat. This could lead to an overproduction of mucus, which may cause headaches, a stuffy nose, sore throat, or even snoring.
Is sleeping naked better for your health?
Sleeping nude can help decrease your body’s temperature and help you achieve higher quality sleep. If you get too warm while you sleep, it can disrupt your REM cycle. A decrease in body temperature also acts as a biological cue for your body to go to sleep. | https://exledusa.com/illumination/is-it-healthy-to-sleep-with-a-night-light.html |
Class III, division 1 is characterized [in both lateral halves of the dental arches] by mesial occlusion [that] is slightly more than one half the width of a single cusp on each side, but in cases that have been allowed to develop—and these cases are always progressive—the mesial occlusion becomes greater, even to the full width of a molar, or more.
Edward Hartley Angle, 1907
Angle’s Classification
Angle’s description of Class III malocclusion (Figure 16.1), also known as mesioclusion, in its symmetrical (division 1) and asymmetric (subdivision) patterns focuses not only on the occlusion between the teeth but also on individual variation. Angle describes “considerable crowding, especially in the upper arch, and lingual inclination of the lower incisors and canines.” Although Angle’s classification has been used for over 100 years around the globe, his assumptions on etiology and diagnosis of the malocclusion lack definitive evidence. These assumptions include the relation of mandibular incisor retroclination to lower lip pressure “in the effort to close the mouth and disguise the deformity” and his only explanation for the etiology of Class III, the enlarged tonsils associated with the “habit of protruding the mandible” to afford “relief in breathing.”
Angle assigns the “proportion” of Class III occurrence among 1000 malocclusions: division—bilaterally mesial (34/1000); subdivision—unilaterally mesial (8/1000). The incidence thus amounts to 4.2%, nearly the same as the reported incidence in American children in the 1970s (3%, Kelly, Sanchez & Van Kirk 1973) and 1990s (3.2%, Brunelle, Bhat & Lipton 1996; Proffit, Fields & Moray 1998; Proffit 2000). Higher incidences are reported among Asian populations.
General Characteristics
Angle’s observation on incisor retroclination preceded the age of cephalometrics, which demonstrated a corresponding proclination of maxillary incisors, reflecting dentoalveolar compensation by maxillary and mandibular incisors to an underlying skeletal discrepancy characterized by maxillary retrognathism, mandibular prognathism, or both.
Sometimes the incisal compensation is expressed as an incisal edge-to-edge relation rather than a crossbite, yet it is compatible with molar mesioclusion and an underlying Class III pattern. Variations and gradients of severity include the complex differentiation between macrognathism and prognathism in reference to the skeletal bases, and between alveolar and skeletal bases. Therefore, the mosaic arrangement of the “parts” requires careful diagnosis, although the evidence for tailoring treatment modalities to specific diagnoses is not fully available. This shortcoming is related in great part to the limited approaches to treatment that are available, and to the inability of treatment to affect the cranial and facial skeletal parts, compared with the relatively easier handling of the dental components.
In a retrospective study of mesioclusion that included a comparison group of Class II malocclusions, we investigated the underlying craniofacial morphology. Critical conclusions included (Figure 16.2) the following:
1. The prevalence of maxillary retrognathism is greater than previously thought, because its occurrence is more severe (SNA = 78.04 degrees ± 4.04 degrees; norm = 82 degrees ± 2 degrees) than that of mandibular prognathism (SNB = 81 degrees ± 2 degrees; norm = 80 degrees ± 2 degrees), the angles SNA and SNB yielding differences of about 4 degrees and 1 degree from the respective norms (a worked comparison follows this list).
2. A more cephalad position of the anterior cranial base is underscored by a higher position of sella relative to nasion, concomitant with the previously described decrease in the saddle angle (nasion-sella-basion).
3. A previously unreported superior-posterior tip of the palatal plane.
4. Possibility of environmental induction of mesioclusion: an anterior crossbite, not necessarily related to genetic factors but sustained by mandibular forward positioning caused by occlusal interferences, habits, or to improve breathing, may induce forces that produce maxillary retrognathism that otherwise would not exist and affect the palatal tip through the occlusion (in a manner similar to the action of a headgear; Figure 16.3).
(Vertical changes in mandible are not represented. Data from Efstratiadis et al. 2005.)
5. The thickness of the soft tissue envelope, which may differ from one region to another, can compensate or exacerbate the regional diagnosis.
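A worked comparison of the deviations quoted in conclusion 1 makes the asymmetry explicit (our arithmetic, using the reported means): ΔSNA = 78.04° − 82° ≈ −4°, whereas ΔSNB = 81° − 80° = +1°. The maxillary deviation from the norm is thus about four times the mandibular one, which is why maxillary retrognathism dominates the skeletal picture in this sample.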
Three-dimensional imaging of the craniofacial system has not yet generated new knowledge of Class III morphology to enable more accurate diagnosis, the aim of which is to formulate a corresponding individualized treatment approach.
Pseudo Class III
Pseudo Class III is also called functional crossbite, mandibular displacement, or positional malrelationship (Moyers 1988) because the mandible shifts forward after initial interference, often on canines or more posterior teeth. Compared with Class I malocclusion, pseudo Class III is characterized by retroclined maxillary incisors, retrusive upper lip, decreased midface length, and increased maxillary-mandibular difference (Rabie & Gu 2000; Ngan 2006). Because of closer to normal skeletal features, maxillary and mandibular incisors do not show the typical compensatory inclinations, facilitating resolution of the malocclusion by orthodontic means. In “true” Class III, functional forward shift may coexist with the underlying skeletal discrepancy.
Diagnostic Considerations
Available evidence on the development of Class III suggests reassessing the references used for cephalometric diagnosis. Maxillary and mandibular positions are commonly gauged by the angles SNA and SNB. The position of sella can induce misinterpretation of data if not corrected to the natural head position “true” horizontal. A high sella relative to nasion would yield smaller SNA and SNB values when corrected, thus less maxillary and mandibular prognathism; a low position of sella would have the opposite consequences (Figure 16.4). Regarding SNB specifically, the deeper the overbite or the more anterior the functional positioning of the mandible, the greater the SNB angle, and thus the inference of more mandibular prognathism. Accurate appreciation of SNB would require “bite opening” or “rotating” the image of the mandible on the tracing to a near-normal overbite (20%–30%). Such an exercise is further rationalized with anterior (functional) mandibular displacement, particularly in the diagnosis of pseudo Class III. These issues are not accounted for in research on Class III malocclusion.
(Ghafari 2006, Ghafari et al. 2007a).
Etiology of Class III Malocclusion
Class III is often associated with the image of mandibular prognathism (Figure 16.5). The role of genetics and environment in establishing the size and position of the craniofacial components is complex and not yet fully unraveled.
Environmental factors are increasingly recognized as potential determinants of at least individual malocclusions. King, Harris & Tolley (1993) reported the potential environmental influence on and lower genetic components of craniofacial size and form, a finding that would suggest the possibility of minimizing or avoiding the full expression of Class III, despite its ranking as the first most likely deformity to run in families (Proffit 2007). Our research findings and enunciation of the concept of developmental or “intragrowth” orthopedics support the effect of the environment (sustained early anterior crossbite) potentially affecting the position of the maxilla (Figure 16.6; Ghafari 2004). Development and/or severity of the maxillary retrognathism may be generated by functional forces initiated by mandibular anterior position and transferred through the occlusion, particularly in instances of deep overbite maintained during a long period of growth (Ghafari & Haddad 2005).
Nasal obstruction also induces a forward position of the mandible that helps clear the airway and reduce mouth breathing (Macari & Ghafari 2006). This theory was Angle’s only explanation for the etiology of Class III, beginning before or around the time of eruption of the first permanent molars and “always associated at this age with enlarged tonsils.” Angle further stated his “conviction” that other etiologic factors are of minor importance and that early treatment of the “throat” and correction of the molar occlusion and its retention for a few months would eliminate the problem. Angle also assigned Class II development at least partially to mouth breathing (1907). Angle’s hypotheses are not supported by definitive evidence, yet the suggested mechanism should be valid for at least a proportion of patients.
Based on our research findings, we postulate that a Class III malocclusion underlain by mandibular prognathism is mainly genetic in nature, whereas a mesioclusion associated with maxillary retrognathism is the result of environmental induction (which may include the restraining effect of an inherited macrognathic and/or prognathic mandible as a primary or secondary factor inhibiting maxillary forward growth). This premise implies the proper selection of “true” Class III subjects when researching the genetics of Class III. Such studies should not include maxillary retrognathism, at least not without the presence of a prognathic mandible with mandibular size increased by one standard deviation or more.
Foundations and Variations of Treatment
“[Class III malocclusions are] by far the worst type of deformities the orthodontist is called upon to treat, and when they have progressed until the age of sixteen or eighteen, or after the jaws have become developed in accordance with the malpositions of the teeth, the case has usually passed beyond the boundaries of malocclusion only, and into the realm of bone deformities, for which, with our present knowledge, there is little possibility of affording relief through orthodontic operations.”
Edward Hartley Angle, 1907
How different is the state of the art of treatment of Class III malocclusions over a century after this writing by the definer of the deformity? As the statement implies, treatment is considered differently in growing and adult patients.
Early Treatment
Treatment Approach
With increasing emphasis on repositioning the maxilla rather than the mandible in orthognathic surgery (Proffit, Turvey & Phillips 1996), the focus on maxillary orthopedics in the early treatment of the malocclusion is not surprising. Chin cups to restrain mandibular growth have largely been replaced by face masks (reverse headgears) to protract the maxilla, often with rapid palatal expansion. The rationale for expansion is to minimize the resistance of the bony buttresses around the maxilla, splint the maxillary teeth, and correct a posterior crossbite when present. In the primary dentition, fixed palatal expanders or other appliances, such as the Porter arch and quad helix, may be used to achieve expansion. When no expansion is needed, passive appliances (e.g., a Nance holding arch) are used.
An important clinical observation is warranted: unless a posterior crossbite exists, palatal expansion is not needed for transverse occlusal correction. An anticipated forward positioning of the maxillary dentition relative to the mandibular teeth produces an increased maxillary width (Figure 16.7). Therefore, many practitioners use the face mask without palatal expansion. However, long-term studies are based on the combined use of these appliances and on treatment in the mixed dentition. Most treatment regimens in the primary dentition are based on expert opinion or case reports or series.
Existing Evidence
Because of the need for long-term evaluation of early treatment and a lower incidence of Class III malocclusion within Caucasian populations, long-term studies of Class III treatment are limited. Research including the highest level of evidence indicates the following conclusions:
Treatment Timing: Treat Early for More Effect
The available evidence emphasizes the efficiency of early treatment, however, more because of its potential effect relative to late treatment. Based on bone age assessment, Suda et al. (2000) determined “more pronounced” treatment effects in younger children. In a meta-analysis, Kim et al. (1999) concluded that protraction face mask therapy is effective in growing patients but to a lesser degree in those older than age 10 years.
Accordingly, treatment in the primary dentition should be more efficient, and some supporting authors have hypothesized potential effect on the cant of the cranial base or saddle angle (Delaire 1980; Deshayes 2006) that would enhance more posterior positioning of the glenoid fossa and thus less prognathic mandible. Advocating optimal treatment timing for face mask therapy in the deciduous or early mixed dentition, Ngan (2006) cites the benefits to include favorable sutural response, elimination of any functional discrepancy between centric occlusion and centric relation, and improvement in facial profile and self-esteem.
Another reason for early treatment that requires focused research is its potential to reduce the worsening of the developing dentofacial dysmorphology. Given the increased severity of maxillary retrognathism when sustained by a forward mandibular position, the sooner the anterior crossbite is eliminated, the closer to normal the development of the dentofacial complex, especially maxillary development, will be. Consequently, future treatment may be reduced to orthodontic treatment only (tooth alignment or compensation of dental inclination over bone) or to limited orthognathic surgery (only mandibular surgery or, if both jaws are involved, a one-piece instead of a multiple-piece maxillary surgery).
Treatment Modality: Palatal Expansion May Not Be Required and Chin Cup Success Is Questionable
Palatal expansion is often indicated, particularly in the presence of maxillary constriction and crowding. In a meta-analysis, Kim et al. (1999) reported similar protraction with or without expansion, though the average treatment duration was longer without expansion. While protraction combined with an initial period of expansion was thought to provide more significant skeletal effects (Kim et al. 1999), such as greater forward movement of point A (Baik 1995), the need for expansion absent a transverse discrepancy (skeletal/dental crossbite) was not supported by the results of a prospective randomized clinical trial (Vaughn et al. 2005). The authors evaluated face mask treatment with and without palatal expansion in children at mean initial ages of 7.4 and 8.1 years, respectively, compared with an observation group (6.6 years). The treatment modalities produced equivalent dentofacial changes, and other authors have concurred (Tortop, Keykubat & Yuksel 2007). Varied modalities have been advanced, comprising bonded splints with bite blocks for expansion or the addition of adjunct appliances, further demonstrating the variability of approaches to the same strategy. No evidence is available that any of them is more or less effective than the basic strategy.
Our observations regarding palatal expansion and face mask in the primary dentition, with overcorrection and removable retention, include the possibility of emergence of the permanent incisors in retroclination, although skeletal changes seem more stable (Figure 16.8). Research is needed on this particular regimen. Thus, the use of a Porter arch or quad helix appliance might be sufficient to correct the anterior crossbite in the primary dentition, with a possible combination of palatal expansion with face mask in the mixed dentition.
A chin cup was initially thought to reduce the growth of a prognathic mandible. Although animal studies indicated the possibility of altering condylar growth (Petrovic, Stutzmann & Oudet 1975; Copray, Jansen & Duterloo 1985; Vardimon et al. 1994), clinical research reveals initial skeletal changes that are rarely maintained during pubertal growth (Sugawara & Mitani 1993). The face mask includes a chin cup component (Figure 16.8). The separate effect of the chin cup versus maxillary protraction is not known and would be difficult to determine. The chin cup may have an additive influence, maximizing the effect of the protraction and/or the mandibular rotation.
Mandibular headgear has been used in Class III treatment, followed by fixed appliances, with long-term improvement that contrasted with the lack of self-improvement in corresponding controls (Baccetti et al. 2009). The results show compensatory changes that might be achieved with fixed appliances that distalize the mandibular teeth. More research is required to explore such approaches.
Treatment Is Better Than No Treatment
Pangrazio-Kulbersh et al. (2007) reported that continued anterior growth after removal of the protraction appliance was greater than in control subjects. In a cohort study, they compared protraction treatment with surgical correction 7 years posttreatment, along with a corresponding control group. The authors found a “striking” general similarity between the protraction and surgical groups, suggesting that appropriate orthodontic treatment may avoid surgery.
Overtreatment Is Better for Stability of Results
“Aggressive overcorrection of Class III appears advisable.” Westwood et al. (2003) made this conclusion from a cohort study of the long-term effects of Class III treatment with rapid maxillary expansion and face-mask therapy followed by a second phase of treatment with preadjusted edgewise fixed appliances (average of 27 months). Between both treatment phases, patients wore a removable maxillary “stabilization plate.” In a few instances, phase 2 followed phase 1 immediately. The authors evaluated the stability of maxillary protraction in 34 patients at pretreatment (average age: 8 years, 3 months) and posttreatment (14 years, 10 months) compared with matched untreated controls. The treated patients had a more favorable skeletal change than control subjects in whom the mesioclusion was maintained (Figure 16.9). However, a close evaluation of the published illustrations reveals compensatory proclination of maxillary incisors and maintenance of retroclination of mandibular incisors, supported by the reported use of Class III elastics with fixed appliances.
Adapted from Westwood et al. (2003).
More research is needed that accounts for the various variables of a complex, multifactorial issue, namely age, nature of correction (skeletal vs. dentoalveolar), residual maxillary growth, and mandibular growth. The Cochrane database systematic review remains at the level of a protocol (Harrison et al. 2002).
The Difficulty of Forecasting Growth and the Dilemma of Overcorrection
The major problem with early treatment of mesioclusion with underlying skeletal discrepancy is the inability to precisely forecast its development. The orthodontist tries to anticipate the growth spurt to minimize its effect (e.g., favoring maxillary growth or mandibular rotation to counteract additional mandibular growth). Unlike Class II malocclusions, in which mandibular growth helps treatment, further mandibular growth in Class III is not balanced by concomitant maxillary growth. The maxilla grows at a slower rate than the mandible and ceases forward growth nearly 2 years before the mandible (Cortella, Shofer & Ghafari 1997).
The combination of maxillary expansion and face mask is advocated with overcorrection, i.e., increase of overjet, which results from both the maxillary protraction (with a side effect of counterclockwise rotation) and mandibular clockwise rotation (Figure 16.10). Given the inaccuracy of growth forecasting, the amount of overjet overcorrection cannot be determined precisely, leading to one of these possibilities:
1. Mandibular forward growth equals the amount of overcorrection; then, the present compensatory incisor angulations are maintained; or
2. The mandible grows less than the amount of overcorrection; thus the mandibular incisors are proclined for the residual overjet correction.
In addition, the results must be retained and reevaluated throughout the period of growth, leading to longer treatment, particularly if started in the primary dentition and revisited in the mixed and later the permanent dentitions.
Class III malocclusions with prognathic, particularly macrognathic mandibles often require a surgical correction that is delayed until after or toward the end of mandibular growth (skeletal ages of 16–18 years in females, 18–20 years in males). Yet, early treatment may reduce the severity of the malocclusion by minimizing associated problems such as crowding of the maxillary arch. Left uncorrected, this problem may require later tooth extractions (usually premolars) that contract the maxillary arch, possibly necessitating maxillary surgical widening.
In many instances, parents pressure the orthodontist to start early correction of a noticeable mesioclusion. Early treatment becomes questionable when the patient ends up undergoing surgery at an older age. All the diagnostic and therapeutic components must be weighed carefully in the individual patient. Research should determine valid long-term options.
Adult Treatment
Orthodontic Options
Randomized studies are not available for nonsurgical (camouflage) treatment of mesioclusion with skeletal discrepancy in nongrowing or adult patients. However, case reports indicate that surgery may be avoided with a combination of mandibular rotation and compensatory inclination of teeth over basal bones. To this basic rationale may be added the extraction of premolars for further incisor retroclination or the distal movement of mandibular molars, possibly requiring the extraction of third molars, followed by the retraction of the more anterior teeth (Figure 16.11). This approach contributes to the correction of posterior transverse maxillary-mandibular relations. | https://pocketdentistry.com/16-class-iii-malocclusion-the-evidence-on-diagnosis-and-treatment/ |
Related literature {#sec1}
==================
For the synthesis, synthetic uses and properties of 4-(*N*,*N*-dimethylaminomethylene)-2-aryl-2-oxazolin-5-one derivatives, see: Singh & Singh (1994[@bb9], 2008[@bb10]); Takahashi & Izawa (2005[@bb13]); Singh *et al.* (1994[@bb11]); Kmetic & Stanovnik (1995[@bb4]). For the Vilsmeier--Haack reaction, see: Meth-Cohn & Stanforth (1991[@bb5]). For related structures, see: Vasuki *et al.* (2002[@bb14]); Vijayalakshmi *et al.* (1998[@bb15]). For the treatment of twinned diffraction data, see: Spek (2009[@bb12]).
Experimental {#sec2}
============
{#sec2.1}
### Crystal data {#sec2.1.1}
C~12~H~11~N~3~O~4~, *M*~r~ = 261.24, Monoclinic, *a* = 9.5313 (2) Å, *b* = 9.5204 (3) Å, *c* = 13.0349 (4) Å, β = 106.661 (2)°, *V* = 1133.15 (6) Å^3^, *Z* = 4, Mo *K*α radiation, μ = 0.12 mm^−1^, *T* = 120 K, 0.42 × 0.38 × 0.22 mm
### Data collection {#sec2.1.2}
Nonius KappaCCD area-detector diffractometer; absorption correction: multi-scan (*SADABS*; Sheldrick, 2007[@bb7]), *T*~min~ = 0.661, *T*~max~ = 1.000; 14210 measured reflections; 2581 independent reflections; 2030 reflections with *I* \> 2σ(*I*); *R*~int~ = 0.071
### Refinement {#sec2.1.3}
*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.065; *wR*(*F*^2^) = 0.220; *S* = 1.19; 2581 reflections; 176 parameters; H-atom parameters constrained; Δρ~max~ = 0.33 e Å^−3^; Δρ~min~ = −0.30 e Å^−3^
{#d5e520}
Data collection: *COLLECT* (Hooft, 1998[@bb3]); cell refinement: *DENZO* (Otwinowski & Minor, 1997[@bb6]) and *COLLECT*; data reduction: *DENZO* and *COLLECT*; program(s) used to solve structure: *SHELXS97* (Sheldrick, 2008[@bb8]); program(s) used to refine structure: *SHELXL97* (Sheldrick, 2008[@bb8]); molecular graphics: *ORTEP-3* (Farrugia, 1997[@bb2]) and *DIAMOND* (Brandenburg, 2006[@bb1]); software used to prepare material for publication: *publCIF* (Westrip, 2010[@bb16]).
Supplementary Material
======================
Crystal structure: contains datablocks global, I. DOI: [10.1107/S1600536810018635/ez2209sup1.cif](http://dx.doi.org/10.1107/S1600536810018635/ez2209sup1.cif)
Structure factors: contains datablocks I. DOI: [10.1107/S1600536810018635/ez2209Isup2.hkl](http://dx.doi.org/10.1107/S1600536810018635/ez2209Isup2.hkl)
Additional supplementary materials: [crystallographic information](http://scripts.iucr.org/cgi-bin/sendsupfiles?ez2209&file=ez2209sup0.html&mime=text/html); [3D view](http://scripts.iucr.org/cgi-bin/sendcif?ez2209sup1&Qmime=cif); [checkCIF report](http://scripts.iucr.org/cgi-bin/paper?ez2209&checkcif=yes)
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: [EZ2209](http://scripts.iucr.org/cgi-bin/sendsup?ez2209)).
The use of the EPSRC X-ray crystallographic service at the University of Southampton, England, and the valuable assistance of the staff there is gratefully acknowledged. JLW acknowledges support from CAPES (Brazil).
Comment
=======
The preparations of 4-(*N*,*N*-dimethylaminomethylene)-2-aryl-2-oxazolin-5-one derivatives have been reported using the Vilsmeier-Haack reactions (Meth-Cohn & Stanforth, 1991) of acylaminoacetanilides with POCl~3~ and DMF (Singh & Singh, 1994; Takahashi & Izawa, 2005; Singh *et al.*, 1994; Kmetic & Stanovnik, 1995). The compounds have been used as precursors of 4-hydroxymethylene-2-aryl-2-oxazolin-5-one, which have been tested for anti-bacterial activities (Singh & Singh, 2008). The crystal structures of 4-(*N*,*N*-dimethylaminomethylene)-2-phenyl-2-oxazolin-5-one (Vasuki *et al.*, 2002) and 4-(*N*,*N*-dimethylaminomethylene)-2-(2-nitrophenyl)-2-oxazolin-5-one (Vijayalakshmi *et al.*, 1998) have been reported. We now report the crystal structure of 4-(*N*,*N*-dimethylaminomethylene)-2-(4-nitrophenyl)-2-oxazolin-5-one, (I).
The molecule of (I), Fig. 1, is essentially planar with the maximum deviations from the least-squares plane through all non-hydrogen atoms being 0.157 (4) Å for atom C5 and -0.158 (3) for atom O4; the r.m.s. = 0.068 Å. The sequence of C1--N1, N1--C2, C2--C4, and C4--N2 bond distances of 1.289 (4), 1.398 (4), 1.382 (5), and 1.317 (4) Å, respectively, indicate substantial delocalisation of π-electron density over these atoms. The geometric parameters in (I) match closely those found in the parent compound, namely 4-(*N*,*N*-dimethylaminomethylene)-2-phenyl-2-oxazolin-5-one (Vasuki *et al.*, 2002) and in the 2-nitro derivative (Vijayalakshmi *et al.*, 1998).
The crystal packing is dominated by C--H···O and π--π interactions; the N1 atom of the oxazolin-5-one is involved in an intramolecular C--H···N contact that shields this atom from forming intermolecular interactions, Table 1. Columns of molecules orientated along the *b* axis are stabilised by π--π contacts with the shortest of these occurring between centrosymmetrically related benzene rings \[ring centroid(C7--C12)···ring centroid(C7--C12)^i^ = 3.6312 (16) Å for *i*: 1-*x*, 1-*y*, 2-*z*\]. The benzene rings also form π--π interactions with the oxazolin-5-one rings \[ring centroid(C7--C12)···ring centroid(O1,N1,C1--C3)^ii^ = 3.7645 (17) Å for *ii*: 1-*x*, -*y*, 2-*z*\] to form a supramolecular chain, Fig. 2. The chains are connected by a series of C--H···O contacts, Table 1, to form a 3-D network, Fig. 3.
Experimental {#experimental}
============
The title compound was prepared as per published procedures (Singh & Singh, 1994; Singh *et al.*, 1994). Physical properties were in agreement with published data. The crystal used in the structure determination was grown from EtOH solution.
Refinement {#refinement}
==========
The C-bound H atoms were geometrically placed (C--H = 0.95--0.98 Å) and refined as riding with *U*~iso~(H) = 1.2--1.5*U*~eq~(C). For the treatment of twinned diffraction data, see: Spek (2009).
Figures
=======
![The molecular structure of (I) showing the atom-labelling scheme and displacement ellipsoids at the 50% probability level.](e-66-o1450-fig1){#Fap1}
![A view of the supramolecular chain aligned along the b axis in (I) sustained by π--π interactions (purple dashed lines). Colour code: O, red; N, blue; C, grey; and H, green.](e-66-o1450-fig2){#Fap2}
![View of the connections between chains in (I) with the C--H···O interactions shown as orange dashed lines. Colour code: O, red; N, blue; C, grey; and H, green.](e-66-o1450-fig3){#Fap3}
Crystal data {#tablewrapcrystaldatalong}
============
------------------------- ---------------------------------------
C~12~H~11~N~3~O~4~ *F*(000) = 544
*M~r~* = 261.24 *D*~x~ = 1.531 Mg m^−3^
Monoclinic, *P*2~1~/*c* Mo *K*α radiation, λ = 0.71073 Å
Hall symbol: -P 2ybc Cell parameters from 2714 reflections
*a* = 9.5313 (2) Å θ = 2.9--27.5°
*b* = 9.5204 (3) Å µ = 0.12 mm^−1^
*c* = 13.0349 (4) Å *T* = 120 K
β = 106.661 (2)° Block, red
*V* = 1133.15 (6) Å^3^ 0.42 × 0.38 × 0.22 mm
*Z* = 4
------------------------- ---------------------------------------
Data collection {#tablewrapdatacollectionlong}
===============
--------------------------------------------------------------- --------------------------------------
Nonius KappaCCD area-detector diffractometer 2581 independent reflections
Radiation source: Enraf Nonius FR591 rotating anode 2030 reflections with *I* \> 2σ(*I*)
10 cm confocal mirrors *R*~int~ = 0.071
Detector resolution: 9.091 pixels mm^-1^ θ~max~ = 27.4°, θ~min~ = 3.1°
φ and ω scans *h* = −12→12
Absorption correction: multi-scan (*SADABS*; Sheldrick, 2007) *k* = −12→11
*T*~min~ = 0.661, *T*~max~ = 1.000 *l* = −16→16
14210 measured reflections
--------------------------------------------------------------- --------------------------------------
Refinement {#tablewraprefinementdatalong}
==========
---------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
Refinement on *F*^2^ Secondary atom site location: difference Fourier map
Least-squares matrix: full Hydrogen site location: inferred from neighbouring sites
*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.065 H-atom parameters constrained
*wR*(*F*^2^) = 0.220 *w* = 1/\[σ^2^(*F*~o~^2^) + (0.0936*P*)^2^ + 1.6594*P*\] where *P* = (*F*~o~^2^ + 2*F*~c~^2^)/3
*S* = 1.19 (Δ/σ)~max~ = 0.001
2581 reflections Δρ~max~ = 0.33 e Å^−3^
176 parameters Δρ~min~ = −0.30 e Å^−3^
0 restraints Extinction correction: *SHELXL97* (Sheldrick, 2008), Fc^\*^=kFc\[1+0.001xFc^2^λ^3^/sin(2θ)\]^-1/4^
Primary atom site location: structure-invariant direct methods Extinction coefficient: 0.018 (5)
---------------------------------------------------------------- ----------------------------------------------------------------------------------------------------
Special details {#specialdetails}
===============
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Refinement. Refinement of *F*^2^ against ALL reflections. The weighted *R*-factor wR and goodness of fit *S* are based on *F*^2^, conventional *R*-factors *R* are based on *F*, with *F* set to zero for negative *F*^2^. The threshold expression of *F*^2^ \> 2σ(*F*^2^) is used only for calculating *R*-factors(gt) etc. and is not relevant to the choice of reflections for refinement. *R*-factors based on *F*^2^ are statistically about twice as large as those based on *F*, and *R*- factors based on ALL data will be even larger.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å^2^) {#tablewrapcoords}
==================================================================================================
----- ------------- ------------ -------------- -------------------- --
*x* *y* *z* *U*~iso~\*/*U*~eq~
O1 0.3806 (2) 0.5986 (2) 0.33548 (17) 0.0197 (5)
O2 0.2556 (3) 0.4481 (2) 0.20576 (17) 0.0239 (6)
O3 0.8419 (3) 1.1001 (3) 0.6327 (2) 0.0324 (6)
O4 0.7443 (3) 1.0746 (3) 0.76142 (19) 0.0307 (6)
N1 0.2875 (3) 0.5457 (3) 0.4711 (2) 0.0180 (6)
N2 0.0528 (3) 0.3059 (3) 0.4441 (2) 0.0203 (6)
N3 0.7556 (3) 1.0430 (3) 0.6733 (2) 0.0209 (6)
C1 0.3786 (3) 0.6220 (3) 0.4393 (2) 0.0166 (6)
C2 0.2186 (3) 0.4617 (3) 0.3831 (2) 0.0179 (6)
C3 0.2761 (3) 0.4921 (3) 0.2958 (2) 0.0195 (7)
C4 0.1130 (3) 0.3590 (3) 0.3735 (2) 0.0189 (7)
H4 0.0778 0.3199 0.3038 0.023\*
C5 0.0939 (4) 0.3462 (4) 0.5569 (3) 0.0237 (7)
H5A 0.1378 0.2655 0.6012 0.036\*
H5B 0.0066 0.3768 0.5761 0.036\*
H5C 0.1649 0.4233 0.5691 0.036\*
C6 −0.0548 (4) 0.1947 (4) 0.4138 (3) 0.0284 (8)
H6A −0.0780 0.1778 0.3366 0.043\*
H6B −0.1440 0.2223 0.4317 0.043\*
H6C −0.0152 0.1086 0.4526 0.043\*
C7 0.4765 (3) 0.7290 (3) 0.4994 (2) 0.0168 (6)
C8 0.4797 (3) 0.7571 (3) 0.6051 (2) 0.0184 (6)
H8 0.4184 0.7056 0.6375 0.022\*
C9 0.5715 (3) 0.8590 (3) 0.6624 (2) 0.0185 (6)
H9 0.5735 0.8794 0.7342 0.022\*
C10 0.6608 (3) 0.9314 (3) 0.6135 (2) 0.0178 (6)
C11 0.6608 (3) 0.9058 (3) 0.5089 (2) 0.0173 (6)
H11 0.7231 0.9571 0.4772 0.021\*
C12 0.5676 (3) 0.8035 (3) 0.4519 (2) 0.0175 (6)
H12 0.5655 0.7838 0.3800 0.021\*
----- ------------- ------------ -------------- -------------------- --
Atomic displacement parameters (Å^2^) {#tablewrapadps}
=====================================
----- ------------- ------------- ------------- -------------- ------------- --------------
*U*^11^ *U*^22^ *U*^33^ *U*^12^ *U*^13^ *U*^23^
O1 0.0239 (12) 0.0203 (11) 0.0177 (11) −0.0022 (9) 0.0105 (9) −0.0010 (8)
O2 0.0307 (13) 0.0242 (12) 0.0187 (11) −0.0017 (10) 0.0100 (10) −0.0029 (9)
O3 0.0323 (14) 0.0382 (15) 0.0291 (13) −0.0147 (12) 0.0127 (11) −0.0044 (11)
O4 0.0397 (15) 0.0328 (14) 0.0219 (12) −0.0061 (12) 0.0125 (11) −0.0075 (10)
N1 0.0195 (13) 0.0171 (12) 0.0183 (13) 0.0003 (10) 0.0067 (10) 0.0004 (10)
N2 0.0209 (13) 0.0163 (13) 0.0248 (13) 0.0015 (11) 0.0082 (11) 0.0025 (10)
N3 0.0219 (13) 0.0210 (13) 0.0195 (13) 0.0017 (12) 0.0056 (11) 0.0024 (11)
C1 0.0193 (14) 0.0179 (14) 0.0143 (13) 0.0046 (12) 0.0073 (11) 0.0029 (11)
C2 0.0198 (15) 0.0172 (14) 0.0173 (14) 0.0032 (12) 0.0065 (12) 0.0009 (11)
C3 0.0218 (15) 0.0165 (14) 0.0207 (15) 0.0022 (12) 0.0068 (12) 0.0030 (12)
C4 0.0222 (16) 0.0156 (14) 0.0198 (15) 0.0042 (12) 0.0076 (12) 0.0024 (11)
C5 0.0270 (17) 0.0235 (16) 0.0246 (16) 0.0024 (14) 0.0136 (14) 0.0033 (13)
C6 0.0247 (17) 0.0210 (16) 0.039 (2) −0.0050 (14) 0.0077 (15) 0.0059 (14)
C7 0.0182 (15) 0.0145 (14) 0.0185 (14) 0.0035 (12) 0.0062 (12) 0.0022 (11)
C8 0.0201 (15) 0.0188 (15) 0.0184 (14) 0.0018 (12) 0.0089 (12) 0.0040 (12)
C9 0.0215 (15) 0.0193 (15) 0.0158 (13) 0.0052 (13) 0.0070 (12) 0.0030 (12)
C10 0.0174 (14) 0.0152 (14) 0.0198 (15) 0.0025 (12) 0.0036 (12) −0.0005 (11)
C11 0.0180 (14) 0.0175 (14) 0.0178 (14) 0.0023 (12) 0.0073 (11) 0.0035 (11)
C12 0.0193 (14) 0.0184 (14) 0.0169 (14) 0.0029 (12) 0.0086 (12) 0.0014 (11)
----- ------------- ------------- ------------- -------------- ------------- --------------
Geometric parameters (Å, °) {#tablewrapgeomlong}
===========================
------------------- ------------ ---------------------- ------------
O1---C1 1.377 (3) C5---H5B 0.9800
O1---C3 1.411 (4) C5---H5C 0.9800
O2---C3 1.209 (4) C6---H6A 0.9800
O3---N3 1.226 (4) C6---H6B 0.9800
O4---N3 1.222 (4) C6---H6C 0.9800
N1---C1 1.289 (4) C7---C8 1.394 (4)
N1---C2 1.398 (4) C7---C12 1.396 (4)
N2---C4 1.317 (4) C8---C9 1.375 (4)
N2---C6 1.448 (4) C8---H8 0.9500
N2---C5 1.460 (4) C9---C10 1.385 (4)
N3---C10 1.466 (4) C9---H9 0.9500
C1---C7 1.450 (4) C10---C11 1.385 (4)
C2---C4 1.382 (5) C11---C12 1.383 (4)
C2---C3 1.428 (4) C11---H11 0.9500
C4---H4 0.9500 C12---H12 0.9500
C5---H5A 0.9800
C1---O1---C3 105.6 (2) H5B---C5---H5C 109.5
C1---N1---C2 105.0 (2) N2---C6---H6A 109.5
C4---N2---C6 120.5 (3) N2---C6---H6B 109.5
C4---N2---C5 123.9 (3) H6A---C6---H6B 109.5
C6---N2---C5 115.5 (3) N2---C6---H6C 109.5
O4---N3---O3 123.2 (3) H6A---C6---H6C 109.5
O4---N3---C10 118.1 (3) H6B---C6---H6C 109.5
O3---N3---C10 118.7 (3) C8---C7---C12 120.0 (3)
N1---C1---O1 115.2 (3) C8---C7---C1 119.8 (3)
N1---C1---C7 127.6 (3) C12---C7---C1 120.2 (3)
O1---C1---C7 117.2 (3) C9---C8---C7 120.2 (3)
C4---C2---N1 129.6 (3) C9---C8---H8 119.9
C4---C2---C3 120.5 (3) C7---C8---H8 119.9
N1---C2---C3 109.9 (3) C8---C9---C10 118.7 (3)
O2---C3---O1 120.4 (3) C8---C9---H9 120.7
O2---C3---C2 135.4 (3) C10---C9---H9 120.7
O1---C3---C2 104.3 (2) C11---C10---C9 122.7 (3)
N2---C4---C2 131.3 (3) C11---C10---N3 118.5 (3)
N2---C4---H4 114.4 C9---C10---N3 118.8 (3)
C2---C4---H4 114.4 C12---C11---C10 118.1 (3)
N2---C5---H5A 109.5 C12---C11---H11 120.9
N2---C5---H5B 109.5 C10---C11---H11 120.9
H5A---C5---H5B 109.5 C11---C12---C7 120.4 (3)
N2---C5---H5C 109.5 C11---C12---H12 119.8
H5A---C5---H5C 109.5 C7---C12---H12 119.8
C2---N1---C1---O1 −0.3 (3) O1---C1---C7---C8 −179.8 (3)
C2---N1---C1---C7 179.1 (3) N1---C1---C7---C12 −179.3 (3)
C3---O1---C1---N1 −0.1 (3) O1---C1---C7---C12 0.1 (4)
C3---O1---C1---C7 −179.5 (3) C12---C7---C8---C9 0.6 (5)
C1---N1---C2---C4 178.8 (3) C1---C7---C8---C9 −179.5 (3)
C1---N1---C2---C3 0.5 (3) C7---C8---C9---C10 −0.7 (5)
C1---O1---C3---O2 −178.9 (3) C8---C9---C10---C11 0.4 (5)
C1---O1---C3---C2 0.4 (3) C8---C9---C10---N3 178.2 (3)
C4---C2---C3---O2 0.1 (6) O4---N3---C10---C11 172.7 (3)
N1---C2---C3---O2 178.6 (4) O3---N3---C10---C11 −7.1 (4)
C4---C2---C3---O1 −179.1 (3) O4---N3---C10---C9 −5.1 (4)
N1---C2---C3---O1 −0.6 (3) O3---N3---C10---C9 175.0 (3)
C6---N2---C4---C2 −178.4 (3) C9---C10---C11---C12 −0.1 (5)
C5---N2---C4---C2 −2.4 (5) N3---C10---C11---C12 −177.9 (3)
N1---C2---C4---N2 −3.9 (6) C10---C11---C12---C7 0.1 (4)
C3---C2---C4---N2 174.2 (3) C8---C7---C12---C11 −0.3 (4)
N1---C1---C7---C8 0.9 (5) C1---C7---C12---C11 179.8 (3)
------------------- ------------ ---------------------- ------------
Hydrogen-bond geometry (Å, °) {#tablewraphbondslong}
=============================
-------------------- --------- --------- ----------- ---------------
*D*---H···*A* *D*---H H···*A* *D*···*A* *D*---H···*A*
C5---H5c···N1 0.98 2.28 3.074 (5) 137
C5---H5a···O2^i^ 0.98 2.53 3.504 (4) 177
C5---H5c···O4^ii^ 0.98 2.57 3.259 (5) 127
C9---H9···O1^iii^ 0.95 2.56 3.304 (4) 135
C11---H11···O2^iv^ 0.95 2.45 3.144 (4) 130
-------------------- --------- --------- ----------- ---------------
Symmetry codes: (i) *x*, −*y*+1/2, *z*+1/2; (ii) −*x*+1, *y*−1/2, −*z*+3/2; (iii) *x*, −*y*+3/2, *z*+1/2; (iv) −*x*+1, *y*+1/2, −*z*+1/2.
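As an illustrative check (not part of the original paper), the tabulated donor-acceptor distances can be reproduced from the fractional coordinates, the monoclinic cell parameters and the symmetry codes listed above. A minimal Python sketch for the C5···O2^i^ contact, assuming numpy is available:

```python
import numpy as np

# Monoclinic cell from the Crystal data table
a, b, c = 9.5313, 9.5204, 13.0349
beta = np.radians(106.661)

# Orthogonalization matrix (b-unique monoclinic setting)
M = np.array([[a, 0.0, c * np.cos(beta)],
              [0.0, b, 0.0],
              [0.0, 0.0, c * np.sin(beta)]])

def cartesian(frac):
    """Convert fractional coordinates to Cartesian coordinates (angstroms)."""
    return M @ np.asarray(frac, dtype=float)

c5 = [0.0939, 0.3462, 0.5569]             # donor C5
o2 = [0.2556, 0.4481, 0.20576]            # acceptor O2
o2_i = [o2[0], 0.5 - o2[1], o2[2] + 0.5]  # symmetry code (i): x, -y+1/2, z+1/2

d = np.linalg.norm(cartesian(c5) - cartesian(o2_i))
print(f"C5...O2(i) = {d:.3f} A")          # 3.504, matching D···A in the table
```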
[^1]: Additional correspondence author, e-mail: j.wardell\@abdn.ac.uk.
| |
On February 16, 2016, California Attorney General Kamala D. Harris released the California Data Breach Report 2012-2015 (the “Report”), which provided a comprehensive analysis of the data breaches reported to the Attorney General’s office during the covered years, as well as set forth concrete recommendation for minimum data security that would be considered “reasonable” under California law.
According to the Report, in the past four years, the Attorney General has received reports on 657 data breaches, affecting a total of over 49 million records of Californians. These breaches occurred in all sectors of the economy. The greatest threat to security, both in the number of breaches and the number of records breached, was presented by malware and hacking, followed by physical breaches, breaches caused by insider errors, and breaches caused by insider misuse. The most breached data types were Social Security numbers and medical information.
With malware and hacking being responsible for 356 (or 54 percent) of the 657 breaches, it is important to note that, according to Verizon’s Data Breach Investigations Report 2015, 99.9 percent of exploited vulnerabilities were compromised more than a year after the controls for the vulnerability had been publicly available. The Report stated that if organizations choose to collect data and then neglect to secure their systems in a way that allows attackers to take advantage of uncontrolled vulnerabilities, the organizations are also culpable.
The Report stressed that businesses collecting personal data of Californians must employ strong privacy practices, such as have privacy policies that are easy to read and access, inform consumers about material changes to their data handling practices, and carefully design how data is collected, used, and shared. “Foundational to those privacy practices is information security,” the Report stated, “if companies collect consumers’ personal data, they have a duty to secure it.”
The Report provided the following recommendations to organizations on improving their data security:
- The Center for Internet Security (“CIS”) identifies 20 Critical Security Controls (“Controls” or “CSCs”). Organizations should determine which of those 20 controls apply to their environment and implement them. Failure to do so constitutes “a lack of reasonable security.”
- Organizations should make multi-factor authentication (as opposed to a simple username-and-password authentication) available on consumer-facing online accounts that contain sensitive personal information, such as online shopping accounts, health care websites and patient portals, and web-based email accounts.
- Organizations should consistently use strong encryption to protect personal information on laptops and other portable devices, and should consider it for desktop computers.
- Organizations should encourage individuals affected by a breach of Social Security numbers or driver’s license numbers to place a fraud alert on their credit files and make this recommendation prominent in their breach notices.
Perhaps the most important takeaway from these recommendations is that businesses collecting personal data of California residents should familiarize themselves with the CIS’s 20 Controls and ensure that their data security practices implement all those Controls that apply to their environment. Grouped by type of action, these Controls are summarized in the Report as follows:
- Count Connections: Know the hardware and software connected to your network. (CSCs 1 and 2).
- Configure Securely: Implement key security settings. (CSCs 3 and 11).
- Control Users: Limit user and administrator privileges. (CSCs 5 and 14).
- Update Continuously: Continuously assess vulnerabilities and patch holes to stay current. (CSC 4).
- Protect Key Assets: Secure critical assets and attack vectors. (CSCs 7, 10, and 13).
- Implement Defenses: Defend against malware and boundary intrusions. (CSCs 8 and 12).
- Block Access: Block vulnerable access points. (CSCs 9, 15, and 18).
- Train Staff: Provide security training to employees and vendors with access. (CSC 17).
- Monitor Activity: Monitor accounts and network audit logs. (CSCs 6 and 16).
- Test and Plan Response: Conduct tests of your defenses and be prepared to respond promptly and effectively to security incidents. (CSCs 19 and 20).
California has been now leading the data security discussion for over a decade. It was the first to enact a data breach notification law, which took effect in 2003, and it continuously updates its data breach statute to address the evolving state of technology and security threats, and to provide for greater privacy protections to its citizens. Many organizations prudently take the highest-common-denominator approach, in effect affording California-level protections to residents of all states. As such, multi-state organizations should closely examine the recommendations contained in the Report. Importantly, at the time when it is not easy to parse through the multitude of data security requirements and best practices, these recommendations provide a defined set of actions that, if properly implemented, may afford a safe harbor to organizations suffering a breach. | https://www.carpedatumlaw.com/2016/08/definition-reasonable-information-security-california/ |
And statistics! This is a response to part of the discussion that started in this thread on Physicist-Retired's seed about the tactics of anthropogenic global warming deniers. The contention arose at some point that, because of the highly random nature of weather, drawing long-term climate inferences from the day-to-day recording of temperature and carbon dioxide concentration is pointless. Herein I attempt to show, using some very basic statistics, that this is absolute bollocks.
To do this, I set up a spreadsheet in Excel that mimics a cycle of seasons, of sorts. A cosine function cycles every 365 "days" between a maximum temperature and a minimum temperature. To mimic the randomness of daily temperatures, a random number generator produces a value between negative and positive 7.5 to add to the cyclical temperature. I also included a small addition to the temperature each day, calibrated to add up to a couple of degrees over the course of 50 years in the model - this is to mimic the claim of one degree Celsius (converted to about 2 degrees Fahrenheit) that climate scientists claim to have observed the globe warming by in the past 50 years. These numbers are all added up to give a daily temperature, which is what I'm plotting in graphs henceforth.
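For readers who would rather poke at the model in code than in a spreadsheet, here is a minimal Python sketch of the same toy setup. The ±7.5 noise band and the roughly 2-degree drift over 50 years come straight from the description above; the seasonal baseline (a cosine cycling between about 35 and 80) is my own assumed calibration, since the original maximum and minimum temperatures aren't stated.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(50 * 365)  # 50 "years" of daily periods

# Seasonal cycle: a cosine repeating every 365 days (range assumed, starts at the peak)
seasonal = 57.5 + 22.5 * np.cos(2 * np.pi * days / 365)

# Daily randomness between -7.5 and +7.5, as described above
noise = rng.uniform(-7.5, 7.5, size=days.size)

# Slow warming: about 2 degrees added over the full 50 years
warming = 2.0 * days / (50 * 365)

temps = seasonal + noise + warming  # simulated daily temperature
```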
Here we have a chart representing one "month" - a 30-period interval at the start of the simulation. I reckon this would be the month of August. The temperatures look pretty much entirely random, within a few degrees of 80something. That makes sense for one month. I included a line of best fit through the data, as well as an important statistical measure called the R-squared value. The R-squared, or Coefficient of Determination, is a measure of how significant a trend is in a chart like this. It ranges from zero to one; zero means that there is absolutely no statistical relationship between the variables being tested (in this case, time and temperature) and one means that the relationship is perfect. Here it is 0.07 - quite close to zero, which is good as, at this level, it's essentially random.
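The same statistic is easy to compute outside the spreadsheet. A small helper (a sketch; `temps` refers to the simulated series from the previous block) that fits a least-squares line and returns the coefficient of determination:

```python
import numpy as np

def r_squared(y):
    """R-squared of a least-squares line fitted to y against its index."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    residuals = y - (slope * t + intercept)
    return 1.0 - residuals.var() / y.var()

# r_squared(temps[:30]) lands near zero: one "month" of noise has no real trend
```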
Next, I made a plot over a whole year, 365 periods. Call it "1950," seeing as it's the start of the simulation. As you can see, there's a definite pattern to it: the band of random temperatures follows a quite precise curve - the effect of the seasonal variation programmed into the simulation. Real weather, of course, is more chaotic but as this is just a simulation for the purpose of demonstrating the statistical methods, it'll do. One thing worth noticing is that the R-squared value is still very low - because we are using the wrong kind of equation. The computer is still trying to draw a straight line through the data, when what it needs is a curve. Now, normally you could (if stupid OpenOffice had trigonometric best-fit functions, grr) simply fit it to a different kind of function - namely, a sine or cosine function - but because we are modeling climate, and climate doesn't necessarily repeat itself, we can use a different tool to show the trend more clearly here, called the moving average.
A moving average is calculated by averaging the last few periods of a graph such as these. For instance, if I were to set a 15-day moving average, the moving average for day 16 would be the average of the temperatures for the days 2 through 16 - 15 days in total. If I add a 15-day moving average (in red) to the year's worth of data, it looks like this:
As you can see, a lot of the noise from the randomness has been removed by applying the moving average, showing the pattern much more clearly. However, it's still a little jagged, and because I know (because I programmed it that way) that this is based on a smooth curve, I know that this jagged line can't be terribly close to the true pattern. When the moving average is increased, however, to 30 days, we get this:
Much smoother. However, we can start to see one problem of the moving average more clearly: it lags the actual data. Adding more days to the moving average does make it smoother, but it also causes the indicator to lag behind the actual pattern.
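In code, the trailing moving average used in these charts is a short loop (a sketch; with a 15-day window, the entry for day 16, index 15, averages days 2 through 16, exactly as in the definition above):

```python
import numpy as np

def moving_average(y, window):
    """Trailing moving average: entry i is the mean of the `window` values ending at i."""
    y = np.asarray(y, dtype=float)
    out = np.full(y.shape, np.nan)  # undefined until a full window is available
    for i in range(window - 1, len(y)):
        out[i] = y[i - window + 1 : i + 1].mean()
    return out

# moving_average(temps, 15)[15] averages days 2..16 of the simulated series
```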
Moving up the timescale rather drastically, here is the graph of all 50 years of simulated temperatures, with the climate change added in:
It's total chaos. If you look closely, you can see the peaks get a little higher and the troughs get a little less low, but good old R-squared there reminds us that this dataset is worthless for drawing conclusions (if you can't see, it's to the left and says "0.002somethingsomething").
Bingo. R-squared is sitting pretty at 0.82 - not an excellent value, but a solid one. The trend has been made much more apparent in this version, though it is still choppy. Another thing worth noting is that while the significance of the trend has gone way up, the scale on the left has become much more narrow - in line with our programmed-in long-term change of 2 degrees Fahrenheit or so.
The point is not, of course, to prove absolutely here that climate change is happening as fast as my rough little model says it is; real climate science is based of course on actual data, of which there is a lot. The purpose here is simply to demonstrate how even very basic statistical tools can be used to take an initially chaotic dataset and glean from it underlying patterns in a way that isn't immediately apparent. Happy mathing! | http://bluejohnnyd.newsvine.com/_news/2012/07/11/12671767-lies-damn-lies |
Dublin, Ireland is the capital of a country that has been caught up in religious and political turmoil for much of its existence: fighting with England for its independence, fighting to keep Northern Ireland, fighting to determine its official identity, its official language and religion. At one point, this fight ripped the country in two – old, traditional, western Ireland and new, modern Ireland. Western Ireland hosted villages known as Gaeltachts, or places where Irish is still predominantly used and spoken. Towns like this got their own special name because they were so rare. After the economic boom in the 90’s however, there was a push to reunite both sides of the Irish culture. School children learned basic Irish in class, tourism advertisements began to include the charms of the old Irish ways, some families even sent their children to Irish classes in Gaeltachts over the summer. Even with all this, the most evident way this can be seen in the city is through the use of language on the Dublin streets.
Walk down any street in Dublin and you’ll see the same thing: signs with both Irish and English on them. You may notice one common thing about all the signs other than the language, though: they are all street or PSA signs. Every street sign, every sign indicating parking time, every sign asking people to clean up their dog’s poop off the side of the street, every sign placed there by the government (local or national) is in both Irish and English. The Irish, usually placed above the English in smaller, italicized letters, appears to be largely symbolic. It’s a gesture meant to bridge the old and new Irelands together. This idea is further evidenced by the lack of visual Irish anywhere else in the city. All the shops are in English, flyers and advertisements are in English; English is used for business, for commerce, for personal and private affairs. Despite the fact that most Irish citizens now have a basic grasp of Irish, English is the dominant language in the city, no contest.
This inclusion of Irish in Dublin appears to be a passive, top-down regulation. That being said, there are many other active uses of language throughout the city, all found in the aural landscape of the people. Dublin is clearly an international city: walk down the street, take a bus ride, sit in a restaurant long enough – you will eventually hear a language that is not your own. French and Spanish are popular languages amongst tourists here, pointing to the unique advantage members of the European Union have when it comes to travel access. Thanks to the size of the countries and their shared, or at least close, borders, European citizens often take weekend trips to different countries, bringing their own native languages with them.
This unique aspect of the larger European culture extends to another landscape not often discussed – the digital one. While on the streets the inclusion of another language may appear passive and even obligatory, on the internet language is alive and often changing. Websites from different countries often offer translation services – usually provided by Google – but more often than not, the website will offer their own translated version in five or six different languages. This speaks to the fluid nature of language in an area so easily accessed by many different cultures, and Dublin is no exception to this dynamic integration.
Overall, while English may be the more common tongue found in Dublin, Ireland stays true to itself by including Irish on street and service signs – even if it’s just as a gesture. While Irish is rarely heard spoken on the streets of the capital city, many other languages are represented. While Irish may have become less of a tool for communication and more of a symbolic bridge connecting the traditional Gaelic culture to the modern European one, the spoken and digital Dublin linguistic landscape is still just as vibrant and dynamic as ever.
- Authors:
- Chris Wang, Michael J. Platow, Daniel Bar‐Tal, Martha Augoustinos, Dirk Van Rooy, Russell Spears
- Published Online:
- 19 May 2021
- DOI:
- 10.1002/ijop.12775
- Volume/Issue No:
- Early View Articles
When are intergroup attitudes judged as free speech and when as prejudice? A social identity analysis of attitudes towards immigrants
Although anti‐immigrant attitudes continue to be expressed around the world, identifying these attitudes as prejudice, truth or free speech remains contested. This contestation occurs, in part, because of the absence of consensually agreed‐upon understandings of what prejudice is. In this context, the current study sought to answer the question, “what do people understand to be prejudice?” Participants read an intergroup attitude expressed by a member of their own group (an “in‐group” member) or another group (an “out‐group” member). This was followed by an interpretation of the attitude as either “prejudiced” or “free speech.” This interpretation was also made by an in‐group or an out‐group member. Subsequent prejudice judgements were influenced only by the group membership of the person expressing the initial attitude: the in‐group member's attitude was judged to be less prejudiced than the identical attitude expressed by an out‐group member. Participants' judgements of free speech, however, were more complex: in‐group attitudes were seen more as free speech than out‐group attitudes, except when an in‐group member interpreted those attitudes as prejudice. These data are consistent with the Social Identity Approach to intergroup relations, and have implications for the processes by which intergroup attitudes become legitimised as free speech instead of prejudice.
PURPOSE: To minimize impairment of the superconducting characteristics of a circuit pattern in a circuit board having a pattern made of a superconducting substance.
CONSTITUTION: A multilayer circuit board 1 mainly comprises a first circuit board 2, a second circuit board 3, surface interconnections 4, 8 and an internal interconnection 7 laminated thereon. The interconnections 4, 7 and 8 are formed as a laminate of niobium layers 4a, 7a, 8a and NbAl layers 4b, 7b, 8b covering the layers 4a, 7a, 8a. Since the layers 4b, 7b, 8b prevent oxidation of the superconducting niobium layers 4a, 7a, 8a, the superconducting characteristics of the interconnections 4, 7 and 8 are not impaired.
COPYRIGHT: (C)1994,JPO&Japio | |
Definition of Social efficiency. This is the optimal distribution of resources in society, taking into account all external costs and benefits as well as the internal costs and benefits. Social Efficiency occurs at an output where Marginal Social Benefit (MSB) = Marginal Social Cost (MSC).
If a good has a negative externality that is ignored by individuals, then in a free market we tend to get over-consumption and social inefficiency.
In a free market, consumers ignore the external costs of consumption (e.g. you drive a car but don’t factor in the congestion you cause to other people). Therefore, the free market equilibrium is at Q1 (where S=D).
However, at Q1 the Marginal Social Cost is greater than the Marginal Social Benefit. Therefore by consuming at this point, the cost to society is greater than benefit (e.g. think of traffic jams and pollution because too many people drive at once). We say there is a deadweight welfare loss – indicated by the red triangle.
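As a toy numeric illustration (the curves are invented for this example, not taken from the article): let demand be MSB = 100 - Q, private supply be MPC = 20 + Q, and let each unit impose a constant external cost of 10, so MSC = 30 + Q. The sketch below finds the free-market output Q1 (where MPC = MSB), the socially efficient output Q2 (where MSC = MSB), and the area of the deadweight-loss triangle between them.

```python
import numpy as np

def msb(q): return 100 - q          # demand / marginal social benefit
def mpc(q): return 20 + q           # private supply / marginal private cost
def msc(q): return mpc(q) + 10      # add a constant external cost of 10 per unit

q1 = (100 - 20) / 2                 # free market: MPC = MSB  -> Q1 = 40
q2 = (100 - 30) / 2                 # social optimum: MSC = MSB -> Q2 = 35

# Deadweight loss: area between MSC and MSB over the over-consumed units (Q2..Q1)
qs = np.linspace(q2, q1, 1001)
dwl = np.trapz(msc(qs) - msb(qs), qs)
print(q1, q2, dwl)                  # 40.0 35.0 25.0
```

Cutting output back from Q1 to Q2 removes the units whose social cost exceeds their benefit, which is exactly the red triangle in the diagram above.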
With a positive externality, we ignore the benefits to third parties.
The free market equilibrium (Q1) is less than the socially efficient level (Q2) where SMC = SMB.
At Q1, the Marginal Social Benefit (MSB) is greater than the Marginal Social Cost (MSC). Therefore, in this situation, if we increase output from Q1 to Q2, the addition to social welfare (MSB) is greater than the marginal social cost, so net social welfare increases until we reach point Q2, where SMB = SMC.
It is important to take into account externalities. (both positive and negative) It can be difficult to measure externalities, but we need to make an effort. Government intervention – taxes and subsidies can attempt to influence production and consumption to achieve social efficiency.
“It might be easy for you, but you cannot imagine how difficult it is for us to enter outside”.
A few years ago, walking in the yards of Santa Maria Della Pietà, a psychiatric hospital that had already been closed, Thomas Lovanio, a colleague of Franco Basaglia, heard these words from one of the guests of the hospital. He was referring to the difficulty of coming back into the city, a city that years ago had jailed and forgotten him. But now, just because someone had decided to close the psychiatric hospitals, this city had to absorb him again.
The articles in this issue reminded me of those words. How difficult it is for our artistic and cultural system to shape open and innovative relations with the environment and the urban space, and to generate innovative and transparent management. These difficulties are surely a limit to innovation and an obstacle to the cultural growth of our country.
Urban studies have long observed that the most innovative systems are those capable of hybridizing different worlds.
In the Sixties, the architect Giancarlo De Carlo stated that the space for education and culture must not be a gated island, but an integral part of the whole physical environment.
In this view, such a space has to be interpreted as a dynamic structure, not as a settled device, in order to articulate its continuous changes: “an unstable configuration always recreated by the direct participation of the community that uses it, introducing the disorder of its unpredictable expressions” (Giancarlo De Carlo, Gli spiriti dell’architettura, Editori Riuniti, 1994, p. 210).
Today this signal is even more urgent. Cultural innovation arises from processes of knowledge production that are inherently socialized. It is hard to isolate the innovator and the pioneer from their natural context, from their relation with the scientific and cultural community of reference.
Innovation is a cultural product of collective knowledge, and the role of the subject is being reshaped in relation to the community and the specific place. Similarly, places and cultural institutions can be generative only if they interact with a variety of places and activities. Today, the closedness and isolation of large cultural containers (expos, fairs, museums) do not match fertile hybrid models. Such models are able to create networks between actors and places.
The example of the Salone del Mobile and the Fuori Salone in Milan – described by Federica Codignola – demonstrates the ability to integrate actors and spaces with the creative resources of the environment. This model combines formal and informal resources and mixes diversified audiences in the same spatial context: an ability which produces interesting results of mutual benefit. A high creative concentration in the same space raises the value of urban areas that are usually abandoned and left over, and it reveals the unexpressed potential of the environment.
The Fuori Salone model, based on networks of initiatives “out of place”, is struggling to find its space in other fields and to become a tool for the growth of new cultural networks. In addition, it must be clarified that, once the specific period of the event has ended, it is usually impossible to keep even some of these activities alive.
As illustrated by Mark Granovetter’s studies, we know that all human activities are rooted in grids of personal relations and in social structures based on trust and reciprocity.
These kinds of relations can generate an enlargement of motivations beyond the strictly utilitarian one, improving social relations through unexpected combinations of economic transactions and social interactions.
Conversely, short-distance friendships and networks might become pathogenic elements. For this reason, as noticed by Monti and Bernabe, we all need a new culture of accountability. The lack of accountability in both the strategy and the management of the cultural sector – together with the lack of transparency, the unpleasant presence of political interference, and a low-level corporate heritage – is today the biggest obstacle to innovation in all fields: a brake that cannibalizes the social worlds.
In a territorial perspective, rooting refers to all those processes that are triggered locally and modify the growth of the city. This action produces a development of local economies, a multiplication of public services, and cultural innovation. These processes transform the physical space, change the dynamics of economies, and generate social relations and common goods.
The contributions presented here will help us to reflect on the importance of creating strong new syntheses between individual practices, the culture of accountability, and the rooting of place.
There are societal benefits of renewable methane produced through gasification and methanation of biomass and waste, bioSNG, in all the steps of the value chain, i.e. production, distribution and utilization.
Production: BioSNG has the highest conversion efficiency, from feedstock to final product, of all second-generation biofuels and hence provides a resource-efficient way to convert indigenous feedstock into a high-quality transport fuel.
Distribution: Since bioSNG is miscible with natural gas in any proportion it can be distributed in an efficient and environmentally friendly way through the existing natural gas grid. The bioSNG produced in the GoBiGas plant is injected into the high pressure grid in Gothenburg.
Utilization: The versatility and the low combustion emissions make bioSNG an attractive renewable fuel not only within the transportation sector but also for efficient heat and power production and in industrial processes where clean and efficient combustion is required.
Other societal benefits: The greenhouse gas emissions are significantly reduced when bioSNG replace fossil fuels and for countries like Sweden with vast biomass resources the bioSNG route offers several other benefits such as increased security of supply, regional development and new job opportunities. | http://bioprogress.se/tag/high-efficiency/ |
Judgmental Forecast
A forecast made on subjective information. A judgmental forecast is made by a person thought to be knowledgeable about the company or market about which the forecast is being made. It may consider quantitative information, but it relies on a great deal of subjective feeling.
References in periodicals archive
"The Greenbook forecast is a detailed judgmental forecast that until March 2010 (after which it became known as the Tealbook) was produced eight times a year by staff at the Board of Governors of the Federal Reserve System." (How useful are estimated DSGE model forecasts for central bankers?)
"Econometric and judgmental forecasts were obtained from commercial forecasting agencies." (Forecasting foreign exchange rates using objective composite models)
"Research suggests that judgmental forecasts can also be improved by simply averaging the results of multiple independent forecasts." (Making the best use of judgmental forecasting)
"Improving Forecasting Accuracy by Combining Statistical and Judgmental Forecasts in Tourism." (Building forecasting models for restaurant owners and managers: a case study)
"In a later work Sanders and Ritzman (2004), distinguishes between the marketing function, which more typically generates judgmental forecasts, and operations, which rely more heavily on quantitative data, and suggests integration techniques for these approaches." (The PMI, the T-bill and inventories: a comparative analysis of neural network and regression forecasts)
"In general, they found the judgmental forecasts to be less than optimal." (An overview of forecasting error among international manufacturers)
"Future research related to the PMI should explore the use of combining judgmental forecasts with the quantitative forecasting of the PMI." (As the PMI turns: a tool for supply chain managers)
"The main claims Sims makes are, first, that the Federal Reserve forecasts well, especially when forecasting inflation; second, that the informational contents of different forecasts are highly correlated, so that strong claims of superiority of one forecast over another should be treated as suspect; and third, that there does not appear to be strong evidence that the judgmental forecasts of the Federal Reserve are superior (as measured by the root mean square forecast error) to its model-based forecasts." (The role of models and probabilities in the monetary policy process)
"One difficulty with judgmental forecasts, however, is that it is hard, if not impossible, for an outside observer to trace the source of systematic forecast errors because there is no formal model of how the data were used." (Vector Autoregressions: Forecasting and Reality)
"Much of this work has concentrated on forecasts produced by various time series methods of extrapolation for individual series, although there have also been other studies comparing econometric and/or judgmental forecasts with the consensus." (Consensus forecasts in planning)
"The advantages and limitations of economic models and judgmental forecasts are reviewed, and a process that incorporates features of both is recommended." (Economic forecasting in the private and public sectors)
"Averaging can also work with purely judgmental forecasts. The error associated with these forecasts, much less averages of them, have not been studied in nearly as much depth as quantitative forecasts, so the guidance on how many forecasts to average is not as definitive." (Structuring the revenue forecasting process)
| https://financial-dictionary.thefreedictionary.com/Judgmental+Forecast |
Miscarriage rates in the general population with no fertility problems range around 15-20%. In other words, one out of every five couples who achieve pregnancy suffers a spontaneous miscarriage, and 5% of these couples suffer it more than once. Even when pregnancy is achieved with the help of assisted reproduction techniques, miscarriage rates do not vary. For this reason, when couples come to our clinic seeking reproductive counselling, it is important to perform comprehensive testing and design an adequate protocol for their case to secure the best result, which is a healthy baby at home, minimising the chances of miscarriage. In order to do that, one should know that miscarriages and pregnancy losses have different causes (uterine problems, immunological problems, and so on), and yet in half the cases there are chromosomal abnormalities in the embryo that prevent pregnancies from progressing, hence causing miscarriages. Normal embryos have two copies of each chromosome, one inherited from the father and the other from the mother, and the chromosomal anomalies they may suffer involve a change in the number of copies, producing an imbalance in their genetic load which might block embryo development.
The high rates of chromosomally abnormal embryos are due to:
- On the one hand, the high rates of chromosomal abnormalities in human gametes (oocytes and sperm), which can be transferred to embryos. This rate increases as women age, especially from age 35. 40-year-old women have a high number of abnormal oocytes and, therefore, little chance of having healthy embryos.
- On the other hand, the fact that these abnormalities can also occur spontaneously during embryo splitting.
In assisted reproduction techniques (IVF or ICSI), assessing embryo morphology does not suffice to determine whether the embryo is chromosomally normal. As a consequence, the results are worse than expected, given that chromosomally anomalous embryos, which do not implant and cause miscarriages, are also transferred. The recent development of Comprehensive Chromosome Screening (CCS) has provided a useful tool to determine the chromosomal state of the embryos produced by these techniques prior to being transferred to the mother's uterus, thus preventing pregnancy losses for chromosomal reasons and, therefore, reducing miscarriage rates and increasing the rates of babies at home. Until relatively recently, PGS/CCS was performed using the array CGH technique, which consists of comparing the embryo's DNA with a control DNA. Chromosomal excesses or deficiencies in the embryo can be detected in this way and transfer can, therefore, be avoided. However, since the incorporation of next generation sequencing (NGS) techniques into PGS (PGT-A), most of the embryos in our clinic are analysed using NGS because, in comparison with array CGH, this technique can be used to analyse multiple embryos with increased precision in diagnosis. This further decreases the likelihood of the embryo having a chromosomal abnormality and leading to pregnancy loss.
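In essence, whichever platform is used, the screening question per chromosome is "are there more or fewer than the expected two copies?" A hypothetical sketch in Python (the threshold and values are invented for illustration, not the clinic's actual pipeline):

```python
# Hypothetical sketch of flagging whole-chromosome gains/losses from
# normalized copy-number calls; real PGT-A analysis is far more involved.
EXPECTED_COPIES = 2.0
TOLERANCE = 0.5  # assumed decision threshold

copy_number = {"chr13": 2.02, "chr16": 1.04, "chr18": 1.98, "chr21": 3.01}

for chrom, copies in sorted(copy_number.items()):
    if copies > EXPECTED_COPIES + TOLERANCE:
        status = "gain (trisomy-like)"
    elif copies < EXPECTED_COPIES - TOLERANCE:
        status = "loss (monosomy-like)"
    else:
        status = "balanced"
    print(f"{chrom}: {copies:.2f} copies -> {status}")
```

An embryo with any flagged gain or loss would be deprioritised for transfer, which is how screening lowers the miscarriage rate attributable to chromosomal causes.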
At Instituto Bernabeu we incorporate this state-of-the-art technology to guarantee the best results to our patients.
Dr. Ruth Morales, molecular biologist at Instituto Bernabéu. | https://www.institutobernabeu.com/foro/en/why-does-comprehensive-chromosome-screening-ccs-by-array-cgh-reduce-miscarriage-rates/ |
The Archdiocese of Philadelphia seeks an energetic, experienced and dynamic leader for the Executive Director of Athletic Affairs.
Position Summary
To provide leadership, collaboration, and coordination in the area of the athletic programs to the high schools of the Archdiocese of Philadelphia. The Executive Director is responsible for ensuring compliance with PIAA and PCL By-Laws, consulting with school principals and athletic directors, as well as the PCL Board of Governors and Directors. The EDA is a member of the Superintendent for Secondary School’s team.
The Executive Director is ultimately responsible for maintaining and sustaining a culture that promotes good sportsmanship, integrity and professionalism among the athletes, coaches and staff. He/she is responsible for maintaining an environment that teaches teamwork and respect for all participants inside and outside the arena of competition.
Responsibilities
Serve as a consultant to the Philadelphia Catholic League Board of Directors and Board of Governors.
Supervise the athletic directors in collaboration with principals to ascertain cooperation and needs, and to ensure consistency of compliance, quality, and performance of the athletic programs.
Supervise all athletic directors, including shared responsibility for hiring, discipline, and firing decisions, and for training, mentoring, and evaluating coaches and athletic staff.
Organize, develop and direct interscholastic athletic programs which will promote, protect, and conserve the health and overall welfare of all student-athletes and participants.
Develop, organize, and maintain objective metrics for supervising and evaluating directors, coaches, and athletic staff. This would include end of the year reviews of each athletic director in conjunction with their respective principal. Additionally, the EDA would ensure that the local AD is performing “end of season” reviews of all head coaches.
Advise on and ensure that all programs conform to relevant Pennsylvania Department of Education laws, National Federation High School guidelines, and PIAA and PCL rules.
Formulate and maintain policies that will safeguard the educational values of interscholastic athletics and cultivate the high ideals of sportsmanship.
Monitor and maintain responsibility for athletic programs to be in compliance with all Sports Equity and Title IX laws.
Assist in the development and maintenance of a master sports calendar.
Ensure that local athletic directors are collecting, processing and maintaining proper documentation for student athletes relevant to participation, i.e., physicals, eligibility forms, academic records, game contracts, health insurance guarantees, concussion forms, staff and officials' clearances, etc.
Develop and deliver a leadership-training program for all Athletic Directors of the 17 Archdiocesan High Schools.
Develop a professional orientation program for new coaches, as well as athletic based professional development opportunities for seasoned coaches.
Review and ensure accurate student-athlete eligibility lists.
Perform site visits of athletic offices, practice sites, and competition sites.
Work with the Director for Financial Aid to ensure that all Archdiocesan, PCL, and PIAA regulations are adhered to in the awarding of financial assistance to student-athletes.
Ensure that all athletic programs conform to the Catholic Values of the Church.
Work with regional media outlets for the purpose of consistent and fair exposure. Coordinate messaging with the Archdiocesan Communication and the OCE designees.
Coordinate with the Director of CYO athletics to create a seamless and homogeneous delivery of athletic opportunities in accordance with Catholic values.
Maintain a Board position with District XII of the PIAA.
Attend all meetings related to interscholastic athletics, locally, regionally and nationally.
Education - Experience
Bachelor’s degree required, Master’s degree preferred. Five to seven years of experience in Athletic Administration, Sports Administration Education, Coaching or a related field.
Position requires a working knowledge of Pennsylvania Interscholastic Athletic Association and Philadelphia Catholic League rules and regulations; good organizational and communication skills; and ability to work cooperatively with others.
The Archdiocese offers a competitive salary and benefits package commensurate with experience and education.
Those interested in the position should send their cover letter, resume, salary requirements, and transcripts to Archdiocese of Philadelphia, Human Resources, 222 North 17th St. Philadelphia, PA 19103 attention James Molnar or email to [REMOVED - SEE ORIGINAL LISTING]
Please mark the subject line: Candidate: Executive Director of Athletic Affairs. | https://www.dfwcatholic.org/files/executive-director-of-athletic-affairs-archdiocese-of-philadelphia-philadelphia-pa-93606/-html/
Pakistan all-rounder Azhar Mahmood in his exclusive blog for PakPassion.net addresses the hot topic of the player objections against Mohammad Amir's probable inclusion in the Pakistan team, expresses his disappointment at not being picked in the Pakistan Super League (PSL) draft and looks forward to his upcoming participation in the Masters Champions League (MCL).
Mohammad Amir’s inclusion in training camp
I think it is the right thing to do and I support the PCB in this decision. Look, we as human beings are prone to making mistakes. This is human nature. In Amir’s case, he made a mistake and he has served his punishment of a five year ban. Now that the ban has lapsed, it's time for everyone to move forward and give him another chance. Even from a religious point of view, we need to find it in our hearts to forgive him and move on.
Problems between players should be sorted away from the media
Let me say that if the situation was this serious and the players have issues amongst themselves, this should have been sorted away from the glare of the media. There was no need to make this into a public spat. Simply put, if some players had issues with Mohammad Amir returning to the national team, they should've spoken to PCB rather than running to the media. After all this is a decision for the cricket board and not one that can be taken by players or anyone else for that matter. Now that this matter has come to light, the PCB should step in and resolve this issue among players in an amicable fashion. The problem now is if this matter is not handled sensibly then there will always be suspicions about player rivalry if and when the team does not perform well in the future. Regardless of how PCB will handle this, let me reiterate that what Mohammad Hafeez did was wrong, there was absolutely no need to go to the media when he could have easily gone to his employer, the PCB, and explained his point of view to them in private. All that has happened now is that Pakistan cricket will become the object of ridicule in front of the world and the problem will remain where it is.
The decision to play or not play Mohammad Amir rests with the PCB alone
There is no doubt in my mind that the whole debate about Mohammad Amir's inclusion and re-admittance into international cricket has only one arbiter and that is the PCB. The issue here is that if someone states they will not play if Mohammad Amir plays, and the PCB does decide to play Amir, are the dissenting players then happy to sacrifice their careers? The PCB needs to stand firm even though their resolve will be tested. The PCB needs to remind some players that the cricket team is not run by, and does not belong to, individuals. However, to be honest, apart from posturing by some, not much will come out of this whole affair and everyone will bow to the wishes of the PCB.
Looking forward to Masters Champions League (MCL)
I am very excited about playing in this league which will consist of some legendary players, although am a bit disappointed that Wasim Akram won’t be there. The really nice thing about this league is that apart from some retired players like Brian Lara, Virender Sehwag or Graeme Smith, many others are still active cricketers. So we have players like Michael Lumb who are still very much in the game which will definitely make the MCL a pretty competitive league.
Disappointment at not being picked by any PSL team
The word disappointment doesn’t even begin to describe my feelings about not being picked by any PSL team in the draft. I am not too sure what logic was used to pick players by the franchises but it appears that they didn’t get good advice. It appears that these are the same advisors who did not select me in the past and are repeating their mistakes again. So apparently players like Yasir Arafat and I who have been considered good enough to be selected for Twenty20 teams around the globe are not considered eligible for playing in the PSL! You can probably say that I haven’t played enough domestic cricket in Pakistan but then someone like Yasir Arafat who was captain for Rawalpindi in the Quaid-e-Azam trophy was also overlooked which is mind boggling. If availability is an issue, then Yasir has been always available to play for this league in his home country. If the idea was to select young players then what are thirty-nine year olds doing in the list of picked players?
Of course I am disappointed but we have to move on and that is what I will be doing. The fact is that I am and have always been available for Pakistan. My ability is not under any doubt as how many players do we know have made over four thousand runs and taken two hundred and fifty wickets in the Twenty20 format. If the clash with the MCL schedule was a factor then the supplementary category was also available which could have been used to take me on, after all Kumar Sangakkara and Tillakaratne Dilshan are picked as part of that same group of players, weren't they? I don’t really blame the franchise owners as they are not cricketers but the think tank associated with the PSL appear to be providing incorrect advice which is a matter of concern.
Coaching clinics in Dubai
Yes, it is that time of the year when I head out to Dubai to setup the coaching clinics but the timing will need to be adjusted due to the advent of PSL and MCL and also to take into account the holiday season in that part of the world. I am in discussions with the ICC as well about these coaching sessions and hopefully these clinics will be another big success story for us, God Willing. | http://pakpassion.net/literature/pp-blogs/ask-azhar/6422-pcb-needs-to-stand-firm-on-mohammad-amir-issue-azhar-mahmood.html |
The drive to forge sustainable, equitable "green" economies and to develop and deploy renewable energy and clean technologies is front-and-center in the media in the run-up to the Rio+20 United Nations Conference on Sustainable Development, which is to take place in the iconic Brazilian city June 20-22.
Three bits of essential reading, and reference, relating to global renewable energy growth and its role in sustainable development programs were released yesterday: the UN Environment Programme's (UNEP) Global Trends in Renewable Energy Investment 2012, the Renewable Energy Policy Network for the 21st Century's (REN21) 2012 Renewables Global Status Report, and the Natural Resources Defense Council's Renewable Energy Scorecard.
Record renewable energy growth
Renewable energy investment continued to grow strongly across all three end-use sectors tracked in UNEP's report. In sum, renewable energy investments in power, heating and cooling, and transport increased 37 percent in 2010 and 17 percent in 2011 to reach a record $257 billion.
That's a six-fold increase over 2004's total and 94 percent higher than that of 2007, a year that saw the onset of the "Great Recession." Even more impressively, the gains have come despite strong economic, and in some cases, political headwinds, according to UNEP's report, which is based on data provided by Bloomberg New Energy Finance.
Developing economies accounted for 35 percent of 2011's $257 billion renewable energy investment total, with developed countries accounting for 65 percent. The US closed the gap on world leader China as renewable energy investment increased 57 percent, to $51 billion. India exhibited the fastest growth among the largest national renewable energy markets, with investment surging 62 percent to $12 billion.
"There may be multiple reasons driving investments in renewables, from climate, energy security and the urgency to electrify rural and urban areas in the developing world as one pathway towards eradicating poverty-whatever the drivers the strong and sustained growth of the renewable energy sector is a major factor that is assisting many economies towards a transition to a low carbon, resource efficient Green Economy" stated UNEP executive director Achim Steiner.
Following are some of the UNEP and REN21 reports' highlights:
- The top seven countries for renewable electricity capacity excluding large hydro - China, the United States, Germany, Spain, Italy, India and Japan - accounted for about 70 percent of total non-hydro renewable capacity worldwide. The ranking among these countries was quite different for non-hydro capacity on a per person basis: Germany, Spain, Italy, the US, Japan, China and India. By region, the EU was home to nearly 37 percent of global non-hydro renewable capacity at the end of 2011, China, India and Brazil accounted for roughly one-quarter.
- Total investment in solar power jumped 52 percent to $147 billion and featured booming rooftop photovoltaic (PV) installations in Italy and Germany, the rapid spread of small-scale PV to other countries from China to the UK and big investments in large-scale concentrating solar thermal (CSP) power projects in Spain and the US.
- Competitive challenges intensified sharply, leading to sharp drops in prices, especially in the solar market -- a boon to buyers but not to manufacturers, a number of whom went out of business or were forced to restructure.
- Renewable power, excluding large hydro-electric, accounted for 44 percent of all new generating capacity added worldwide in 2011 (up from 34 percent in 2010). This accounted for 31 percent of actual new power generated, due to lower capacity factors for solar and wind capacity.
- Gross investment in fossil-fuel capacity in 2011 was $302 billion, compared to $237 billion for that in renewable energy capacity excluding large hydro.
The Road to Rio+20: Targets, technology & capital
Not surprisingly, European countries have led the way forward among G20 countries when it comes to deploying and making use of renewable energy over the past decade, part-and-parcel of an emerging, more integrated approach to sustainable development, that addresses economic, social and environmental issues, NRDC found.
Renewable energy and clean technology figure to play a central role in at Rio+20, as representatives and observers look for follow-through on goals on renewable energy targets, technology transfer and investment capital agreed to at the UN Framework Convention on Climate Change's (UNFCCC) COP17 conference, which took place in Durban, South Africa Nov.-Dec. 2011.
Yet while progress has been substantial in developing, as well as developed economies, including those of China and the US, Brazil and India, the overhanging threat of another financial crisis and global recession is testing governments' resolve to initiate, maintain and intensify integrated policy frameworks that address challenges at the 'water-food-energy' nexus.
Burn, baby, burn: Eliminating fossil fuel subsidies
While real progress has been made here in the US (renewable energy production has increased more than 300 percent in the past decade, NRDC highlights), Congressional ambivalence, evident in the lack of an integrated federal renewable energy policy framework and in "stop-start" policy and action, has led to repeated boom-bust cycles, NRDC's Jake Schmidt and University of California, Berkeley renewable energy expert Dan Kammen noted in a press briefing.
G20 governments' ongoing support of the production and burning of fossil fuels is another aspect of the report that stands out. Even though the fossil fuel industry is highly profitable and well established, is the primary agent of man-made climate change and environmental degradation, and foists externalized costs onto public finances, G20 fossil fuel subsidies remain some five to six times higher, or more, than those for renewable energy, Schmidt and Kammen noted.
It's clear, and increasingly urgent, that the energy playing field has to be leveled, and that means eliminating fossil fuel subsidies. Pressure must be brought to bear on, and support given to, policy makers willing and able to counter the extravagantly well-funded political lobbying and campaign funding, as well as the pernicious misinformation and disinformation campaigns, of the fossil fuel industry's media and public relations machine.
Renewable Energy Policy: Demand-Pull + Supply-Push
A combination of national policies has proven effective in increasing renewable energy demand (demand-pull) on the one side and boosting renewable energy production capacity (supply-push), UCal-Berkeley's Kammen noted during NRDC's press briefing.
"Overall [renewable energy] investment of about $160 billion in 2011 is very impressive, but it's also worth keeping in mind that with estimates of global subsidies of fossil fuels of $400-$500 billion, the landscape is far from truly level. There's a huge amount of work governments can do, must do, to balance that out," Kammen stated.
"It's critical to note that it is a global marketplace. Developing nations are playing key roles in addition to G20 countries. A wide range of tools is proving to be effective-- Renewable Portfolio Standards (RPS) in US states. In Europe, and increasingly and other countries Feed-in Tariffs (FiT), as well as carbon pricing is playing a role.
"There really is a diverse set of technologies, scales and market approaches being used today; the challenge is to move this forward dramatically in coming years. By 2020, [a renewable energy goal of] 15% is within reach. It's beyond what's currently on the table in terms of international agreements, but clearly within reach." | https://earthmaven.io/planetwatch/energy-economics/renewable-energy-key-vehicle-on-rio-20-sustainable-development-roadmap-kj610z3-tEKbOlAcUtRPug/ |
Climate change is an ongoing process that slowly causes changes across the country.
Between 1901 and 2020, global temperatures rose just over 2 degrees Fahrenheit. But this is only one aspect of climate change. There are also rising sea levels and changes in weather patterns like drought and flooding.
Why do these changes matter? All of them impact necessities we all depend on like water, energy, wildlife, agriculture, ecosystems, transportation, and health.
Climate change impacts vary by region. Some areas may experience warmer winters and increased precipitation, while other regions may experience extended drought.
As we look at climate change facts in general, you’re probably wondering how climate change may affect the plants in your yard.
Typically, trees that are native to a particular area are already adapted. So as the climate changes, they find themselves in a situation they aren’t accustomed to, and this can stress them and make them susceptible to drought, insect infestations, and disease infections.
Let’s look at how climate change impacts planting so you can better understand how it may affect your landscape.
How Climate Change Affects Winter Hardiness Zones
As winters warm, climate change is causing plant hardiness zones to shift north in the U.S. at 13 miles per decade. In fact, most plant hardiness zones have shifted half a zone warmer since 1990, according to the U.S. Department of Agriculture.
This may change what plants you choose to plant in your yard. Plants you previously had to dig up and protect over the winter, or perennials that were once a zone out of reach, may now be possibilities.
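As a back-of-the-envelope check (assuming the 13-miles-per-decade rate simply holds constant, which is an assumption rather than a forecast), the cumulative northward drift is easy to project:

```python
# Back-of-the-envelope projection of hardiness-zone drift.
# Assumes the historical rate of 13 miles per decade stays constant.
MILES_PER_DECADE = 13

for decades in (1, 3, 5):
    print(f"After {decades * 10} years: ~{decades * MILES_PER_DECADE} miles northward")
```

At that pace, a garden planted today sits roughly 40 miles "south" of its mid-century climate, which is why the regional fact sheets below emphasize zone transitions.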
Using research and experience, Davey has created climate change fact sheets for each region of the United States. Let’s look at each region more closely to see what changes you can expect:
- Northwest Region
- Southwest Region
- Northern Great Plains
- Southern Great Plains
- Midwest Region
- Northeast Region
- Southeast Region
Northwest Region
The Northwest, which includes Washington, Oregon, and Idaho, has experienced record-setting episodes of heat, drought, and snowpack loss due to climate change. Davey scientists predict all seasons will continue to warm.
- A warming climate: The plant hardiness zone for Seattle will transition from zone 8 to zone 9 by mid-century, which will change the palette of planting options.
- Precipitation patterns will remain variable: Forecasts also show precipitation increasing throughout the Northwest in winter and spring and decreasing in summer as the climate warms. But rain and snowfall will continue to be highly variable with periods of prolonged drought mixing with years of above-average rainfall.
- Trees and forests are susceptible: Higher temperatures and decreasing summer precipitation have increased trees’ susceptibility to insects and disease. Tree mortality from bark beetles, wildfires, and drought will intensify.
>> View Fact Sheet for the Northwest Region
Southwest Region
The Southwest was warm, to begin with, but it continues to get even warmer with California and western Colorado experiencing the greatest increases of average annual temperatures at about 3 degrees Fahrenheit. This region includes California, Colorado, Nevada, Arizona, New Mexico, and Utah.
- A warming climate: Average annual temperatures in the Southwest are projected to increase 8.6 degrees Fahrenheit, depending on emissions. Davey scientists expect California to experience 20 to 40 additional days with maximum temperatures of at least 90 degrees Fahrenheit, and Arizona and New Mexico will have at least 40 to 60 additional extreme heat days by the end of the century.
- Increasing megadroughts: Increasing temperatures are intensifying Southwest droughts, particularly in California and the upper Colorado River Basin. Hotter temperatures result in drying earlier in the season and greater evapotranspiration, contributing to megadroughts, which have persisted for a decade or longer.
- Insect outbreaks and wildfires: Tree mortality has doubled over the last 50 years due to drought, wildfires, and insect outbreaks. Warming and drying of the climate has increased wildfire frequency, duration, and season length, doubling the area of forest that has burned on average each year in the western U.S. since 1984. Davey scientists predict the trend will intensify.
>> View Fact Sheet for the Southwest Region
Northern Great Plains
In the Northern Great Plains, which includes Montana, Wyoming, Nebraska, North Dakota, and South Dakota, average annual temperatures have increased from as much as 1 degree Fahrenheit in Nebraska to as much as 3 degrees Fahrenheit in Wyoming as a result of climate change.
- A warming climate: Davey scientists predict extreme heat days to rise with the number of days above 100 degrees Fahrenheit doubling in Wyoming by mid-century. The increasing temperatures will transition plant hardiness zones across the region.
- Variable and unpredictable precipitation: Eastern states will receive more precipitation, while western states will receive less. As the weather becomes more variable, Davey scientists predict summer droughts will be more severe, while flooding will also intensify with more intense rainstorms and associated high precipitation events.
- Declining water availability: Everything from summer droughts to decreased snowpack to shrinking glaciers to a rise in evapotranspiration will lead to continued decreased average flow rates of rivers and streams. Irrigation restrictions will continue to put pressure on tree and plant health and maintenance.
- Changing forests: Forest composition will shift as pines become less dominant, and a warmer climate will favor aspen.
>> View Fact Sheet for the Northern Great Plains
Southern Great Plains
Southern Great Plains states, including Texas, Oklahoma, Kansas, and Nebraska, have warmed 1 to 2 degrees Fahrenheit over the last 100 years with winters having warmed more than summers due to climate change.
- A warming climate: Davey scientists predict warming will intensify with temperatures increasing by 3.6 to 5.1 degrees Fahrenheit by mid-century and 4.4 to 8.4 degrees Fahrenheit by the end of the century.
- Water management and its impact on trees: Since plant growth begins earlier in spring and summers become hotter and extreme heat more frequent, soils will become drier. This trend will intensify tree stress, increasing susceptibility to pests and diseases.
- Hurricane outlook: Sea levels in the Texas Gulf have been rising at twice the global average rate, increasing flooding and hurricanes, which draw their energy from the heat of the warming ocean.
>> View Fact Sheet for the Southern Great Plains
Midwest Region
The Midwest region, which includes Minnesota, Iowa, Missouri, Illinois, Wisconsin, Indiana, Michigan, and Ohio, is one of the fastest-warming areas of the continental U.S. with Minnesota having warmed more than 3 degrees Fahrenheit on average due to climate change. Wisconsin and Michigan have warmed 2 degrees Fahrenheit on average, and the southern stretch of the region along the Ohio River has warmed about 1 degree Fahrenheit.
- A warming climate: By the end of the century, Minnesota and Ohio will experience 5 to 15 more days per year with temperatures exceeding 95 degrees Fahrenheit, and Illinois will experience 15 to 20 more days. Michigan’s climate is projected to resemble the current climates of Missouri and Oklahoma, and Illinois that of the current climate of Texas.
- Changing winter hardiness zones: Winters are warming throughout the Midwest, changing plant hardiness zones. Columbus, Ohio has warmed from zone 5 to zone 6 and will reach zone 7 by the second half of the century. Minneapolis and Chicago will transition from zone 4 to zone 6 or even zone 7, depending on future patterns of emissions.
- More rain extremes: Over the last 50 years, average annual precipitation has increased 5% to 10% over most of the Midwest. Midwesterners should expect more rain, extreme rainfall events, and flooding.
- A changing forest: Paper birch, quaking aspen, balsam fir, and black spruce trees will decline, and oak, hickory, and pines will increase due to their heat and drought tolerance.
- Rising tree stress: Higher precipitation and humidity will increase plant diseases, while droughts will dry soils, giving way to more insect infestations.
>> View Fact Sheet for the Midwest Region
Northeast Region
The northeast region extends from West Virginia to Maine. Since the beginning of the 20th century, average annual temperatures have warmed from just under 1 degree Fahrenheit in West Virginia to 3 degrees Fahrenheit in New England as a result of climate change.
- A warming climate: By 2050, average temperatures in the northeast may increase to between 4 and 5 degrees Fahrenheit (depending on global greenhouse gas emissions) with winters becoming milder and spring arriving earlier. USDA plant hardiness zones currently range from zone 6 in West Virginia to zone 4 in Maine, but they are predicted to transition to zones 7 and 5, respectively, by mid-century and zones 8 and 7, respectively, by the end of the century. Scientists predict the growing season will increase by 2 to 3 weeks by mid-century.
- Increasing precipitation: Northeasterners should expect further increases in rainfall and snowfall during the winter and spring. In fact, rainfall intensity has increased more in the Northeast than in any other part of the country, increasing flooding frequency.
- Changing tree species: A longer growing season, increased precipitation, and CO2 will continue to boost forest productivity. In central West Virginia, Davey scientists predict oak and hickory to increase in dominance, while sugar maple, beech, and gray birch will decrease. In northern Maine, spruce-fir forests are projected to transition to maple-beech-birch forests.
- Diseases and pests impacting trees: A warmer, wetter, and more humid climate favors the growth and spread of pathogens and pests. From foliar diseases on white pines increasing defoliation to southern pine beetles spreading to never-before-infested areas like New Jersey, New York, and Massachusetts, Davey scientists predict greater insect and disease pressure.
>> View Fact Sheet for the Northeast Region
Southeast Region
In the Southeast region, which includes Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, Virginia, North Carolina, South Carolina, Georgia, and Florida, nights and winters are warming faster than days and summers as a result of climate change. Average daily minimum temperatures have actually increased 3 times faster than average daily maximum temperatures.
- An unevenly warming climate: Davey scientists predict winters will continue to warm throughout the region, and summer heatwaves will continue to become more frequent, intense, and longer. Daily maximum temperatures above 95 degrees Fahrenheit could become the norm, with temperatures above 100 degrees Fahrenheit becoming more frequent during summer. This will move plant hardiness zones from 8a to 9a along the Gulf coast and 6b to 7b in central Kentucky by mid-century.
- Expect extreme precipitation: Heavy downpours and flooding will increase by 2050. Even as total rainfall increases, intermittent summer droughts will become more frequent as well. Add to those hotter days and drier soils.
- More hurricanes and flooding: Hurricanes draw their energy from the heat of the ocean, which is warming. As a result, the frequency of strong hurricanes (categories 4 and 5), as well as associated precipitation and flooding, has increased substantially. Davey scientists predict the trend to intensify.
- A changing forest: Wildfires ignited by lightning are expected to rise an average of 30% throughout the region by 2060. More intense droughts will impact the forests; oaks may increase in dominance at the expense of loblolly and shortleaf pines in southern regions of the Southeast, while beech and maple will decline as oak and hickory increase in central Kentucky. | https://blog.davey.com/climate-change-projections-the-impact-why-you-should-care/ |
Leite Research Group’s Invited Review Published in ACS Energy
Much like a blind person uses a cane to chart their surroundings, atomic force microscopy (AFM) uses a probing tool called a cantilever to measure the morphology of a sample under examination. Unlike optical and electron microscopes, AFM – a type of scanning probe microscopy – is a powerful tool that measures in all three dimensions, at the nanoscale, to gather precise data about surface characterization, a technique essential to the field of renewable energy.
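One of the most common first analyses of an AFM scan makes the "all three dimensions" point concrete: from a grid of measured heights, compute the RMS surface roughness. A minimal sketch in Python with synthetic data (purely illustrative, not taken from the review):

```python
# Minimal sketch: RMS roughness of an AFM height map (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
height_map = rng.normal(loc=0.0, scale=1.5, size=(256, 256))  # heights in nm

rms = np.sqrt(np.mean((height_map - height_map.mean()) ** 2))
print(f"RMS roughness: {rms:.2f} nm")
```

The same height map, paired with electrical or optical signals recorded at each pixel, is what lets AFM correlate morphology with device behavior.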
Accordingly, researchers in the Department of Materials Science and Engineering (MSE) and the Institute for Research in Electronics and Applied Physics (IREAP) at the University of Maryland (UMD) - Elizabeth Tennyson (MSE Ph.D. student), Chen Gong (MSE Ph.D. student), and Marina Leite (MSE Asst Professor) – were invited by the editor of ACS Energy Letters to offer a review of the AFM field on both energy harvesting and energy storage materials.
The review offers a detailed discussion of AFM use for 1) energy harvesting systems, such as solar cells and 2) energy storage systems, such as rechargeable batteries, to identify electrical, chemical, and optical properties and their impact on device performance.
Tennyson, first author on the review, uses scanning probe microscopy techniques – specifically, AFM – in her research on solar cell materials to better understand how the local electrical properties influence the overall behavior of a photovoltaic device.
“The goal is to engineer better, higher performing materials for future solar cells, which will ultimately help lower the cost,” Tennyson stated.
The latter part of the review is devoted to probing research that investigates the electrical, chemical, and optical properties of perovskites – materials that mimic the structure of calcium titanium oxide and a popular platform for energy harvesting. Perovskites are fragile – notoriously sensitive to humidity and oxygen – so being able to image this material at the nanoscale will contribute to a more fundamental understanding of why they're so volatile. If scientists can fully understand them, then perhaps they can be used in next-generation solar cells with stable power conversion efficiencies.
“The micro- and nanostructure of heterogeneous photovoltaic (PV) and battery/storage materials is well known to influence their overall performance,” the team stated in the review.
According to Chen Gong, 5th year graduate student and co-author on this work, “identifying the chemical reactions that lead to capacity fade in Li-ion all-solid-state batteries is imperative to advancing the knowledge of the field, which requires resolving where lithium preferentially accumulates as the batteries are charged and discharged.”
In order for the field to progress, the team suggests that AFM systems – which allow in-situ characterizations of chemical, electrical, and electrochemical behaviors of batteries and fuel cells – should be a development priority in order to understand how the devices function under their actual operating conditions. They further suggest that advances in fast AFM imaging that enable capturing the time-dependent electrical behavior of perovskite solar cells, will be the next step towards revealing their unknown fast electrical mechanisms.
“Revealing the fundamental properties of these systems is crucial to increasing our understanding of the underlying mechanisms that define device performance at the nanoscale,” the team added. “Ultimately, we envision that these high spatial resolution investigations will lead to the redesign of materials with improved performance, as well as [improved] devices.”
This review was published on the web on October 30, 2017, and highlighted on the print journal cover, which was released December 8. |
LONDON/ BIRMINGHAM:
Malala Yousafzai has written to Prime Minister Imran Khan after the recent Taliban takeover of Afghanistan, urging him to take Afghan refugees into the country and to ensure that girls have access to education.
Speaking of child refugees, in an interview with the BBC, the Nobel peace laureate said, “Their futures are not lost, they can enroll in local schools, they can receive education in refugee camps.” Malala added that the girls should also have access to “security” and “protection”.
The Nobel laureate stated that she had “not yet made contact” with British Prime Minister Boris Johnson, but reiterated that “every country has a role and responsibility” to play in the current Afghanistan situation and needs to “open their borders to Afghan refugees".
According to Malala, Afghanistan is currently undergoing an “urgent humanitarian crisis” and the world is “seeing some shocking images on our screens right now”.
Read Pakistan in no hurry to recognise new Kabul set-up
“We are living in a world where we are talking about advancements, equality and gender equality. We cannot see a country going decades and centuries back,” she remarked.
Malala emphasised that a “bold stance” must be taken for “the protection of women and girls, for the protection of minority groups, and for peace and stability in that region”.
She said that a stance for the protection of human rights is necessary not just for peace in Afghanistan but peace globally.
Malala Yousafzai, a Pakistani activist, was shot in the head by Taliban gunmen in 2012 because she campaigned for girls' education. | https://tribune.com.pk/story/2316050/malala-urges-pm-imran-to-protect-afghan-refugees-give-girls-education-in-camps
Malala Yousafzai Lobbies US for Global Education Support
Nobel Prize laureate and education activist Malala Yousafzai visited Capitol Hill to encourage more funding for her cause, advocating that every country in the world be able to provide 12 years of free education to both boys and girls — and she wants the US to help.
17-year-old Malala and her father Ziauddin visited with Senators Dick Durbin (D-Ill.), Mark Kirk (R-Ill.), and John McCain (R-Ariz.), as well as Reps. Kay Granger (R-Tex.) and Nita Lowey (D-N.Y.), writes Ali Weinberg of ABC News.
Her goal was to encourage this bipartisan group to increase spending on the education of girls all over the world in keeping with the goals of the Global Partnership for Education and Michelle Obama’s Let Girls Learn initiative.
In a written statement prior to her arrival, Yousafzai said:
It is time that a bold and clear commitment is made by the US to increase funding and support governments around the world to provide 12 years of free primary and secondary education for everyone by 2030.
She made the same appeal to the World Bank and the United Nations the previous day, reports Radio Free Europe.
According to Emily Heil of the Washington Post, when speaking with the lawmakers she used particularly effective rhetoric: What would you want for your own children?
Yousafzai, born in 1997, became known globally in 2012 when the Taliban attempted to assassinate her for supporting girls' schools in Pakistan through a widely shared blog she had written for the BBC under a pseudonym. She was shot in the head and survived, writes Danielle Hayes of UPI. In 2014, she was the co-recipient of the Nobel Peace Prize along with Kailash Satyarthi.
On her 16th birthday, addressing the UN, Yousafzai said:
The terrorists thought they would change my aims and stop my ambitions, but nothing changed in my life except this: weakness, fear and hopelessness died. Strength, power, and courage was born. … I am not against anyone, neither am I here to speak in terms of personal revenge against the Taliban or any other terrorist group. I’m here to speak up for the right of education for every child. I want education for the sons and daughters of the Taliban and all terrorists and extremists.
Girls in a variety of countries and cultures worldwide are denied opportunity, whether for religious reasons (as evidenced by the Taliban’s retaliation against Malala’s efforts), being forced to stay home and help with household tasks, early marriage, or simply the devaluation of women’s education. | https://www.educationnews.org/international-uk/malala-yousafzai-lobbies-us-for-global-education-support/ |
The BDC Films Fellowship Program invites traditionally underrepresented documentary filmmakers to participate in a free year-long documentary filmmaking fellowship at the BDC. Applications are accepted on a rolling basis through Tuesday, September 21, 2021.
The Bronx Documentary Center (BDC) created BDC Films in response to the lack of support for traditionally underrepresented documentary filmmakers in the Bronx, as well as the changes in storytelling professions that require a broader skillset for potential employment in creative industries.
Through such offerings as documentary filmmaking courses and professional development workshops; mentorships; documentary film screenings and panel discussions; equipment loans and low-cost rentals; access to free meeting and work space; and a video editing suite, Bronx filmmakers will have the necessary tools to tell their own stories and gain employment in creative industries.
In addition to enhancing skills, BDC Films is dedicated to strengthening the BDC’s artistic community by creating a peer-to-peer support network and hub which fosters dialogue and empowers Bronx filmmakers. The BDC is expanding its existing documentary film programming to include regular screenings of work by emerging Bronx filmmakers and an annual Documentary Film Festival.
Participants are provided with:
• Workshops with established documentary filmmakers
• Professional development courses
• Professional filmmaking equipment
• Production and post-production space
• Co-working space
• Mentorship from BDC staff and visiting filmmakers
• Opportunities to present their work to the public
• $3,000 stipend + screening fee for presentation of final work
• Peer support
The BDC seeks to identify fellows who:
• Identify racially, culturally or economically with a historically underrepresented population;
• Are at least 18 years old;
• Live and work in New York City, preferably in the Bronx;
• Are not currently students or enrolled in any degree-granting program at the time of the fellowship;
• Are not participating in a comparable development, fellowship, or residency program;
• Have not written, directed, and/or produced a full-length documentary film that has screened at a national or international film festival.
• Are not currently full-time employees or board members of the BDC.
• Are not traveling within the months of September to November.
*Due to the nature of fieldwork, participants must be fully vaccinated for COVID-19. | https://www.bronxmama.com/2021/09/07/bronx-documentary-center-free-films-fellowship-program/ |
are hardwired for Math, while 74% are not? After working with thousands of kids, I can tell you, this isn't the case at all. Kids don't understand Math because we've been teaching it as a dehumanized subject. But if we make Math human again, it will start to make sense again.
You're probably wondering: "How was Math ever human in the first place?" So, think about it. (Laughter) Math is a human language, just like English, Spanish or Chinese, because it allows people to communicate with each other. Even in ancient times, people needed the language of Math to conduct trade, to build monuments, and to measure the land for farming. This idea of Math as a language isn't exactly new. A great philosopher once said: "The laws of nature are written in the language of mathematics." So you see? Even Galileo agrees with me. (Laughter)
But somewhere along the line, we've taken this language of math, which is about the real world around us, and we've abstracted it beyond recognition. And that's why kids are confused. Let me show you what I mean. Read this 3rd grade California Math Standard and see if it would make sense to an eight-year-old: "Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts. Understand the fraction a/b as the quantity formed by a parts of size 1/b." (Laughter) If you gave this description to an 8-year-old, you'd probably get a reaction… like this. (Laughter) To a Math expert, this standard makes sense, but to a kid, it's absolute torture.
I chose this example specifically because fractions are foundational to algebra, trigonometry and even calculus. So if kids don't understand fractions in elementary and middle school, they have a tough road ahead of them in high school. But is there a way to make fractions simple and easy for kids to understand? Yes! Just remember that Math is a language and use that to your advantage.
For example, when I teach 5th graders how to add and subtract fractions, I start with the apples + apples lesson. First I ask, "What's 1 apple plus 1 apple?" And kids will often say 2, which is partially correct. Have them include the words as well, since math is a language. So it's not just 2, it's 2 apples. Next is 3 pencils plus 2 pencils. You all know that pencils + pencils give you pencils, so everyone, how many pencils? Audience: 5 pencils. 5 pencils is right. And the key is you included the words.
I tried this lesson with my 5-year-old niece once. After she added pencils and pencils, I asked her, "What's 4 billion plus 1 billion?" My aunt overheard this and scolded me: "Are you crazy? She's in kindergarten! How's she supposed to know 4 billion plus 1 billion?!" (Laughter) Undaunted, my niece finishes counting, looks up and says: "5 billion?" And I said: "That is right, it is 5 billion." My aunt just shook her head and laughed, because she did not expect that from a 5-year-old. But all you have to do is take a language approach and Math becomes intuitive and easy to understand.
Then I asked her a question that kindergartners are definitely not supposed to know: "What's one third plus one third?" And immediately she answered: "2 thirds". If you're wondering how she could possibly know that when she doesn't know about numerators and denominators yet: you see, she wasn't thinking about numerators and denominators. She thought of the problem this way. She used 1 apple + 1 apple as her analogy to understand 1 third plus 1 third. So if even a kindergartner can add fractions, you better believe that every 5th grader can do it as well. (Applause)
Just for fun, I asked her a high-school algebra question: What's 7x² plus 2x²? And this little 5-year-old girl correctly answered, 9x². And she didn't need any exponent rules to figure that out. So when people say that we are either hardwired for math or not, it's not true. Math is a human language, so we all have the ability to understand it. (Laughter)
We need to take a language approach to math urgently, because too many kids are lost and are anxious about math, and it doesn't have to be that way! I worked with an angry, frustrated high-school student once who couldn't pass algebra because she only knew 44% of her multiplication facts. I told her, "That's like trying to read and only knowing 44% of the alphabet. It's holding you back." She couldn't factor or solve equations and she had no confidence in Math. As a result, this teenager had no confidence in herself. I told her, "We have to start with multiplication, because once you know all your facts by heart, everything gets easier, and it'll be like having a fast pass to every ride at Disneyland. (Laughter) What do you think?" And she said, "Ok."
So she systematically learned her times tables in 4 weeks, and yes, even multiplication has language embedded in it. You'd be surprised how many kids don't realize that 7 times 3 can be spelled out as "seven times 3", which just means 3 seven times, just like this. So when kids see it this way, they quickly realize that repeated addition is slow and inconvenient, so they gladly memorize that 3 seven times always gives you 21.
For this teenager who was at risk of dropping out, becoming fluent and confident in multiplication was a game changer, because for the first time she could focus on problem solving instead of counting on her fingers. I knew she had turned the corner when she figured out that a 2-year car lease at $445 a month would cost you $10,680, and she looked at me disapprovingly and said: "Mr. Palisoc, that's expensive!" (Laughter) At that moment, math was no longer causing problems for her; she was using math to solve problems as a responsible adult would.
As an educator, it's my duty to challenge kids to reach higher, so I leave you with this challenge. Our country is stuck at 26% proficiency, and I challenge you to push that number higher. This is important because mathematical thinking not only builds young minds, but our kids need it to imagine and build a future that doesn't yet exist. Meeting this challenge can be as simple as apples + apples. Insist that we teach Math as a human language and we will get there sooner, | https://radio-inspire.com/math-isnt-hard-its-a-language-randy-palisoc-tedxmanhattanbeach/
The Foundation funds locally based research projects on the causes and treatments of different forms of arthritis. This includes funding of researchers' salaries, research equipment and the research laboratory, and funding for supporting activities, such as presenting research findings at national and international meetings. Our work contributes to the understanding of arthritis and its treatment as part of the international research community. Work funded by the Haywood Foundation has been published on over 200 scientific papers in international journals.
One of the current research projects underway at the Haywood Rheumatology Centre, through funding provided by the Haywood Foundation, focuses on the role of epigenetic factors (factors which cause changes to the DNA/genes) in the development of rheumatoid arthritis (RA). This is a chronic disease of the joints in which the immune system attacks the joints causing inflammation and damage, leading to functional impairment and disability.
The project has two aims: 1) identify novel epigenetic factors involved in the development of RA; and 2) investigate whether these new factors might be used to help predict which patients will respond to treatment.
To do this, we are exploiting recent advances in technology that allow us to examine the DNA of patients on a much larger, 'genome-wide' scale - whereas previous methods allowed us to study a single gene at a time, we are now able to analyse more than 20,000 genes simultaneously. Furthermore, and in contrast with other studies that have examined DNA derived from all of the cells in blood together, we are taking a unique approach and looking at individual types of cell that are important in this disease. This includes different types of white blood cell (T-cells and B-cells), and also a type of cell from the joint itself (fibroblasts). With this approach, and for the first time, we have discovered previously unknown disease-specific changes to the DNA in T-cells and B-cells from patients with RA. We have also identified similar changes in fibroblast cells from the joint in these patients. Identification of these new targets has contributed to a better understanding of the factors involved in joint inflammation and damage, and will be important for the development of new therapies to treat the disease.
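To give a feel for what analysing more than 20,000 genes simultaneously involves computationally, here is a toy sketch (synthetic numbers, not the study's data or methods): compare each gene's average signal between patient and control cells and flag the largest disease-specific differences.

```python
# Toy sketch: flagging disease-associated genes from a genome-wide comparison.
# Synthetic data stands in for real patient/control measurements.
import numpy as np

rng = np.random.default_rng(42)
n_genes = 20_000
patients = rng.normal(0.0, 1.0, size=(30, n_genes))  # 30 RA patients
controls = rng.normal(0.0, 1.0, size=(30, n_genes))  # 30 healthy controls
patients[:, :50] += 1.5  # plant a synthetic "disease signal" in 50 genes

diff = patients.mean(axis=0) - controls.mean(axis=0)
flagged = np.argsort(np.abs(diff))[-50:]              # top 50 differential genes
print(f"{len(flagged)} candidate genes flagged; e.g. indices {sorted(flagged)[:5]}")
```

Real analyses add statistical testing and multiple-comparison correction, but the shape of the problem is the same: one comparison per gene, repeated genome-wide.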
Using these methods we are also studying whether changes to the DNA in T-cells and B-cells can be used to 'predict' the likely treatment response in newly diagnosed patients with RA. The ability to identify at diagnosis which patients are likely to respond to treatment will be a highly significant advancement for improved clinical management of this disease and would have direct patient benefits, for example through the targeted use of treatments that are most likely to be effective (and the earlier use of alternative therapies in those patients who are less likely to respond to standard treatments). | http://www.haywoodfoundation.org.uk/research_funding.html |
LITERARY CRITICISM -- Books & Reading
The concept LITERARY CRITICISM -- Books & Reading represents the subject, aboutness, idea or notion of resources found in University of San Diego Libraries.
- Label
- LITERARY CRITICISM -- Books & Reading
- Source
- bisacsh
Context of LITERARY CRITICISM -- Books & Reading
Subject of
- 'Paper-contestations' and textual communities in England, 1640-1675
- A dictionary of literary devices : gradus, A-Z
- A rationale of textual criticism
- A reader on reading
- Addressing the letter : Italian women writers' epistolary fiction
- Agent of change : print culture studies after Elizabeth L. Eisenstein
- Agitations : essays on life and literature
- American children through their books, 1700-1835
- American literature and the culture of reprinting, 1834-1853
- Andrés González de Barcia and the creation of the colonial Spanish American library
- Anti-book : on the art and politics of radical publishing
- Around the book : systems and literacy
- Art and Craft: Thirty Years on the Literary Beat
- Bibliography and the book trades : studies in the print culture of early New England
- Bound to read : compilations, collections, and the making of Renaissance literature
- British India and Victorian literary culture
- British children's fiction in the Second World War
- Come, bright improvement! : the literary societies of nineteenth-century Ontario
- Companionship in grief : love and loss in the memoirs of C.S. Lewis, John Bayley, Donald Hall, Joan Didion, and Calvin Trillin
- Controlling readers : Guillaume de Machaut and his late Medieval audience
- Dirt for art's sake : books on trial from Madame Bovary to Lolita
- Early Canadian printing : a supplement to Marie Tremaine's A bibliography of Canadian imprints, 1751-1800
- Editing modernity : women and little-magazine cultures in Canada, 1916-1956
- Editors, scholars, and the social text
- Forms and meanings : texts, performances, and audiences from codex to computer
- From codex to hypertext : reading at the turn of the twenty-first century
- Grand strategies : literature, statecraft, and world order
- Hot books in the Cold War : the CIA-funded secret book distribution program behind the Iron Curtain
- How to do things with books in Victorian Britain
- In the Public Eye : a History of Reading in Modern France, 1800-1940
- Institutions of reading : the social life of libraries in the United States
- John Dryden : a survey and bibliography of critical studies, 1895-1974
- Knowing books : the consciousness of mediation in eighteenth-century Britain
- Literary Studies and the Pursuits of Reading
- Literature of an independent England : revisions of England, Englishness and English literature
- Little magazine, world form
- Making the modern reader : cultural mediation in early modern literary anthologies
- More day to dawn : Thoreau's Walden for the twenty-first century
- Neatness counts : essays on the writer's desk
- No trespassing : authorship, intellectual property rights, and the boundaries of globalization
- Off the books : on literature and culture
- Play and the politics of reading : the social uses of modernist form
- Popular print and popular medicine : almanacs and health advice in early America
- Pressing the fight : print, propaganda, and the Cold War
- Print culture and the Blackwood tradition, 1805-1930
- Prizing literature : the celebration and circulation of national culture
- Protocols of reading
- Reading Frames in Modern Fiction
- Reading children : literacy, property and the dilemmas of childhood in nineteenth-century America
- Reading contagion : the hazards of reading in the age of print
- Reading culture and writing practices in nineteenth-century France
- Reading popular Newtonianism : print, the Principia, and the dissemination of Newtonian science
- Reading women : literacy, authorship, and culture in the Atlantic world, 1500-1800
- Reading women : literary figures and cultural icons from the Victorian age to the present
- Removable type : histories of the book in Indian country, 1663-1880
- Renegade : Henry Miller and the making of Tropic of Cancer
- Romantic readers : the evidence of marginalia
- Silent reading and the birth of the narrator
- Southern bound : a Gulf coast journalist on books, writers, and literary pilgrimages of the heart
- Still in print : the Southern novel today
- Textual cultures of medieval Italy
- The Harlem Renaissance and the Idea of a New Negro Reader
- The crafty reader
- The culture of the book in Tibet
- The early Christian book
- The event of literature
- The hidden history of South Africa's book and reading cultures
- The literary legacy of the Macmillan Company of Canada : making books and mapping culture
- The myth of print culture : essays on evidence, textuality and bibliographical method
- The power of knowledge : how information and technology made the modern world
- The professional literary agent in Britain, 1880-1920
- The rub of time : Bellow, Nabokov, Hitchens, Travolta, Trump : essays and reportage, 1994-2017
- The space of the book : print culture in the Russian social imagination
- The woman reader
- Thinking outside the book
- Twentieth-century sentimentalism : narrative appropriation in American literature
- Uncle Tom's cabin and the reading revolution : race, literacy, childhood, and fiction, 1851-1911
- Uncommon readers : Denis Donoghue, Frank Kermode, George Steiner and the tradition of the common reader
- Unpacking my library : writers and their books
- Used books : marking readers in Renaissance England
- Victorian periodicals and Victorian society
- What Middletown read : print culture in an American small city
- Why Trilling matters
- Women's bookscapes in early modern Britain : reading, ownership, circulation
| http://link.sandiego.edu/resource/afGv2JtYB3E/ |
Last week’s Food and Drink Federation Awards ceremony was once again a welcome reminder of the excellent work being carried out in this industry, in areas such as exports, environmental leadership, education, health and innovation.
Opening the evening, director general of the Food and Drink Federation Ian Wright noted the record number of entries received this year which highlight the “quality, variety and depth of talent across the industry”.
Food and Drink Federation president Gavin Darby, chief executive of Premier Foods, also addressed attendees. “This industry is unique in many ways,” he said. “One distinctive quality is its geographical spread – no other manufacturing industry delivers an operation or factory in every single political constituency.”
At a time of uncertainty for the UK food and drink sector, returning host Hardeep Singh Kohli, broadcaster, writer and comedian, reminded guests that “whatever is coming over the next few months, we have the best of the best and need to make sure people know about it”.
He summed up the night saying, “This is an industry I love, full of people working really hard.”
It was an honour to sit on the judging panel for the third time this year and to celebrate the achievements of both businesses and individuals at the ceremony.
The list of winners can be found here. | https://www.foodanddrinktechnology.com/18494/editors-blog/cause-for-celebration/ |
Q:
The correct formula for weighted average
The formula for a weighted average is: sum of values, multiplied by respective weights, divided by count of values. Right? That’s what I thought it was until I saw other variations, which are basically the expected value—no dividing by the count of values.
Does the weighted average require dividing the sum of weighted values by their count or is it exactly the same as the expected value formula?
A:
The general idea is that a weighted average (or mean) of a variable $x$ with weights $w$ is $$\bar x = \sum_i w_i x_i / \sum_i w_i.$$
If additionally or alternatively, weights are defined as adding to $1$, say in this notation $w'_i = w_i/\sum_i w_i$, then it follows that you can write $\bar x = \sum_i w'_i x_i$.
The usual unweighted average fits this pattern too. Consider the average $(1 + 2 + 3)/3,$ which we could write in terms of $w_i = 1/3$
$$[(1/3) 1 + (1/3) 2 + (1/3) 3] / [(1/3) + (1/3) + (1/3)]$$
or in terms of $w_i = 1$
$$[1 \times 1 + 1 \times 2 + 1 \times 3] / [1 + 1 + 1]$$
or indeed using any other positive constant $w_i, 42, 666,$ or whatever else takes our fancy.
The more general weighted average is often used without using that name. Suppose the variable is the number of bedrooms per household, and in $100$ households we observe $1$ bedroom $30$ times, $2$ bedrooms $30$ times and $3$ bedrooms $40$ times. Then the appropriate average uses the frequencies as weights, and is thus $(30 + 60 + 120) / 100 = 2.1$.
The weights do not have to be integers or even counts or frequencies. Thus one simple moving average (in time series analysis, and in some other contexts) is often presented as $0.25 \times$ previous value $+\ 0.5 \times$ present value $+\ 0.25 \times$ next value. That one was called Hanning by John W. Tukey, after Julius von Hann.
There is no requirement that weights are all positive, just that their sum $\sum w_i$ is positive. For example, negative weights arise naturally for certain moving averages in time series analysis. And zero weights are allowed too in a definition, and just result in values not being included at all. An example would be a moving average using a finite window, where implicitly or explicitly observations not in the window get zero weight.
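To make the arithmetic concrete, here is a minimal Python sketch implementing $\bar x = \sum_i w_i x_i / \sum_i w_i$. The function name `weighted_mean` and the sample data are mine, purely for illustration:

```python
def weighted_mean(values, weights):
    """Weighted mean: sum of w_i * x_i divided by the sum of the weights."""
    total_weight = sum(weights)
    if total_weight <= 0:
        raise ValueError("weights must have a positive sum")
    return sum(w * x for x, w in zip(values, weights)) / total_weight

# Frequencies as weights: 30 households with 1 bedroom, 30 with 2, 40 with 3.
print(weighted_mean([1, 2, 3], [30, 30, 40]))   # 2.1

# Rescaling the weights by any positive constant leaves the result unchanged.
print(weighted_mean([1, 2, 3], [3, 3, 4]))      # 2.1

# The ordinary average is the special case of equal weights.
print(weighted_mean([1, 2, 3], [1, 1, 1]))      # 2.0

# Hanning moving-average weights (0.25, 0.5, 0.25), which already sum to 1.
print(weighted_mean([10, 14, 12], [0.25, 0.5, 0.25]))  # 12.5
```

Dividing by $\sum_i w_i$ is what makes the ordinary average a special case; the expected-value form $\sum_i w'_i x_i$ simply presumes the weights have already been normalized to sum to $1$.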
| |
New PGCPS recommendation calls for teachers to shift their teaching methods
Though we are still in a pandemic, school has resumed in person with new protocols put in place by the county to keep everyone safe. One of the many recommendations set out by PGCPS encourages teachers to limit the amount of paper handed out to students, which supports social distancing and helps limit physical contact.
This recommendation has teachers shifting their methods of teaching. Unlike previous years where teachers were accustomed to handing out tons of worksheets, teachers now upload their assignments onto Google Classroom and/or Canvas. These two platforms give students the opportunity to submit assignments directly onto a platform that will allow teachers to grade assignments without having to hand out and receive worksheets from students.
This new system used by numerous teachers took some students by surprise upon arriving at school this past September.
“I was a little surprised because it was one of the things I actually missed about in-person school,” said senior Ozichi Onyejiuwa. “With the decision to return to in-person school, I would assume that a lot of things such as how we work would go back to normal but that was not the case.”
While this method was less common pre-COVID, somewhat surprisingly, some students did not find it hard to adjust to the new norm.
“I got used to it, but sometimes it’s frustrating,” said Onyejiuwa, one of the many students who did not find it very difficult adjusting to the new system.
As teachers continue to use Google Classroom and Canvas, students’ biggest fear is having WiFi issues that can prevent them from submitting assignments on time.
“Having slow wifi or my wifi not working is something I’m still concerned about,” said senior Lesley Velazquez, even though students have returned to the building.
It is evident the fear of wifi issues is something many are worried about, especially when the majority of students’ assignments are found online.
Teachers, on the other hand, are adjusting quickly to the new format. “Canvas has made it so much easier to assign & receive work,” said Parkdale American Sign Language (ASL) teacher Ms. Whitney McDonald.
As students and teachers continue to move forward throughout this school year, it begs the question: will computers replace the use of worksheets in the future?
“I think for certain subjects, it’s much easier to submit everything online,” said Onyejiuwa. “For one, there is less paper wasted. Also, it is a more organized way for teachers to review and grade assignments as well as for students to keep track of their assignments. On the other hand, I think with certain subjects, assignments are easier to do on paper.”
The perks of using Chromebooks in classroom settings have risen over this past year. According to a survey conducted by the Washington Post, a school district in Virginia predominantly uses computers for “quizzes, standardized tests, internet searches, and presentations.” “… The [computers] have several advantages — they make learning more interesting, allow students to move at their own pace and boost collaboration.”
This school year will serve as guidance for upcoming years, even after social distancing guidelines are lifted. We’re living through times that are shaping the way teachers will teach in the future and proving just how effective computers are in classroom settings.
“… I also think that this applies to online teaching as well,” explained Ms. McDonald. “COVID-19 pandemic has been challenging, but it has also helped us notice that lots of tasks can be done online.”
| https://phspawprint.org/1356/news/new-pgcps-recommendation-calls-for-teachers-to-shift-their-teaching-methods/ |
Traditionally, the drummers would ask about the strength of the males in the community, as well as the beauty and womanliness of the females in the tribe. The drums were a basic part of life in the community: the schedule of festivals was announced by the drum, and the drums also marked the deaths and births in the community.
Today, radio, electronic news, and newspapers are used to transmit the news in Nigeria. At first, Nigerian news was transmitted via state-owned electronic media and newspapers, and later via government-owned TV stations as well. During those years, the government of Nigeria had total control over how information was spread to the local people. Later on, however, the government-owned corporations were challenged by privately owned TV stations, newspapers, and electronic news outlets.
Even though there are many alternative ways to get information, few are better than electronic news, which is why taking note of it matters. There are a number of websites on the internet that can provide you with sources of information, and visiting them will keep you aware of the more important news. Among the most popular electronic news sources are those from Nigeria. When it comes to news, many agencies cater to the general taste of the people, and the electronic news from Nigeria is one of them. They say names accurately so that people can fully understand the content. | https://www.world-travel-packages.com/how-to-achieve-maximum-success-with-press/ |
Ninth case of rat lungworm disease in 2019 confirmed on Kauai
HONOLULU (KHON2) — Hawaii health officials have confirmed a case of rat lungworm disease in an adult on Kauai.
This brings the statewide total to nine cases of individuals confirmed with angiostrongyliasis in 2019.
This includes eight individuals who likely contracted the disease on Hawaii Island.
The Kauai adult traveled to Hawaii Island in mid-December and became ill later that month. The individual experienced symptoms of headaches, nausea, vomiting, neck stiffness and joint pain and sought medical care. The investigation was not able to identify an exact source of infection.
“Thoroughly inspecting and rinsing all fresh fruits and vegetables under clean, running water can go a long way in making our food safer to eat, and it is the most effective way to remove pests and other contaminants,” said Dr. Sarah Park, state epidemiologist. “When in doubt, cooking food by boiling for 3 to 5 minutes or heating to an internal temperature of 165 degrees Fahrenheit for at least 15 seconds can kill the parasite that causes rat lungworm disease.”
DOH provides the following recommendations to prevent rat lungworm disease:
Wash all fruits and vegetables under clean, running water to remove any tiny slugs or snails. Pay close attention to leafy greens.
Control snail, slug, and rat populations around homes, gardens and farms. Get rid of vectors safely by clearing debris where they might live, and also using traps and baits. Always wear gloves for safety when working outdoors.
Inspect, wash and store produce in sealed containers, regardless of whether it came from a local retailer, farmer’s market or backyard garden.
For more information about rat lungworm disease and how to prevent its spread, visit: | |
This week on The Biggest Loser, a golden disc of extra poundage much like Survivor‘s “medallion of power” was introduced, celebrity chef Curtis Stone and his asymmetrical collar showed the contestants how to bake low-calorie cupcakes, Bob had everyone below the yellow line over to his house (!!!) for a colorful vegan feast, and two women had an intense come-to-Jillian moment during training.
Two players went home instead of one: Sophia, who fell below the new and ominous RED LINE at the weigh-in, and Burgandy, who was forced by Dr. Ranch Dressing to sit out the elimination challenge due to tendonitis in her leg. Both women’s Biggest Loser Transformation Moments indicate that they’re doing extremely well on their own — Sophia’s been a cheer coach for years (which explains the perky hair ribbons), is obsessed with spinning, and has lost 47 pounds. A cheerleader testified on her behalf. Burgandy’s lost 51 pounds, and she became more likeable in her post-ranch footage. I liked her enthusiasm about interacting with the physical world (and other people) instead of sweating it out in the gym. They’re paying it forward!
I got the impression last week that some of you would prefer to read about specific contestants. So instead of a play-by-play, I’ll recap each remaining player. Here’s how they ranked at the weigh-in.
PERCENTAGE OF WEIGHT LOST
Frado 5.93
Ada 4.27
Brendan 3.37
Rick 2.94
Adam 2.81
Aaron 2.78
Patrick 2.73
***YELLOW LINE***
Jessica 2.68
Lisa 2.63
Elizabeth 2.62
Jesse 2.43
Burgandy 2.29
Mark 2.17
***RED LINE***
Sophia 0.79
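(A quick aside for the numbers-minded: that percentage column is simply pounds lost this week divided by last week's weight. Here's a tiny Python check against figures quoted later in this recap; the helper name is mine.)

```python
# Percentage of weight lost = pounds lost this week / weight at the previous weigh-in.
def pct_lost(pounds_lost, current_weight):
    previous_weight = current_weight + pounds_lost
    return round(100 * pounds_lost / previous_weight, 2)

print(pct_lost(11, 315))  # Brendan: 3.37
print(pct_lost(9, 297))   # Rick: 2.94
print(pct_lost(12, 419))  # Aaron: 2.78
print(pct_lost(10, 356))  # Patrick: 2.73
```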
Frado, the loudest exerciser on the ranch, doesn’t even know his own limits. “Loooooove working out strong people like you, Frado,” Bob marveled with his crazy Bob eyes. “LOVE IT.” And Frado’s loving the fruits of his labor, even if he does try to get away with slacking off when the trainers aren’t looking. Before The Biggest Loser, Frado was taking seven injections and a small pharmacy of pills every day. Now he’s off medication. “It shows you that exercise is the purest form of pharmaceuticals,” raved the newly reformed Staten Island papa. “The body will give you what it needs as long as you show it some love.”
Ada had a banner week, practically begging for an emotional breakdown from Jillian during the lengthy weigh-in taping. (Everyone needs one! It's a rite of passage.) She lost 10 pounds and is consistently strong on the ranch, but had trouble giving herself any credit for her achievements. Ada became my favorite person of the season this week — not because of her terrible, abusive childhood (her brother drowned in a pool next to her when she was 2 or 3, her parents blamed her, gave her the impression it should have been her who died instead, considered her worthless because she was always fat) but because she's so smiley and upbeat and genuinely earnest despite all that. Jillian ran her through the wringer physically in the gym in order to get her to open up emotionally, outside. Also, Ada seems to be an honorary member of the B-F-P alliance.
Brendan has big plans to vote Adam out as soon as he (along with his alliance of Frado and Patrick) gets the chance. “He’s so cocky,” Brendan muttered after Adam had the audacity to compete in the cupcake challenge. He lost 11 pounds and is down to 315.
NEXT: The award for the best pre-commercial faces at the weigh-in goes to…
Rick nearly swiped the gold circle in the temptation challenge, but Adam found the PINK CUPCAKE with a RASPBERRY ON TOP at the last second. Rick lost 9 pounds this week and now weighs 297. He said he couldn’t remember the last time he was under 300. My favorite Rick moment of the episode was at the end of the “Subway has breakfast!” segment, when he cracked himself up after calling the food at Subway “delicious.”
Adam dropped 10 pounds, despite having consumed 1350 extra calories in the cupcake challenge. It was worth it to win the gold circle, he said. “It wasn’t the temptation of the cupcakes; it was the temptation of wanting to solidify my place on the ranch.” Each week Adam holds on to the gold circle, he’ll get an extra pound to cash in at the weigh-in. It could eventually be a huge advantage. For now, the Brendan-Frado-Patrick alliance has decided Adam is afraid of them.
Aaron lost 12 pounds and is down to 419 in “the first week I actually felt like I earned my spot,” he said. It was a huge week for Aaron because Jillian helped him realize he could actually run. His legs felt like Jell-O on the treadmill but he pushed through, thinking of his son. I like the constant encouragement Aaron is getting from Frado in the gym.
Patrick had a relatively quiet episode; what stuck with me was that 100 calories means so much more to him now than in his pre-ranch days, when he was eating 5500 calories just to maintain his 400-pound heft. He lost 10 pounds and is down to 356.
Jessica has the best pre-commercial faces at the weigh-in. Is she stunned because of a huge loss or stunned because of a huge failure? It could go either way! The editors love her. Jessica didn’t have to compete in the elimination challenge — Frado saved her because she was less than a pound away from making it above the yellow line. Seven pounds in a week: “not something to be upset about.”
Lisa is a hugger! She barreled in for Bob after he announced the invitation to his house, and she lingered in an embrace with Curtis Stone that was reminiscent of Brendan’s extra-long hug with Anna Kournikova last week. Lisa kept herself safe by rolling out a 900-pound rug faster than Elizabeth. That elimination was totally stacked in favor of the burly-armed men, huh?
Elizabeth placed fourth/last in the elimination rug-rolling challenge, but just finishing was a triumph, and the injured Burgandy ended up being no contest for her at the final vote. Elizabeth’s come-to-Jillian moment was, in her own way, just as profound or more so than Ada’s. Jillian convinced her she was using her asthma as an excuse for her physical limitations. I am in complete awe of Jillian sometimes.
NEXT: The Jillian-Elizabeth exchange worth rewinding.
Jillian: What are you choking down?
Elizabeth: It’s burning to breathe.
Jillian: No. That’s not it.
She then got Elizabeth to admit she doesn’t want to die like her father! THANKS, FREUD. But seriously, this was a huge breakthrough for Elizabeth, who agreed with her guru that it’s counter-productive to stifle her emotions like she stifles her ability to breathe. “There’s nothing wrong with you,” said Master J. “The more you challenge, the more you stuff, the more you suffocate.”
Jesse, who lost only 8 pounds this week, finished the carpet-unrolling elimination challenge in second place — a strong comeback from his poor showing as a participant in the cupcake challenge. I loved this quintessentially Biggest Loser line from him: “Everything basically changed when Adam and Rick LAPPED me in cupcakes.” Surprisingly, Bob and Jillian didn’t kick his ass for eating three cupcakes. “You’re trying to stay here; who could really fault you for that?” Jillian wondered. What had happened to Jillian? “Where’s, like, the hellfire?” Sophia wondered.
Mark hurt his back after a rowing machine mishap, but valiantly told Frado not to save him; he was okay to compete in the elimination challenge. And he won! “I just nailed that thing,” said Mark, in reference to the 900-pound rug he rolled out. Mark doesn’t fear Bob, Jillian, or the Temptation Challenges. He just doesn’t want to go home early.
Was anyone else reminded of Seinfeld (“Who? Who doesn’t want to wear the ribbon?”) when Jillian barked at her lowly sufferers, “WHO? WHO IS GOING TO BE BELOW THE RED LINE?”
WHO? Who are you rooting for, so far? Are you interested in any of the “players left behind,” who are about to show up on campus next week? Does Bob inspire you to put some color back into your world? Discuss this week’s show, below!
| https://ew.com/recap/biggest-loser-season-10-episode-4/ |
As a (Senior) Advisor for the action field ‘Income generation/livelihoods’ you will support the implementation of our partner projects as well as the conceptual development of the projects’ livelihood strategy. Based on constant monitoring of the project activities, this includes providing advice on the possible redesign or adjustment of measures and the planning of follow-up and new livelihoods measures. Frequent visits to Mosul and other districts in Nineveh are necessary.
A. Context
- The objective of the projects ‘Recovery and Rehabilitation Mosul’ (RRM) and ‘Stabilizing Livelihoods in Nineveh’ (SLN) is to strengthen the resilience of vulnerable populations in selected communities in Nineveh, specifically in the districts of al-Hamdaniya, Mosul, Sinjar, Tel Afar and Tel Kaif. The projects focus on achieving structural impacts at the household and subnational levels by strengthening public social services, access to income and peacebuilding measures. The projects further aim at creating synergies between so called hard components e.g. construction and soft components including programming in civil conflict transformation and promoting peaceful coexistence. Thus, the projects comprise the following four fields of action 1) rehabilitation of basic public social infrastructure, 2) Income generation/livelihoods 3) approaches to peacebuilding and 4) prospects for youth.
The action field ‘Income generation/livelihoods’ aims at a structural impact at the household level by improving access to measures designed to support independent income generation in the medium and short term (vocational training, support for start-ups, cash for work). In this context, the project also focuses on creating opportunities for a fresh start for the local population and returnees.

B. Overall Purpose of Post
C. Tasks and Responsibilities
General Responsibilities
The (Senior) Advisor for the action field ‘Income generation/livelihoods’ is responsible for
- Oversee and monitor day-to-day project activities, providing technical support to ensure the project activities are implemented in line with the planned timeline and results;
- Support the management of the grant agreements with several international NGOs;
- Promptly report to the Project Manager and relevant actors whenever issues that require their attention arise;
- Provide technical advice on ongoing efforts for successful implementation as well as on the design and development of complementing livelihood activities;
- Represent the unit at partner meetings, stakeholder meetings and further technical platforms for exchange;
- Analyze and report on relevant outputs from partner meetings, cluster meetings and further technical platforms for exchange;
- Keep up-to-date with the intervention strategies of local and national implementation partners (think tanks, NGOs, etc.), as well as governmental approaches to and strategies on reconciliation and peacebuilding;
- Coordinate and cooperate with all relevant stakeholders, including identification of interfaces and synergies;
- Coordinate closely with the monitoring and evaluation unit to assess impact and coverage of activities, as well as the quality and relevance of the context monitoring system.

D. Required qualifications, competences and experience
General Qualifications
- Degree in Development Economics, economics, social science, political science, law or related field of study.
- At least 6 years of professional experience in a field relevant to the position, preferably in an INGO or international development organization.
- Very good command of ICT technologies.
- Strong coordination and organization skills, with attention to detail and deadlines in the effective implementation, planning, monitoring and evaluation of the project, as well as in reporting project activities in a context- and conflict-sensitive manner.
- Good communication and networking skills.
- Excellent communication skills in English, Kurdish and Arabic.
Specific Qualifications
- Experience in supporting and monitoring livelihood or recovery programs in Iraq or a similar context.
- Context knowledge of the implementation area (Ninewa).
- Experience with micro-financing and/or grant schemes.
- Experience in liaising with government authorities, other national/international technical counterparts and NGOs, and building effective partnerships.
- Experience in working with multi-cultural teams. | http://www.mselect.iq/jobs/senior-advisor-livelihoods |
The Secretary-General is pleased to announce the following job opening: Executive Director, United Nations Office on Drugs and Crime. In order to ensure a wide pool of candidates for this position, the United Nations Secretariat welcomes applications to supplement the Secretary-General’s search and consultations.
UNODC is the global leader in the fight against illicit drugs and international crime. Established in 1997 through a merger between the United Nations Drug Control Programme and the Centre for International Crime Prevention, UNODC operates in all regions of the world through an extensive network of field offices.
The Executive Director is accountable to the Secretary-General and is responsible for all the activities of the UNODC as well as its administration. The core strategic functions of the Executive Director include:
- Coordinating and providing effective leadership for all United Nations drug control and crime prevention activities in order to ensure coherence of action within the Office as well as the coordination, complementarity and non-duplication of such activities across the United Nations system;
- Representing the Secretary-General at meetings and conferences on international drug control and crime prevention;
- Acting on behalf of the Secretary-General in fulfilling the responsibility that devolves upon him or her under the terms of international treaties and resolutions of United Nations organs relating to international drug control or crime prevention.
The Director-General of UNOV is accountable to the Secretary-General and is responsible for all activities of the UNOV. The Director-General serves as the representative of the Secretary-General; performs representation and liaison functions with the host Government, permanent missions and intergovernmental and non-governmental organizations based in Vienna; provides executive direction and management to the programme on the peaceful uses of outer space; provides executive direction and management to the programmes of administration, conference services and other support and common services; is responsible for the management of the United Nations facilities in Vienna; and provides executive direction for the work of the United Nations Information Service in Vienna.
The Secretary-General is seeking an individual with:
- Demonstrated extensive knowledge and experience in the area of drug control, crime prevention and international terrorism in the context of sustainable development and human security with a track record of accomplishment at the regional, national or international level;
- Ability to be a powerful and convincing advocate on all aspects of the fight against illicit drugs and international crime and the broader sustainable development agenda worldwide and within the United Nations system;
- Demonstrated leadership experience with strategic vision and proven skills in leading transformation in, and managing complex organizations, such as intergovernmental, international non-governmental or multinational private sector entities;
- Proven track record of change management in complex organizations and accomplishments at the regional, national or international level with strong resource mobilization, political and diplomatic skills;
- Demonstrated ability to work harmoniously in a multi-cultural team and establish harmonious and effective working relationships both within and outside the organization;
- High commitment to the values and guiding principles of the United Nations and familiarity with the United Nations system, including peacekeeping, humanitarian, human rights and development settings and challenges.
To Apply
Applications must include a detailed curriculum vitae with full contact information (e-mail and telephone). Applications must be sent to the Secretariat of the United Nations at the following e-mail address: [email protected]
Deadline: Monday 1 July 2019.
Applications of women candidates is strongly encouraged.
Further information on UNODC and UNOV is available in the Secretary-General’s bulletins ST/SGB/2004/5 and ST/SGB/2004/6, and on the following website: https://www.unodc.org/
Human Rights Screening
Individuals who seek to serve with the United Nations in any individual capacity will be required, if short-listed, to complete a self-attestation stating that they have not committed, been convicted of, nor prosecuted for, any criminal offence and have not been involved, by act or omission, in the commission of any violation of international human rights law or international humanitarian law.
Conflicts of Interest
All United Nations staff members are expected to uphold the highest standards of efficiency, competence and integrity. Senior leaders in particular, have the responsibility to serve as role models in upholding the organization’s ethical standards. A conflict of interest occurs when, by act or omission, a staff member’s personal interests interfere with the performance of his/her official duties and responsibilities, or call into question his/her integrity, independence and impartiality. Risk for conflicts of interest may arise from a staff member’s engagement in outside (non-UN) employment or occupation; outside activities, including political activities; receipt of gifts, honours, awards, favours or remuneration from external (non-UN) sources; or personal investment. In particular, no staff member shall accept any honour, decoration, favour, gift or remuneration from any Government (staff regulation 1.2 (j)). Where a real or perceived conflict of interest does arise, senior leaders are obligated to disclose this to the organization without delay. In order to avoid real or perceived family influence or preferential treatment and conflicts of interest that could stem from such situations, the UN Staff Rules provide that appointments “shall not be granted to anyone who is the father, mother, son, daughter, brother or sister of a staff member” (staff rule 4.7 (a)).
Short-listed individuals will also be required to complete the pre-appointment declaration of interests for senior positions to identify possible conflicts of interest that may arise and to proactively prevent and manage, as much as possible and in a timely manner, situations in which personal interests may conflict or appear to conflict with the interests of the United Nations, should the individual be appointed to this position. | https://diplomacyopp.com/2019/06/11/call-for-applications-executive-director-at-united-nations-office-on-drugs-and-crime/ |
Occupational Groups:
- Ombudsman and Ethics
- Legal - International Law
- Civil Society and Local governance
- Human Resources
- Conflict prevention
- Corporate Social Responsibility (CSR)
- Democratic Governance
- Labour Market Policy
- Closing Date: 2019-11-10
Click "SAVE JOB" to save this job description for later.
Sign up for free to be able to save this job for later.
IMPORTANT INFORMATION
Close relatives (Close relatives refer to spouse, children, mother, father, brother and sister, niece, nephew, aunt and uncle) of ADB staff, except spouses of international staff, are not eligible for recruitment and appointment to staff positions. Applicants are expected to disclose if they have any relative/s by consanguinity/blood, by adoption and/or by affinity/marriage presently employed in ADB.
This is a senior staff fixed-term appointment for a period of 3 years. This vacancy is open to internal and external applicants.
If the selected candidate is an external hire, the appointment may be extended for a period of up to 3 years per extension, or not renewed. In case of extension, staff may continue in the position for another term of up to 3 years, or be reassigned to any suitable position in ADB.
Fixed-term appointments or assignments are subject to Section 3 of Administrative Order (AO) 2.01 (Recruitment and Appointment) and Section 8 of AO 2.03 (Selection, Talent and Position Management) and its Appendices.
Whether the selected candidate is internal or external, and regardless of the type of appointment, any extension of staff beyond age 60 shall be subject to such terms and conditions determined by ADB, including, where relevant, those provided in Section 10 of AO 2.05 (Termination Policy) and its Appendices.
Overview
Asian Development Bank (ADB) is an international development finance institution headquartered in Manila, Philippines and is composed of 68 members, 49 of which are from the Asia and Pacific region. ADB is committed to achieving a prosperous, inclusive, resilient, and sustainable Asia and the Pacific, while sustaining its efforts to eradicate extreme poverty. ADB combines finance, knowledge, and partnerships to fulfill its expanded vision under its Strategy 2030.
ADB only hires nationals of its 68 member countries.
The position is responsible for establishing and managing the Office of Professional Conduct (OPC). The OPC is established to (i) develop and deliver awareness raising and training on conduct in the workplace; (ii) provide advice to Management and staff on the application of the Code of Conduct; and (iii) contribute to workplace resolutions. The OPC will support an enabling environment for a positive and professional work environment of dignity and mutual respect (regardless of hierarchical role or rank), both at headquarters and in the field, and contribute to the resolution of concerns about workplace conduct in a constructive and timely manner.
Job Purpose
The Director, OPC will provide guidance to ADB staff and all persons covered by the Code of Conduct (Covered Persons) (both in headquarters and in the field) on their questions relating to the Code of Conduct and how to resolve their workplace concerns. Together with the Professional Conduct Coordination Committee, Budget, Personnel, and Management Systems Department (BPMSD), Office of Anticorruption and Integrity (OAI), and the Office of the Ombudsperson (OOMP), and related offices, Director, OPC will monitor and assess the effectiveness of ADB’s policies, procedures, controls and systems with regard to professional conduct, identifying gaps and needs, and recommend modifications and improvements to existing rules and systems; and support and collaborate with ADB’s Departments and Offices to enhance and promote professional conduct; and advise Management on strategies that can be developed to foster and encourage professional conduct in ADB. The incumbent will report to the President, who may delegate the day-to-day oversight to the Vice-President for Administration and Corporate Management; and will supervise International Staff, National Staff, and Administrative Staff.
Responsibilities
Training and Awareness Raising
· Reinforces and promotes a clear understanding among Management and staff of the staff rules and regulations, procedures and practices regarding the standards of conduct that ADB requires its Management and staff to adhere to at all times.
· Educates Covered Persons on their obligations under the Code and provides advice to them on how to ensure their compliance.
· In consultation with the Professional Conduct Coordination Committee, BPMSD, OAI and OOMP, develops and rolls out trainings and capacity building programs, including mandatory training programs for both new and current ADB staff members and consultants on professional conduct.
· In consultation with the Professional Conduct Coordination Committee, BPMSD, OAI and OOMP, identifies high-priority areas, and develops and roll-out targeted trainings and capacity buildings to respond to such needs at both headquarters and field offices.
· In collaboration with the Professional Conduct Coordination Committee, BPMSD, OAI and OOMP supports the development and implementation of tools for promoting and enhancing professional conduct (e.g., communications strategies).
· Develops and delivers presentations and knowledge products on the importance of professional conduct, and the role of the OPC in promoting these values.
· Contributes to ADB’s broader campaigns on positive workplace behavior and organizational health.
Individual Advice on the Application of the Code of Conduct
· Provides advice to Covered Persons on questions they may have relating to the Code of Conduct and related policies and procedures.
· Guides Covered Persons in understanding their obligations in practice, and to apply these rules to their individual situations.
· Serves as the central source of clearance for certain activities, as provided for under the Code of Conduct, (e.g. gifts, external activities, public statements), in coordination with the relevant departments.
Workplace Resolutions
· Receives Covered Persons who have concerns about the workplace and helps them assess the issue and determine the most appropriate method for resolution.
· Provides guidance on the different workplace resolution options available to them and what to expect from these options.
· Obtains the involvement of relevant parties to promote a resolution. This may include colleagues, supervisors, Head of Department, BPMSD, or OOMP.
· Refers all concerns relating to integrity violations to OAI.
· Refers concerns relating to misconduct other than integrity violations (i.e., other misconduct) to OAI for investigation if the OPC determines that an investigation is warranted.
· Develops and maintains procedures for the handling and processing of workplace concerns with due regard to confidentiality obligations commensurate with the functions of the OPC.
· Undertakes follow-up of cases as needed.
· Develops and maintains a records management system for all cases handled by OPC.
Policy Review
· In consultation with the Professional Conduct Coordination Committee, BPMSD, OAI, OOMP and other relevant offices, evaluates the effectiveness of existing policies, procedures, controls and systems for enforcing accountability and mitigating risks among ADB staff members, in the area of professional conduct, and identifies areas for improvement.
· Prepares and submits regular reports to the President on OPC operations.
· Develops a system to gather and analyze statistical data on cases and concerns brought to the OPC.
Staff Supervision
· Creates and leads multi-disciplinary teams and ensures the overall quality of work.
· Manages the OPC, and supervises the performance of teams and individuals, providing clear direction and regular monitoring and feedback on performance.
· Provides coaching and mentoring to teams and individuals and ensures their on-going learning and development.
Others
· Provides regular feedback to the President on OPC’s activities.
· Prepares an annual report to all staff on its activities, which preserves confidentiality.
· Undertakes other work as may be assigned by the President, related to carrying out of the Job Purpose.
Desired Skills and Experience
Relevant Experience & Requirements
· Master’s degree in ethics, law, corporate governance, human resources or other related fields. University degree in ethics, law, corporate governance, human resources or other related fields, combined with relevant experience in similar organization/s may be considered in lieu of a Master’s degree.
· Demonstrated leadership in professional work relevant to the position.
· At least 15 years of relevant professional work experience demonstrating progression of responsibilities in areas such as conflict resolution, organizational management, development and ethics, corporate responsibility and/or corporate governance, employment law, human resources or other fields that demonstrate application of analytical skills with sound judgment.
· Demonstrated mediation and/or negotiation skills.
· Experience in constructive handling of concerns relating to bullying, harassment, sexual harassment and/or retaliation.
· Ability to perform under pressure and interact with others with the utmost diplomacy and professionalism at all times.
· Ability to balance multiple work priorities effectively and adapt priorities in any environment.
· Demonstrated ability to work with multiple stakeholders to build consensus and achieve constructive outcomes.
· Demonstrated teamwork (ability to work with others to achieve effective results), leadership (apply interpersonal influence to inspire others to move in a meaningful direction with competence and commitment), and conceptualization skills (developing viable solutions based on an understanding of institutional perspectives and needs).
· Excellent oral and written communication skills in English, including the ability to clearly and concisely prepare, present, discuss and defend issues, findings, and recommendations at senior levels and to produce briefs, reports, papers, etc.
· International experience working in several countries, with diverse groups and issues.
· Strong emotional intelligence with excellent interpersonal skills, and the ability to exercise sound and independent judgment, prudence and maturity in complex and sensitive cases.
· Ability to work with discretion in handling sensitive and confidential matter.
General Considerations
The selected candidate, if new to ADB, is appointed for an initial term of 3 years.
ADB offers competitive remuneration and a comprehensive benefits package. Actual appointment salary will be based on ADB’s standards and computation, taking into account the selected individual’s qualifications and experience.
ADB seeks to ensure that everyone is treated with respect and given equal opportunities to work in an inclusive environment. ADB encourages all qualified candidates to apply regardless of their racial, ethnic, religious and cultural background, gender, sexual orientation or disabilities. Women are highly encouraged to apply.
Please note that the actual level and salary will be based on qualifications of the selected candidate. | https://www.impactpool.org/jobs/543559 |
We wrap up our International Women’s Day Campaign with a Reflection piece on Gender and “the Cyprus problem” by Sophia Papastavrou in which she encourages those with a vested interest in peace building processes to think critically and explore openly the consequences “resisting gender” will bring to Cyprus.
We would also like to draw attention to a Blog by Rachel Warden, the Gender Justice Program Coordinator at KAIROS, which was posted on rabble.ca on International Women’s Day. In the piece, Rachel discusses the work Naty Atz, a highly respected defender of human rights in Guatemala, is doing to raise awareness about the impact resource extraction is having on Indigenous peoples’ lives.
Please take some time to read both articles.
Much has been accomplished over the years in the fight for equality and we celebrate our achievements, as we learn from the experiences that were not so successful. We also remember that while much ground has been covered, the journey is long and the road to change is paved with dust; sharp bends; speed bumps; potholes and roadblocks. Despite this, we keep moving forward knowing that we don’t journey alone. Knowing, that even when we can’t see it, change is happening and progress is being made.
In the course of the last two weeks we profiled several activists and activist organizations who are tackling inequalities in their local and global communities:
- Diane Redsky, the Executive Director at Ma Mawi Wi Chi Itata Centre, Inc, shed light on the issue of Sex Trafficking in Canada, the risk factors involved and work that is being done to end this form of modern day slavery. We are reminded in her interview that “We all have the power to work together to make Canada a safer place for women and girls.”
- The Grandmothers Advocacy Network discussed how the philosophy of Ubuntu guides the work they do in Sub-Saharan Africa, and they invited anyone (including those who are not grandmothers :)) with a passion for social justice, equality and human rights to join them.
- In her interview, Jessica Chandrasheka, calls our attention to the fact that in Sri Lanka, justice remains elusive for many of the victims and survivors of the Tamil Genocide. She discusses the importance of solidarity activism and reminds us that “we must continue to speak out even when our voices are not always heard, because the struggle for peace and justice can unfortunately be a long and arduous one.”
- Amnesty International shared with us how they commemorated International Women's Day in Ottawa, in Toronto and in Regina. They have several International Women's Day priority items that run through the spring, which means that there is still time to take action! Please read their interview for more information.
- Sarah Tuckey talked about the role of education in creating social change, the important role social media platforms are playing in consciousness raising and advocacy efforts and the importance of walking the talk: “If individuals, groups, and entire nations call for gender equality, we need to come up with ways to create true equality and balance in our communities and beyond”
- The Nobel Women’s Initiative discussed some of the projects they are working on, and left us with this inspiring quote by Jody Williams:“Worrying about an issue is not a strategy for change. Ordinary people can accomplish extraordinary things when they work together”. They also left us with this activism action plan: Get involved, get organized and celebrate every small step of success.
- And finally, in her interview, which was posted earlier today, Corrina Keeling talked about the importance of owning privilege, being open to learning, being open to change and the importance of self-love and community support in social justice movements.
We hope you enjoyed reading the interviews as much as we enjoyed profiling them.
As we noted at the beginning of the International Women's Day blog series, the main purpose of the campaign was to draw attention to some of the work that is already being done to make the world a better place, in the hopes that it would inspire and motivate others to step up, get involved and take action. You may choose to walk beside the change-makers profiled above (most of the organizations welcome volunteers), create your own path, or decide to do both. The important thing is to keep moving (and find rest along the way) because "a journey of a thousand miles starts with a single step" (Lao Tzu).
Let’s keep moving forward, towards our goals in 2015, and beyond.
With much gratitude, | https://wpsn-canada.org/2015/03/13/makeithappen-campaign-overview-and-summary/ |
25 Intuition Quotes
Let these Intuition Quotes inspire you to always listen to your inner voice. It's arguable that humans have survived the centuries due to the little voice inside telling you to back off or get stuck in.
-
If your inner voice always said “I'm lost,” simply repeat “I know the way.”
John Francis King, Wise Guy and Other Fables
Inspirational Quotes
-
By learning to trust yourself and hear and believe your intuition you could liberate yourself and realize your true potential.
Neil Crofts, Authentic: How to Make a Living By Being Yourself
Potential
-
Follow what your intuitive feeling is telling you, as opposed to logical reasoning, what you think is right, or what others think.
Jawara D. King, World Transformation
Feeling Quotes
-
Pay attention to your vibes and trust your gut. We all are born with intuition.
Jen Mazer, Manifesting Made Easy
Trust
-
Know your purpose. Feel it. When you go off the path, your intuition will guide you back on sometimes with a tickle, sometimes with a cattle prod.
Rachelle Chartrand, Chrysalis: A Dark and Delicious Diary of Emergence
Purpose
-
As your intuition guides you to live from your heart and Soul, you attract others who live from their heart and soul.
Susann Taylor Shier, Soul Mastery: Accessing the Gifts of Your Soul
Law of Attraction
-
When evil is present, your body knows.
Sky M. Armstrong, Courage, You've Got It!
Wise
-
Over time, as you listen to your intuition, your true voice will become stronger and more clear.
Flora Bowley, Brave Intuitive Painting-Let Go, Be Bold, Unfold!
Time
-
It takes tremendous courage to stop the doubt and questioning and to trust our intuition and feelings.
Dancing with Your Skeletons: Healing Through Dance
Courage Quotes
-
You'll probably be right the first time, so don't try to second-guess yourself. Have the courage and conviction to trust your instincts.
Alan N. Schoonmaker, Your Worst Poker Enemy: Master The Mental Game
Quotes About Strength and Courage
-
And the only way to be so is to listen to and honor our intuition, our deepest knowing, our most powerful inner wisdom. Choosing to claim and follow our sixth sense is our strongest natural protector, our greatest psychic liberation, and the only way to be truly safe in life.
Sonia Choquette, Ph.D., The Time Has Come to Accept Your Intuitive Gifts!
Listening
-
So have courage, because you have a deep wisdom. No one said it would be easy to trust our intuition, and it can be a long, hard road, but when a choice feels right for you, pursue it.
Rebecca Perkins, Best Knickers Always: 50 Lessons For Midlife
Choice
-
Your intuition is a powerful tool that can be sharpened and honed to the point that you can rely on and trust it within the daily context of your life.
Laura Alden Kamm, Intuitive Wellness
Life Quotes
-
Prayer places you in a spiritual connection with your higher power in the way that is useful to you at the time. It is always connected to you because, like your intuition, the power of your higher power lies within you.
Connie Omar, Sacred Journey to Ladyhood: A Woman’s Guide Through Her Write of Passage
Spiritual Quotes
-
A woman in touch with her intuition is a formidable force.
Yvonne J Douglas, Trust Your Intuition Your Protector Your Guide
Strong Women
-
We long for truth but often settle for a lie.
John Meddling, Human Wholeness- the Articles of Self Discovery
Lies
-
Through following intuition, you often go for one thing and get another.
Florence Scovel Shinn, The Magic Of Intuition
Life is Beautiful
-
Increasing our emotional openness leads to a deepening of our relationships. This is another key principle of intuition.
Maura McCarley Torkildson, The Inner Tree
Relationship
-
When in doubt, consult your intuition or gut feelings by asking, "Is this the right decision?" Whatever strong and clear response arises in your body is the answer.
Luna and Sol, Awakened Empath
Decision
-
Trust your intuition and find the rhythm that works best for you. Stick to it. Connect.
Jill Sylvester, LMHC, Trust Your Intuition
Positive Life
-
Remember, you are your own god, goddess, angel, and guide, all rolled into one beautiful, heart-centered human package. You are your own higher self and your own teacher.
Heidi Jane, Intuition on Tap Workbook
Words of Encouragement
-
To be your own bible is to have faith first in your own intuition, in your own heart, in your own experience, your own conscience.
Sunirmalya M. Symons, The Simplest Book God Ever Wrote
Faith
-
Think with your head but also go with the intuitive mind that will take you to places within your self that you have never been before.
Lillian Too, The Book of Golden Wisdom
Believe in Yourself
-
If you need rest, allow yourself to crawl into bed regardless of the time of day. If you need nourishment, visit the supermarket and splurge on a rare treat. Trust your intuition. It is the voice of your soul and will guide you to fulfill your need.
Alan D. Wolfelt, The Mourner's Book of Faith:
Live Life
-
Sitting and waiting can be torture. Intuition can give us a glimmer of hope, solace and guidance during difficult times. | https://www.wow4u.com/intuitionquotes.html |
With the non-conference portion of the schedule in our rear view mirror, it's now time to focus on what truly matters: conference play. As it has been for the last umpteen years, the MAC will be getting only one bid to the NCAA Tournament as really only one team truly impressed during non-conference play. Surely others will get into post-season play, but the number of really good teams in the conference has fallen since last season.
Note: Rankings are compiled through Sunday's games:
| Rank | Team (1st-Place Votes) | Points | Last Week |
|------|------------------------|--------|-----------|
| 1    | Akron (13)             | 21     | 1         |
| 2    | Toledo                 | 43     | 2         |
| 3    | Northern Illinois      | 54     | 4         |
| 4t   | Kent State             | 60     | 3         |
| 4t   | Ohio                   | 60     | 5         |
| 6    | Ball State             | 85     | 10        |
| 7    | Eastern Michigan       | 90     | 6         |
| 8    | Buffalo                | 102    | 7         |
| 9    | Central Michigan       | 104    | 8         |
| 10   | Bowling Green          | 120    | 11        |
| 11   | Western Michigan       | 122    | 9         |
| 12   | Miami                  | 145    | 12        |
The big riser here is Ball State, whose win over Valparaiso just keeps looking better and better. It's true that the Cardinals haven't beaten anyone of note outside of the Crusaders, but they're the feel-good story of the MAC this year. After winning only 7 games last year, a win total of 9 this season feels like an accomplishment on its own. Sure, conference play will knock them down a few pegs. But who knows, maybe this Ball State team can contend this season?
Western Michigan didn't lose this week, but needed to really work to beat 7-9 Jacksonville at home, a team ranked outside of KenPom's top 300. This could be a down year in Kalamazoo, and the MAC didn't give them an easy start to the conference schedule. The Broncos open MAC play tonight against Kent State before heading to Akron on Friday. That certainly smells like an 0-2 start, which is not what WMU needs right now. | https://www.hustlebelt.com/mac-basketball/2016/1/5/10713058/mac-mens-basketball-power-rankings-ball-state-rises-while-western |
2019 has been a memorable year for space exploration. Not only did we mark the 50th anniversary of Apollo 11’s lunar landing, but we’re entering an exciting new age of space exploration. What’s exciting about this new era of space exploration is not necessarily where we’re planning on going, but how we’re planning on getting there through collaboration and partnerships with allies.
Back in 1969 when Neil Armstrong transmitted those famous words, “One small step for a man, one giant leap for mankind,” there was very little collaboration when it came to space exploration. NASA was working with a limited number of industry partners, like Raytheon whose guidance computers steered some of the first space capsules, including Apollo 11. But overall, collaboration was scarce.
Fast forward to today and the situation has changed dramatically. Not only are partnerships between NASA and the private sector burgeoning, opening up new possibilities, and hastening accomplishments, but we’re also turning to our allies to forge partnerships to stay ahead of the competition when it comes to space technology.
At the heart of this revitalized mission is partnership between the United States and the United Kingdom. The long-standing relationship between these two allies is what makes collaboration in this highly sensitive theater possible. And, in turn, the advancements made by sharing key technologies, training capabilities, and mission-critical data will further strengthen these bonds and the benefits they convey.
Most recently, this collaboration has expanded in the private sector to include industry leaders.
“With the rapid advance of technology and explosive growth in the commercialization of space, the old tool kit for our use of space is less and less relevant to the world we live in today,” said Gil Klinger, Vice President of Space and Intelligence at Raytheon. “The question now is how the pace of our evolution in space-based capabilities can maintain the speed of relevance to match or exceed that of the adversary. That is why Raytheon is partnering on the Team ARTEMIS program to advance the UK’s sovereign space capability, utilizing agile development processes to speed innovation, and taking an enterprise approach to satellite ground systems.”
Technology transfer models such as this, which bring US space technology to the UK, enable UK companies and the MOD to develop bespoke solutions faster, deploy satellite constellations, and transmit, receive, and process data more quickly.
“Our relationship with UK companies and the MOD builds on Raytheon’s pioneering work with the US government in transforming how they develop and deploy satellite control and planning systems,” Klinger shared. “We’re leveraging our experience developing ground control systems for GPS and other critical satellite constellations.”
Over the next decade, as these partnerships ramp up and deliver on their promise, we expect to see a new golden age of space exploration. The considerable investments that both the UK and US governments are making in talent, skills, launch facilities, satellites, and neural networks, to name just a few high-value items, will help focus the mission, speed decision-making, and deliver high-quality results. “The work we’re doing in space ground control is very interesting and is mission critical to our customers,” said Klinger. “But what is most important, though, is the way in which they will enhance civil society and protect the nation more effectively,” he concluded.
Introduction {#s1}
============
Episodic memories represent past autobiographical events and include rich details about the context in which those events occur. For example, a memory of your high school prom might include where the dance was held, when it occurred, how you traveled to the dance, and of course, who your date was for the evening. The contextual information encoded during an experience supports the later retrieval of that information, a phenomenon supported by encoding specificity (Tulving and Thomson, [@B114]) and *contextual retrieval* (Hirsh, [@B53]). That is, the content of *what* is remembered about a particular event is often critically dependent on *where* that memory is retrieved. Deficits in contextual retrieval are associated with memory impairments accompanying a variety of neural insults including age-related dementia, traumatic brain injury, stroke, and neurodegenerative disease. As such, understanding the neural circuits mediating contextual retrieval is essential for targeting interventions to alleviate memory disorders and associated cognitive impairments.
Decades of research in both humans and animals have revealed that two brain areas, the hippocampus (HPC) and medial prefrontal cortex (mPFC), are essential for the encoding and retrieval of episodic memories (Kennedy and Shapiro, [@B69]; Hasselmo and Eichenbaum, [@B49]; Diana et al., [@B29]; Preston and Eichenbaum, [@B91]). Indeed, considerable data suggests that communication between these brain areas is essential for episodic memory processes (Simons and Spiers, [@B103]; Preston and Eichenbaum, [@B91]). Anatomically, neurons in the HPC have a robust projection to the mPFC, including the infralimbic (IL) and prelimbic (PL) cortices in rats (Swanson and Kohler, [@B108]; Jay and Witter, [@B60]; Thierry et al., [@B111]; Varela et al., [@B115]). In primates, there are projections that originate from hippocampal CA1 and terminate in the orbital and medial frontal cortices (areas 11, 13, 14c, 25 and 32; Zhong et al., [@B123]). These PFC connectivity patterns seem to be similar in humans and monkeys; for example, both humans and monkeys have fimbria/fornix fibers (which originate from the hippocampus and subiculum) terminating in the medial orbital PFC (Cavada et al., [@B18]; Croxson et al., [@B20]). For these reasons, models of episodic retrieval have largely focused on the influence of contextual representations encoded in the HPC on memory retrieval processes guided by the mPFC (Maren and Holt, [@B80]; Hasselmo and Eichenbaum, [@B49]; Ranganath, [@B92]). Yet emerging evidence suggests that the mPFC itself may be critical for directing the retrieval of context-appropriate episodic memories in the HPC (Navawongse and Eichenbaum, [@B86]; Preston and Eichenbaum, [@B91]). This suggests that indirect projections from the mPFC to HPC may be involved in episodic memory, including contextual retrieval (Davoodi et al., [@B24], [@B25]; Hembrook et al., [@B51]; Xu and Südhof, [@B122]). Moreover, abnormal interactions between the HPC and mPFC are associated with decreased mnemonic ability as well as disrupted emotional control, which are major symptoms of psychiatric disorders such as schizophrenia, depression, specific phobia, and post-traumatic stress disorder (PTSD; Sigurdsson et al., [@B101]; Godsil et al., [@B43]; Maren et al., [@B82]). Here we will review the anatomy and physiology of the HPC-mPFC pathway in relation to memory and emotion in an effort to understand how dysfunction in this network contributes to psychiatric diseases.
Anatomy and Physiology of Hippocampal-Prefrontal Projections {#s2}
============================================================
It has long been appreciated that there are both direct monosynaptic and indirect polysynaptic projections between the HPC and the mPFC (Hoover and Vertes, [@B55]). In rats, injections of retrograde tracers into different areas of the mPFC robustly label neurons in the ventral hippocampus (VH) and subiculum (Jay et al., [@B61]; Hoover and Vertes, [@B55]). In addition, injections of the anterograde tracer, *Phaseolus vulgaris*-leucoagglutinin (PHA-L), into the HPC reveal direct projections to the mPFC (Jay and Witter, [@B60]). Hippocampal projections to the mPFC originate primarily in ventral CA1 and ventral subiculum; there are no projections to the mPFC from the dorsal hippocampus or dentate gyrus. Therefore, the direct functional interactions we discuss below focus on ventral hippocampal and subicular projections to the mPFC. Hippocampal projections course dorsally and rostrally through the fimbria/fornix, and then continue in a rostro-ventral direction through the septum and the nucleus accumbens (NAcc), to reach the IL, PL, medial orbital cortex, and anterior cingulate cortex (Jay and Witter, [@B60]; Cenquizca and Swanson, [@B19]). Afferents from CA1 and the subiculum are observed throughout the entire rostro-caudal extent of the mPFC, with only sparse projections to the medial orbital cortex.
Indirect multi-synaptic pathways from the HPC to mPFC include projections through the NAcc and ventral tegmental area (VTA), amygdala, entorhinal cortex (EC), and midline thalamus (Maren, [@B78]; Russo and Nestler, [@B95]; Wolff et al., [@B121]). These complex multi-synaptic pathways from both subcortical and cortical areas are critically involved in higher cognitive functions that are related to several major psychiatric disorders. For example, it has been reported that NAcc receives convergent synaptic inputs from the PFC, HPC and amygdala (Groenewegen et al., [@B48]). This cortical-limbic network has been shown to mediate goal-directed behavior by integrating HPC-dependent contextual information and amygdala-dependent emotional information with cognitive information processed in the PFC (Goto and Grace, [@B45], [@B46]). In addition, the mPFC projects to the thalamic nucleus reuniens (RE), which in turn has dense projections to the HPC (Varela et al., [@B115]). Importantly, this projection is bidirectional, which provides another route for the HPC to influence the mPFC (Figure [1](#F1){ref-type="fig"}). Interestingly, it has been shown that single RE neurons send collaterals to both the HPC and mPFC (Hoover and Vertes, [@B56]; Varela et al., [@B115]). This places the RE in a key position to relay information between the mPFC and HPC to coordinate their functions (Davoodi et al., [@B24], [@B25]; Hembrook et al., [@B51]; Hoover and Vertes, [@B56]; Xu and Südhof, [@B122]; Varela et al., [@B115]; Griffin, [@B47]; Ito et al., [@B59]). The mPFC also has strong projections to the EC, which in turn has extensive reciprocal connections with hippocampal area CA1 and the subiculum (Vertes, [@B116]; Cenquizca and Swanson, [@B19]). Interestingly, the CA1 and subiculum send direct projections back to the mPFC, allowing these areas to form a functional loop that enables interactions between cortical and subcortical areas during memory encoding and retrieval (Preston and Eichenbaum, [@B91]).
![**Schematic representation of direct and indirect neural circuits between the medial prefrontal cortex and hippocampus/subiculum.** Hippocampal area CA1 and the subiculum (SUB) have strong direct projections to the mPFC, but there are no direct projections from the mPFC back to the HPC. The nucleus reuniens (RE) and the amygdala have reciprocal connections with both the mPFC and HPC. The NAcc receives inputs from the mPFC, HPC, RE and amygdala. The mPFC also projects to the entorhinal cortex (EC), which in turn has reciprocal projections with the HPC. SUB, subiculum; EC, entorhinal cortex; Amy, amygdala; NAcc, nucleus accumbens; RE, nucleus reuniens; mPFC, medial prefrontal cortex.](fnsys-09-00170-g0001){#F1}
The physiology of projections between the HPC and mPFC has been extensively investigated in rodents. These projections consist of excitatory glutamatergic pyramidal neurons that terminate on either principal neurons or GABAergic interneurons within the mPFC (Jay et al., [@B62]; Carr and Sesack, [@B17]; Tierney et al., [@B112]). Electrical stimulation in hippocampal area CA1 or the subiculum produces a monosynaptic excitatory postsynaptic potential (EPSP) followed by fast and slow inhibitory postsynaptic potentials (IPSPs); the latter are due to both feedforward (Jay et al., [@B62]; Tierney et al., [@B112]) and feedback inhibition (Dégenètais et al., [@B27]). Excitatory responses evoked in mPFC neurons by electrical stimulation of the HPC are antagonized by CNQX but not by AP5, indicating that these responses are AMPA-receptor dependent (Jay et al., [@B62]). Hippocampal synapses in the mPFC exhibit activity-dependent plasticity including long-term potentiation (LTP), long-term depression (LTD), and depotentiation (Laroche et al., [@B74], [@B75]; Jay et al., [@B64]; Burette et al., [@B15]; Takita et al., [@B110]). These forms of plasticity are NMDA receptor-dependent and involve activation of serine/threonine kinases such as CaMKII, PKC, and PKA (Dudek and Bear, [@B32]; Bliss and Collingridge, [@B8]; Jay et al., [@B63], [@B65]; Burette et al., [@B15]; Takita et al., [@B110]).
Within the indirect mPFC-RE-HPC pathway, a large proportion of RE projection neurons are glutamatergic (Bokor et al., [@B9]). RE stimulation produces strong excitatory effects on both HPC and PFC neurons (Dolleman-Van der Weel et al., [@B31]; Bertram and Zhang, [@B7]; McKenna and Vertes, [@B83]), suggesting that the RE is capable of modulating synaptic plasticity in both the HPC and mPFC (Di Prisco and Vertes, [@B28]; Eleore et al., [@B36]).
Working Memory {#s3}
==============
Both the HPC and mPFC have been implicated in working memory and mounting evidence suggests that communication between these two structures is critical for this process. Working memory is a short-term repository for task-relevant information that is critical for the successful completion of complex tasks (Baddeley, [@B4]). For example, in a spatial working memory task, animals must hold in memory the location of food rewards to navigate to those locations after a delay. Disconnection of the HPC and mPFC with asymmetric lesions disrupts spatial working memory (Floresco et al., [@B38]; Churchwell and Kesner, [@B21]). PFC lesions disrupt the spatial firing of hippocampal place cells whereas HPC lesions disrupt anticipatory activity of mPFC neurons in working memory tasks (Kyd and Bilkey, [@B73]; Burton et al., [@B16]). This suggests that interactions between the HPC and mPFC are crucial for this form of memory.
One index of the functional interaction of different brain regions is the emergence of correlated neural activity between them during behavioral tasks. For example, simultaneous recordings in the HPC and mPFC reveal synchronized activity during working memory tasks (Jones and Wilson, [@B67]; Siapas et al., [@B100]; Benchenane et al., [@B6]; Hyman et al., [@B58]). Hippocampal theta oscillations (4\~10 Hz), which are believed to be important in learning and memory, are phase-locked with both theta activity and single-unit firing in the mPFC (Siapas et al., [@B100]; Colgin, [@B22]; Gordon, [@B44]). Medial prefrontal cortex firing lags behind the hippocampal LFP, suggesting that information flow is from the HPC to mPFC (Hyman et al., [@B57], [@B58]; Jones and Wilson, [@B67]; Siapas et al., [@B100]; Benchenane et al., [@B6]; Sigurdsson et al., [@B101]). Interestingly, this synchronized activity is not static, but is modulated during tasks associated with working memory or decision-making. Recently, Spellman et al. ([@B105]) used optogenetic techniques to manipulate activity in the VH-mPFC pathway during a spatial working memory task. They found that in a "four-goal T-maze" paradigm, direct projections from the VH to the mPFC are crucial for encoding task-relevant spatial cues, at both neuronal and behavioral levels. Moreover, gamma activity (30\~70 Hz) in this pathway is correlated with successful cue encoding and correct test trials and is disrupted by VH terminal inhibition. These findings suggest a critical role of the VH-mPFC pathway in the continuous updating of task-related spatial information during spatial working memory tasks.
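To make "phase-locked" concrete: synchrony of this kind is typically quantified with a phase-locking statistic computed from band-pass-filtered signals. The sketch below is a minimal illustration, not code from any of the cited studies; the sampling rate, lag, and noise levels are arbitrary synthetic stand-ins for simultaneously recorded HPC and mPFC LFPs.

```python
# Minimal sketch: theta-band (4-10 Hz) phase-locking between two signals,
# the standard way HPC-mPFC theta synchrony is quantified. All signal
# names and parameters here are illustrative, not experimental values.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)     # 10 s of data

# Synthetic stand-ins: a shared 8 Hz theta rhythm with independent
# noise, with the "mPFC" trace lagging the "HPC" trace by 20 ms.
theta = np.sin(2 * np.pi * 8 * t)
lfp_hpc = theta + 0.5 * np.random.randn(t.size)
lag = int(0.020 * fs)
lfp_mpfc = np.roll(theta, lag) + 0.5 * np.random.randn(t.size)

def theta_phase(x, fs, band=(4.0, 10.0)):
    """Band-pass filter to the theta range and extract the
    instantaneous phase via the Hilbert transform."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

# Phase-locking value: magnitude of the mean phase-difference vector.
dphi = theta_phase(lfp_hpc, fs) - theta_phase(lfp_mpfc, fs)
plv = np.abs(np.mean(np.exp(1j * dphi)))
print(f"theta-band PLV: {plv:.2f}")
```

A PLV near 1 indicates a consistent phase relationship across time, as reported for HPC-mPFC theta during working memory; a PLV near 0 indicates none.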
Indirect projections from the mPFC back to the HPC are also involved in working memory. For example, lesions or inactivation of the RE cause deficits in both radial arm maze performance and a delayed-non-match-to-position task that has previously been shown to be dependent on both the HPC and mPFC. This suggests that the RE is required for coordinating mPFC-HPC interactions in working memory tasks (Porter et al., [@B90]; Hembrook and Mair, [@B50]; Hembrook et al., [@B51]). Recently, Ito et al. ([@B59]) proposed that the mPFC→RE→sHPC projection is also crucial for representation of the future path during goal-directed behavior. Therefore, the RE is considered to be a key relay structure for long-range communication between cortical regions involved in navigation (Ito et al., [@B59]). Hence, both direct and indirect connections between the HPC and mPFC contribute to the hippocampal-prefrontal interactions important for working memory processes as well as spatial navigation.
Episodic Memory {#s4}
===============
Episodic memory is a long-term store for temporally dated episodes and the temporal-spatial relationships among these events (Tulving, [@B113]). These memories contain "what, where and when" information that places them in a spatial and temporal context. Although animals cannot explicitly report their experience, their knowledge of "what, where and when" information suggests that they also use episodic memories (Eacott and Easton, [@B33]). For example, animals can effectively navigate in mazes that require them to remember "what-where" information that is coupled to time ("when"; Fouquet et al., [@B39]). Considerable work indicates that the HPC and mPFC are critically involved in encoding and retrieval of episodic-like memories (Wall and Messier, [@B118]; Preston and Eichenbaum, [@B91]). Within the medial temporal lobe, the perirhinal cortex (PRh) is thought to be crucial in signaling familiarity-based "what" information, whereas the parahippocampal cortex (PH) is involved in processing "where" events occur (Eichenbaum et al., [@B35]; Ranganath, [@B92]). Both the PRh and PH are connected with the EC, which in turn has strong reciprocal projections with the HPC and subiculum; this provides an anatomical substrate for the convergence of "what" and "where" information in the HPC (Ranganath, [@B92]). In support of this idea, studies have shown that hippocampal networks integrate non-spatial and spatial/contextual information (Davachi, [@B23]; Komorowski et al., [@B72]). More recently, there is evidence that neurons in hippocampal CA1 code both space and time, allowing animals to form conjoint spatial and temporal representations of their experiences (Eichenbaum, [@B34]). These findings suggest a fundamental role of the HPC for encoding episodic memories.
There is also considerable evidence that the mPFC contributes to episodic memory through cognitive or strategic control over other brain areas during memory retrieval. Although prefrontal damage does not yield severe impairments in familiarity-based recognition tests (Swick and Knight, [@B109]; Farovik et al., [@B37]), impairments are observed in tasks that require recollection-based memory, which rely on the retrieval of contextual and temporal information and resolution of interference (Shimamura et al., [@B99], [@B98]; Dellarocchetta and Milner, [@B26]; Simons et al., [@B102]). It has been suggested that the mPFC is important for the integration of old and new memories that share overlapping features, whereas the HPC is more important in forming new memories (Dolan and Fletcher, [@B30]). These findings suggest that there is functional dissociation between the mPFC and HPC during episodic memory encoding and retrieval in some cases. However, these two structures interact with each other in order to complete memory tasks that require higher levels of cognitive control.
In line with this idea, human EEG studies have shown that HPC-mPFC synchrony is associated with memory recall. For instance, encoding of successfully recalled words was associated with enhanced theta synchronization between frontal and posterior regions (including parietal and temporal cortex), indicating that the interaction between these two areas is involved in memory encoding (Weiss and Rappelsberger, [@B119]; Weiss et al., [@B120]; Summerfield and Mangels, [@B107]). Furthermore, depth recordings in epilepsy patients reveal theta-oscillation coherence between the medial temporal lobe and PFC during verbal recall tests, suggesting that synchronized neural activity is involved in the encoding and retrieval of verbal memory (Anderson et al., [@B3]). Recently, work in monkeys has revealed that different frequency bands within the HPC and mPFC have different functional roles in object-paired associative learning (Brincat and Miller, [@B14]). Collectively, these data indicated that the HPC and mPFC interactions are dynamic during episodic memory encoding and retrieval.
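For reference, the "coherence" reported in these EEG and depth-electrode studies is usually the standard magnitude-squared coherence between two signals $x$ and $y$ (a textbook definition, not a formula reproduced from the cited papers):

$$
C_{xy}(f) = \frac{\lvert S_{xy}(f) \rvert^{2}}{S_{xx}(f)\, S_{yy}(f)},
$$

where $S_{xy}(f)$ is the cross-spectral density and $S_{xx}(f)$ and $S_{yy}(f)$ are the auto-spectral densities. $C_{xy}(f)$ ranges from 0 (no consistent linear relationship at frequency $f$) to 1 (perfect coupling), so elevated theta-band $C_{xy}$ is the quantitative signature of the frontal-temporal synchronization described above.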
Contextual Memory Retrieval {#s5}
===========================
When humans and animals form new memories, contextual information associated with the experience is also routinely encoded without awareness (Tulving and Thomson, [@B114]). Contextual information plays an important role in memory retrieval since the content of *what* is remembered is often critically dependent on *where* that memory is retrieved (Hirsh, [@B53]; Maren and Holt, [@B80]; Bouton, [@B12]). This "contextual retrieval" process allows the meaning of a cue to be understood according to the context in which it is retrieved (Maren et al., [@B82]). For example, encountering a lion in the wild might be a life-threatening experience to someone, but seeing the same lion kept in its cage in the zoo might be an interesting (and non-threatening) experience. Therefore, the same cue in different contexts has totally different meanings. Contextual processing is highly adaptive because it resolves ambiguity during memory retrieval (Bouton, [@B12]; Maren et al., [@B82]; Garfinkel et al., [@B41]). Decades of research in both humans and animals have revealed that the HPC and mPFC are essential for contextual retrieval (Kennedy and Shapiro, [@B69]; Hasselmo and Eichenbaum, [@B49]; Diana et al., [@B29]; Maren et al., [@B82]). Humans and animals with disconnections in the HPC-mPFC network have deficits in retrieving memories that require either source memories or contextual information (Schacter et al., [@B96]; Shimamura et al., [@B99]; Simons et al., [@B102]). Thus, these brain regions are key components of a brain circuit involved in episodic memory, and connections between them are thought to support contextual retrieval.
Contextual retrieval is also critical for organizing defensive behaviors related to emotional memories (Bouton, [@B12]; Maren and Quirk, [@B81]; Maren et al., [@B82]). Detecting potential threats and organizing appropriate defensive behavior, while inhibiting fear when threats are absent, are highly adaptive functions linked to emotional regulation (Maren, [@B78]). Deficits in emotional regulation often result in pathological fear memories that can further develop into fear and anxiety disorders, such as PTSD (Rasmusson and Charney, [@B93]). Studies indicate that fear memories are rapidly acquired and broadly generalized across contexts. In contrast, extinction memories often yield transient fear reduction and are bound to the context in which extinction occurs (Bouton and Bolles, [@B10]; Bouton and Nelson, [@B11]). After extinction, fear often relapses when the feared stimulus is encountered outside the extinction context, a phenomenon called fear "renewal" (Bouton, [@B13]; Vervliet et al., [@B117]).
Recent work indicates that the HPC-mPFC network plays a critical role in regulating context-dependent fear memory retrieval after extinction (Maren and Quirk, [@B81]; Maren, [@B78]; Orsini and Maren, [@B87]; Maren et al., [@B82]; Jin and Maren, [@B66]). Disconnection of the VH from the mPFC impairs fear renewal after extinction (Hobin et al., [@B54]; Orsini et al., [@B88]). Inactivation of the VH also modulates the activity of both interneurons and pyramidal neurons in the PL, and influences the expression of fear behavior in extinguished rats (Sotres-Bayon et al., [@B104]). Moreover, VH neurons projecting to both the mPFC and amygdala are preferentially involved in fear renewal (Jin and Maren, [@B66]), suggesting that VH might modulate memory retrieval by coupling activity in the mPFC and amygdala. Ultimately, the hippocampus appears to gate reciprocal mPFC-amygdala circuits involved in the expression and inhibition of fear (Herry et al., [@B52]; Knapska and Maren, [@B70]; Knapska et al., [@B71]). It has also been shown that the vmPFC-HPC network is involved in the context-dependent recall of extinction memories in humans (Kalisch et al., [@B68]; Milad et al., [@B85]). These observations support the idea that the HPC-mPFC pathway is critically involved in the context-specificity of fear memories, whereby the transmission of contextual information from the HPC to the mPFC generates context-appropriate behavioral responses by interacting with the amygdala.
In animals, extinction learning induces a potentiation of VH-evoked potentials in the mPFC, while low frequency stimulation of the VH disrupts this potentiation and prevents extinction recall (Garcia et al., [@B40]). Chronic stress impairs the encoding of extinction by blocking synaptic plasticity in the HPC-mPFC pathway (Garcia et al., [@B40]; Maren and Holmes, [@B79]). In addition, it has been shown that brain-derived neurotrophic factor (BDNF) in the VH-IL pathway is involved in extinction learning (Peters et al., [@B89]; Rosas-Vidal et al., [@B94]). Finally, histone acetylation in the HPC-IL network influences extinction learning (Stafford et al., [@B106]). These findings indicate that interactions between the HPC and mPFC are critical for encoding extinction memories.
HPC-PFC Interaction and Psychiatric Disorders {#s6}
=============================================
Abnormal functional interactions between the HPC and mPFC have been reported in several psychiatric disorders. For example, patients with schizophrenia exhibit aberrant functional coupling between the HPC and mPFC during rest and during working memory performance (Meyer-Lindenberg et al., [@B84]; Zhou et al., [@B124]; Lett et al., [@B76]). This has been confirmed in animal models of schizophrenia, which also exhibit impaired working memory as well as decreased hippocampal-prefrontal synchrony (Sigurdsson et al., [@B101]). Abnormal interaction between the HPC and mPFC also causes deficits in emotional regulation associated with psychiatric disorders. Considerable evidence associates major depressive disorders with structural changes as well as functional abnormalities in hippocampal-prefrontal connectivity in both animals and humans (Bearden et al., [@B5]; Genzel et al., [@B42]). For example, the HPC and mPFC exhibit increased synchrony in anxiogenic environments (Adhikari et al., [@B2]; Schoenfeld et al., [@B97]; Abdallah et al., [@B1]). Moreover, traumatic experiences and pathological memories are linked to abnormal hippocampal-prefrontal interactions in PTSD patients, which in turn are associated with impaired contextual processing that mediates emotional regulation (Liberzon and Sripada, [@B77]). These seemingly distinct psychiatric disorders share similar symptoms: dysregulated interactions between the HPC and mPFC may be common to this shared symptomatology. Thus, the neural network between the HPC and mPFC is a promising target for future therapeutic interventions associated with these psychiatric disorders.
Conclusion {#s7}
==========
Animal and human studies strongly implicate the HPC-mPFC network in cognitive processes and emotional regulation associated with psychiatric disorders such as schizophrenia, anxiety disorders, and PTSD. Physical or functional disruptions in the HPC-mPFC circuit might be a form of pathophysiology that is common to many psychiatric disorders. Further study of the physiology and pathophysiology of hippocampal-prefrontal circuits will be essential for developing novel therapeutic interventions for these diseases.
Funding {#s8}
=======
Supported by a grant from the National Institutes of Health (R01MH065961) and a McKnight Memory and Cognitive Disorders Award to SM.
Conflict of Interest Statement {#s9}
==============================
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
EC
: entorhinal cortex
HPC
: hippocampus
IL
: infralimbic prefrontal cortex
mPFC
: medial prefrontal cortex
NAcc
: nucleus accumbens
PH
: parahippocampal cortex
PL
: prelimbic prefrontal cortex
PRh
: perirhinal cortex
PTSD
: post-traumatic stress disorder
RE
: nucleus reuniens
SUB
: subiculum
VH
: ventral hippocampus
vmPFC
: ventromedial prefrontal cortex
VTA
: ventral tegmental area.
[^1]: Edited by: Avishek Adhikari, Stanford University, USA
[^2]: Reviewed by: Carsten T. Wotjak, Max Planck Institute of Psychiatry, Germany; Froylán Gómez-lagunas, Universidad Nacional Autónoma de México, Mexico
| |
The American Museum of Natural History is the largest natural history museum in the world with a mission commensurately monumental in scope. The entire museum spans 4 city blocks and consists of some 25 interconnected buildings. Though today the phrase "natural history" is restricted to the study of animal life, the museum—founded in 1869 on the heels of discoveries by Darwin and other Victorians—uses it in its original sense: that is, the study of all natural objects, animal, vegetable and mineral.
Explorer, the AMNH’s interactive application for the iPhone and iPod Touch, serves as a navigational tool through the museum’s 570,000 square feet and provides in-depth tours through the halls and a scavenger hunt option. The museum has about 360 devices that can be borrowed during a visit.
The museum's scientists study the diversity of Earth's species, life in the ancient past and the universe. The museum contains more than 40 exhibition halls, displaying a portion of the institution's 32 million specimens and artifacts, many in lifelike dioramas. The exhibition program rotates as much of this material into public view as possible.
See the museum in a "Treasures of New York" special that shows the museum's amazing exhibits and goes behind the scenes with scientists who work there.
Possessing the most scientifically important collection of dinosaurs and fossil vertebrates in the world, the museum has six halls that tell the story of vertebrate evolution. The public's favorites include the Tyrannosaurus rex and Apatosaurus. Also on view in the Roosevelt rotunda is the tallest free-standing dinosaur exhibit in the world, which has been remounted to reflect current scientific theory about dinosaur behavior. This tableau depicts a massive mother Barosaurus trying to protect her calf from an attacking Allosaurus.
The Hall of Biodiversity is devoted to the most pressing environmental issues of our time: the critical need to preserve the variety and interdependence of Earth's living things. Other permanent exhibits, known for their striking dioramas portraying people and animals on indigenous ground, include the Margaret Mead Hall of Pacific Peoples, the Hall of Asian Peoples, the Hall of African Peoples, the Hall of South American Peoples, the Spitzer Hall of Human Origins, the Hall of North American Mammals, the Hall of African Mammals, the Hall of Ocean Life, the Hall of Reptiles and Amphibians and the Hall of North American Birds. For geology buffs there are also the separate halls of meteorites, minerals and gems.
The stunning Rose Center for Earth and Space is a $200 million glass box created by architect James Stewart Polshek. Enclosing a great white sphere, it opened to international acclaim in early 2000. The center features the Heilbrunn Cosmic Pathway, where each step equals about 75 million years of cosmic evolution; the Scales of the Universe, which illustrates the vast range in sizes in our universe; the Cullman Hall of the Universe, focusing on discoveries in modern astrophysics; and the new Hayden Planetarium—the world's most technologically advanced—which offers an absorbing three-dimensional tour of the universe and a multisensory re-creation of the Big Bang.
The first wing, the Romanesque Revival exposure running along West 77th Street, dates from 1872 and is based on a design by Calvert Vaux and J. Wrey Mould. In 1892 the two turrets, central granite stairway and arcade of arches were added based on a design by J. C. Cady & Co. The 77th Street entrance leads to the Grand Gallery, which holds the 63-foot-long Great Canoe, carved from the trunk of a single large cedar tree. It was acquired in 1883 and was created by craftsmen from more than one of the First Nations of British Columbia.
As one of the world's preeminent scientific research institutions, the museum sponsors more than 100 field expeditions each year, including ongoing research projects in Chile, China, Cuba, French Guiana, Madagascar, Mongolia and New Guinea. It maintains three permanent field stations: Great Gull Island, St. Catherine's Island and the Southwestern Research Station.
Children's Workshops: The museum offers a variety of participatory weekend workshops for children, primarily during the school year. Topics range from A Whale's Tail (for children age 4, accompanied by a parent) to Human Origins (for children ages 10 to 12). The schedule and offerings change seasonally.
Multicultural Programs: Some of these programs, which feature an international roster of performances, lectures, film programs and participatory workshops, are appropriate for older, more mature students who have a sustained attention span.
Discovery Room: Hands-on activities in a special room for children ages 5 to 9 and adults. Open the last weekend of every month October through July.
Multicultural Programs: Weekend and evening performances, talks, films, craft workshops and lecture demonstrations that impart information on diverse cultural traditions and issues are offered throughout the year. Past programs have included Indigenous Peoples Celebration, Women's History Month and Asian/Pacific American Heritage Month. School programs that enable students to experience cultures through the arts, in conjunction with their curriculum, are also offered.
Field Trips: The museum offers two types of field trips: guided visits with Teacher Volunteers through a specific exhibition hall or halls, and self-guided trips conducted by classroom teachers. School groups must register with the museum. For an extra fee, field trips can include "add-ons" such as an IMAX film or special exhibition.
The Moveable Museum: Developed through a partnership that includes the museum and six other New York City cultural and scientific institutions, this program visits the city's schools. The free, all-day program for elementary and junior high school students comprises an exhibition installed in a refitted recreational vehicle and supplementary interactive workshops. Teachers are required to attend a preliminary workshop. Reservations must be made by school principals or assistant principals, and are limited to two reservation dates per school. October and January are limited to one reservation date per school.
Junior High/High School Assemblies: Museum science educators are available to visit schools and present a program to assemblies or other large groups. Programs include videotape screening and talk.
Various workshops help teachers of children in Kindergarten through grade 12 create meaningful, self-guided class visits to the museum. Programs include a viewing of videos or slides, an examination of artifacts and specimens and a tour of the appropriate exhibition halls. Resource lists and curriculum materials are provided.
More than 50% of our volunteers work directly with the public. Every year, more than a thousand volunteers help the museum meet its mission and goals. | https://www.nyc-arts.org/organizations/54/american-museum-of-natural-history |