Monday, February 22, 2016

Quadrant Model of Reality Book 9 Science

Similar to my previous Quadrant Model books, this Quadrant Model book will be organized and designed around the four fields of inquiry.
Science
Physics
Chemistry
Biology
Psychology
Sociology
Religion
Buddhism
Christianity
Islam
Hinduism
Judaism
Other
Art
Painting
Music
Dance
Literature
Cinema
Philosophy











Science chapter





Physics chapter

QMRThe Systeme of the World: in Four Dialogues is the original 1661 English translation, by Thomas Salusbury, of Galileo Galilei's DIALOGO sopra i due MASSIMI SISTEMI DEL MONDO (1632). Galileo's publication is more generally recognized under the title of Stillman Drake's English translation, Dialogue Concerning the Two Chief World Systems, published in 1953. A revised and annotated edition of the Salusbury translation was also introduced in 1953 by Giorgio de Santillana under the title Dialogue on the Great World Systems.
The complete title of the Salusbury translation is "THE SYSTEME OF THE WORLD: IN FOUR DIALOGUES. Wherein the Two GRAND SYSTEMES Of PTOLOMY and COPERNICUS are largely discoursed of: And the REASONS, both Phylosophical and Physical, as well on the one side as the other, impartially and indefinitely propounded: By GALILEUS GALILEUS LINCEUS, A Gentleman of FLORENCE: Extraordinary Professor of the Mathematicks in the UNIVERSITY of PISA; and Chief Mathematician to the GRAND DUKE of TUSCANY."
QMRGalileo Galilei (Italian pronunciation: [ɡaliˈlɛːo ɡaliˈlɛi]; 15 February 1564[3] – 8 January 1642), was an Italian astronomer, physicist, engineer, philosopher, and mathematician who played a major role in the scientific revolution during the Renaissance. Galileo has been called the "father of observational astronomy",[4] the "father of modern physics",[5][6] and the "father of science".[7] His contributions to observational astronomy include the telescopic confirmation of the phases of Venus, the discovery of the four largest satellites of Jupiter (named the Galilean moons in his honour), and the observation and analysis of sunspots. Galileo also worked in applied science and technology, inventing an improved military compass and other instruments.

QMRFrancesco Ingoli
In addition to Bellarmine, Monsignor Francesco Ingoli initiated a debate with Galileo, sending him in January 1616 an essay disputing the Copernican system. Galileo later stated that he believed this essay to have been instrumental in the action against Copernicanism that followed in February.[31] According to Maurice Finocchiaro, Ingoli had probably been commissioned by the Inquisition to write an expert opinion on the controversy, and the essay provided the "chief direct basis" for the ban.[32] The essay focused on eighteen physical and mathematical arguments against heliocentrism. It borrowed primarily from the arguments of Tycho Brahe, and it notably mentioned Brahe's argument that heliocentrism required the stars to be much larger than the sun. Ingoli wrote that the great distance to the stars in the heliocentric theory "clearly proves ... the fixed stars to be of such size, as they may surpass or equal the size of the orbit circle of the Earth itself."[33] Ingoli included four theological arguments in the essay, but suggested to Galileo that he focus on the physical and mathematical arguments. Galileo did not write a response to Ingoli until 1624, in which, among other arguments and evidence, he listed the results of experiments such as dropping a rock from the mast of a moving ship.[34]


QMRPitted terrain has been observed in four craters on Vesta: Marcia, Cornelia, Numisia and Licinia.[81] The formation of the pitted terrain is proposed to be degassing of impact-heated volatile-bearing material. Along with the pitted terrain, curvilinear gullies are found in Marcia and Cornelia craters. The curvilinear gullies end in lobate deposits, which are sometimes covered by pitted terrain, and are proposed to form by the transient flow of liquid water after buried deposits of ice were melted by the heat of the impacts.[66] Hydrated materials have also been detected, many of which are associated with areas of dark material.[82] Consequently, dark material is thought to be largely composed of carbonaceous chondrite, which was deposited on the surface by impacts. Carbonaceous chondrites are comparatively rich in mineralogically bound OH.


QMRA tetraquark, in particle physics, is an exotic meson composed of four valence quarks. In principle, a tetraquark state may be allowed in quantum chromodynamics, the modern theory of strong interactions. Any established tetraquark state would be an example of an exotic hadron which lies outside the quark model classification.

History

Colour flux tubes produced by four static quark and antiquark charges, computed in lattice QCD.[1] Confinement in Quantum Chromo Dynamics leads to the production of flux tubes connecting colour charges. The flux tubes act as attractive QCD string-like potentials.
In 2003 a particle temporarily called X(3872), observed by the Belle experiment in Japan, was proposed to be a tetraquark candidate,[2] as originally theorized.[3] The name X is a temporary name, indicating that there are still some questions about its properties to be tested. The number following is the mass of the particle in MeV/c2.

In 2004, the DsJ(2632) state seen in Fermilab's SELEX was suggested as a possible tetraquark candidate.[citation needed]

In 2007, Belle announced the observation of the Z(4430) state, a cc̄dū tetraquark candidate. In 2014, the Large Hadron Collider experiment LHCb confirmed this resonance with a significance of over 13.9σ.[4][5] There are also indications that the Y(4660), also discovered by Belle in 2007, could be a tetraquark state.[6]

In 2009, Fermilab announced that they have discovered a particle temporarily called Y(4140), which may also be a tetraquark.[7]

In 2010, two physicists from DESY and a physicist from Quaid-i-Azam University re-analyzed former experimental data and announced that, in connection with the ϒ(5S) meson (a form of bottomonium), a well-defined tetraquark resonance exists.[8][9]

In June 2013, two independent groups reported on Zc(3900).[10][11]

QMRA pentaquark is a subatomic particle consisting of four quarks and one antiquark bound together.

As quarks have a baryon number of +1⁄3, and antiquarks of −1⁄3, the pentaquark would have a total baryon number of 1, and thus would be a baryon. Further, because it has five quarks instead of the usual three found in regular baryons (aka 'triquarks'), it would be classified as an exotic baryon. The name pentaquark was coined by Harry J. Lipkin in 1987;[1] however, the possibility of five-quark particles was identified as early as 1964, when Murray Gell-Mann first postulated the existence of quarks.[2] Although predicted for decades, pentaquarks proved surprisingly difficult to discover, and some physicists were beginning to suspect that an unknown law of nature prevented their production.[3]
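The baryon-number bookkeeping described here is simple fraction arithmetic. A minimal sketch (the function name is just illustrative):

```python
from fractions import Fraction

def baryon_number(n_quarks, n_antiquarks):
    """Total baryon number: each quark contributes +1/3, each antiquark -1/3."""
    return Fraction(n_quarks, 3) - Fraction(n_antiquarks, 3)

print(baryon_number(1, 1))  # 0: a meson (quark + antiquark)
print(baryon_number(3, 0))  # 1: a baryon (three quarks)
print(baryon_number(2, 2))  # 0: a tetraquark, hence an exotic meson
print(baryon_number(4, 1))  # 1: a pentaquark, hence an exotic baryon
```

The tetraquark and pentaquark land in the meson and baryon families respectively, exactly as the text states.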

The first claim of pentaquark discovery was recorded at LEPS in Japan in 2003, and several experiments in the mid-2000s also reported discoveries of other pentaquark states.[4] Others were not able to replicate the LEPS results, however, and the other pentaquark discoveries were not accepted because of poor data and statistical analysis.[5] On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons (Λ0b).[6]

Outside of particle physics laboratories pentaquarks also could be produced naturally by supernovae as part of the process of forming a neutron star.[7] The scientific study of pentaquarks might offer insights into how these stars form, as well as allowing more thorough study of particle interactions and the strong force.

QMRA quark is a type of elementary particle that has mass, electric charge, and colour charge, as well as an additional property called flavour, which describes what type of quark it is (up, down, strange, charm, top, or bottom). Due to an effect known as colour confinement, quarks are never seen on their own. Instead, they form composite particles known as hadrons so that their colour charges cancel out. Hadrons made of one quark and one antiquark are known as mesons, while those made of three quarks are known as baryons. These 'regular' hadrons are well documented and characterized; however, there is nothing in theory to prevent quarks from forming 'exotic' hadrons such as tetraquarks with two quarks and two antiquarks, or pentaquarks with four quarks and one antiquark. The fourth is always different.

QMRSuperfluid helium-4
A superfluid is a state of matter in which the matter behaves like a fluid with zero viscosity. The substance, which looks like a normal liquid, will flow without friction past any surface, which allows it to continue to circulate over obstructions and through pores in containers which hold it, subject only to its own inertia.

Known as a major facet in the study of quantum hydrodynamics and macroscopic quantum phenomena, the superfluidity effect was discovered by Pyotr Kapitsa[1] and John F. Allen, and Don Misener[2] in 1937. It has since been described through phenomenological and microscopic theories. The formation of the superfluid is known to be related to the formation of a Bose–Einstein condensate. This is made obvious by the fact that superfluidity occurs in liquid helium-4 at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson particle, by virtue of its zero spin. Helium-3, however, is a fermion particle, which can form bosons only by pairing with itself at much lower temperatures, in a process similar to the electron pairing in superconductivity.

In the 1950s, Hall and Vinen performed experiments establishing the existence of quantized vortex lines in superfluid helium.[3] In the 1960s, Rayfield and Reif established the existence of quantized vortex rings.[4] Packard has observed the intersection of vortex lines with the free surface of the fluid,[5] and Avenel and Varoquaux have studied the Josephson effect in superfluid helium-4.[6] In 2006 a group at the University of Maryland visualized quantized vortices by using small tracer particles of solid hydrogen.[7]

QMRPauli proposed in 1924 a new quantum degree of freedom (or quantum number) with two possible values, in order to resolve inconsistencies between observed molecular spectra and the developing theory of quantum mechanics. He formulated the Pauli exclusion principle, perhaps his most important work, which stated that no two electrons could exist in the same quantum state, identified by four quantum numbers including his new two-valued degree of freedom. The idea of spin originated with Ralph Kronig. George Uhlenbeck and Samuel Goudsmit one year later identified Pauli's new degree of freedom as electron spin.

QMRGeneralizing Bell's original inequality,[4] John Clauser, Michael Horne, Abner Shimony and R. A. Holt introduced the CHSH inequality,[18] which puts classical limits on the set of four correlations in Alice and Bob's experiment, without any assumption of perfect correlations (or anti-correlations) at equal settings

(1)  ρ(a, b) + ρ(a, b′) + ρ(a′, b) − ρ(a′, b′) ≤ 2,
where ρ denotes correlation in the quantum physicist's sense: the expected value of the product of the two binary (+/-1 valued) outcomes.

Making the special choice a′ = a + π, denoting b′ = c, and assuming perfect anti-correlation at equal settings and perfect correlation at opposite settings, so that ρ(a, a + π) = 1 and ρ(b, a + π) = −ρ(b, a), the CHSH inequality reduces to the original Bell inequality. Nowadays, (1) is also often simply called "the Bell inequality", but sometimes more completely "the Bell-CHSH inequality".
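A quick numerical check of inequality (1): for two spin measurements on a singlet state, quantum mechanics predicts the correlation ρ(a, b) = −cos(a − b) (an assumption supplied here, not stated in the excerpt above). With a suitable choice of settings, the left-hand side of (1) reaches 2√2, exceeding the classical bound of 2:

```python
import math

def rho(a, b):
    """Singlet-state quantum correlation for spin measurements
    along directions a and b (angles in radians)."""
    return -math.cos(a - b)

# One choice of settings for which quantum mechanics maximally violates (1).
a, a2 = 0.0, math.pi / 2
b, b2 = -3 * math.pi / 4, -5 * math.pi / 4

lhs = rho(a, b) + rho(a, b2) + rho(a2, b) - rho(a2, b2)
print(lhs)      # ≈ 2.828..., i.e. 2*sqrt(2), greater than the classical limit 2
```

Each of the four terms contributes +1/√2, which is why the total saturates the quantum (Tsirelson) bound of 2√2.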

QMRScheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation (a or b) can be set by the experimenter. Emerging signals from each channel are detected and coincidences of four types (++, −−, +− and −+) counted by the coincidence monitor.

QMRStatement of the inequality
The usual form of the CHSH inequality is:

−2 ≤ S ≤ 2

(1)
where
S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)

(2)
a and a′ are detector settings on side A, b and b′ on side B, the four combinations being tested in separate subexperiments. The terms E(a, b) etc. are the quantum correlations of the particle pairs, where the quantum correlation is defined to be the expectation value of the product of the "outcomes" of the experiment, i.e. the statistical average of A(a)·B(b), where A and B are the separate outcomes, using the coding +1 for the '+' channel and −1 for the '−' channel. Clauser et al.'s 1969[1] derivation was oriented towards the use of "two-channel" detectors, and indeed it is for these that it is generally used, but under their method the only possible outcomes were +1 and −1. In order to adapt it to real situations, which at the time meant the use of polarised light and single-channel polarisers, they had to interpret '−' as meaning "non-detection in the '+' channel", i.e. either '−' or nothing. They did not in the original article discuss how the two-channel inequality could be applied in real experiments with real imperfect detectors, though it was later proved (Bell, 1971)[3] that the inequality itself was equally valid. The occurrence of zero outcomes, though, means it is no longer so obvious how the values of E are to be estimated from the experimental data.

The mathematical formalism of quantum mechanics predicts a maximum value for S of 2√2,[4] which is greater than 2, and CHSH violations are therefore predicted by the theory of quantum mechanics.

A typical CHSH experiment

Schematic of a "two-channel" Bell test
The source S produces pairs of photons, sent in opposite directions. Each photon encounters a two-channel polariser whose orientation can be set by the experimenter. Emerging signals from each channel are detected and coincidences counted by the coincidence monitor CM.
In practice most actual experiments have used light rather than the electrons that Bell originally had in mind. The property of interest is, in the best known experiments (Aspect, 1981-2),[5][6][7] the polarisation direction, though other properties can be used. The diagram shows a typical optical experiment. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.

Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (2). The settings a, a′, b and b′ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the QM formula gives the greatest violation of the inequality.

For each selected value of a and b, the numbers of coincidences in each category {N++, N−−, N+−, N−+} are recorded. The experimental estimate for E(a, b) is then calculated as:

E = (N++ − N+− − N−+ + N−−) / (N++ + N+− + N−+ + N−−)

(3)
Once all the E’s have been estimated, an experimental estimate of S (2) can be found. If it is numerically greater than 2 it has infringed the CHSH inequality and the experiment is declared to have supported the QM (Quantum Mechanics) prediction and ruled out all local hidden variable theories.
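The estimate (3) and the statistic (2) are mechanical to compute from the four coincidence tables. A sketch (function names illustrative; the final lines use the standard quantum prediction for polarization-entangled photons, E(a, b) = cos(2(a − b)), which is an assumption supplied here rather than experimental data):

```python
import math

def correlation(counts):
    """Experimental estimate (3): E = (N++ - N+- - N-+ + N--) / N_total."""
    total = counts['++'] + counts['+-'] + counts['-+'] + counts['--']
    return (counts['++'] - counts['+-'] - counts['-+'] + counts['--']) / total

def chsh_statistic(e_ab, e_ab2, e_a2b, e_a2b2):
    """Test statistic (2): S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return e_ab - e_ab2 + e_a2b + e_a2b2

# Perfectly correlated counts give E = +1, perfectly anti-correlated give E = -1.
print(correlation({'++': 50, '--': 50, '+-': 0, '-+': 0}))   # 1.0
print(correlation({'++': 0, '--': 0, '+-': 50, '-+': 50}))   # -1.0

# At the Bell test angles (0, 45, 22.5, 67.5 degrees), the quantum prediction
# E(a, b) = cos(2(a - b)) yields S = 2*sqrt(2):
E = lambda a, b: math.cos(2 * math.radians(a - b))
S = chsh_statistic(E(0, 22.5), E(0, 67.5), E(45, 22.5), E(45, 67.5))
print(S > 2)   # True: the CHSH bound is violated
```

This makes concrete why the Bell test angles are chosen: they are the settings at which the quantum value of S is farthest outside the classical range [−2, 2].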

The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment.

Derivation from Clauser and Horne's 1974 inequality
In their 1974 paper,[8] Clauser and Horne show that the CHSH inequality can be derived from the CH74 one. As they tell us, in a two-channel experiment the CH74 single-channel test is still applicable and provides four sets of inequalities governing the probabilities p of coincidences.

Working from the inhomogeneous version of the inequality, we can write:

−1 ≤ p_jk(a, b) − p_jk(a, b′) + p_jk(a′, b) + p_jk(a′, b′) − p_jk(a′) − p_jk(b) ≤ 0
where j and k are each '+' or '−', indicating which detectors are being considered.

To obtain the CHSH test statistic S (2), all that is needed is to multiply the inequalities for which j is different from k by −1 and add these to the inequalities for which j and k are the same.

QMRGHZ experiments are a class of physics experiments that may be used to generate starkly contrasting predictions from local hidden variable theory and quantum mechanical theory, and permit immediate comparison with actual experimental results. A GHZ experiment is similar to a test of Bell's inequality, except using three or more entangled particles, rather than two. With specific settings of GHZ experiments, it is possible to demonstrate absolute contradictions between the predictions of local hidden variable theory and those of quantum mechanics, whereas tests of Bell's inequality only demonstrate contradictions of a statistical nature. The results of actual GHZ experiments agree with the predictions of quantum mechanics.

The GHZ experiments are named for Daniel M. Greenberger, Michael A. Horne, and Anton Zeilinger (GHZ), who first analyzed certain measurements involving four observers[1] and who subsequently (together with Abner Shimony, upon a suggestion by David Mermin) applied their arguments to certain measurements involving three observers.[2]

Further, as GHZ and collaborators demonstrate in detail, the following four distinct trials, with their various separate detector counts and with suitably identified settings, may be considered and be found experimentally:

trial s as shown above, characterized by the settings a2 , b2 , and c2 , and with detector counts such that
p(A↑) (B «) (C ◊)( s ) = (ns (A↑) - ns (A↓)) (ns (B «) - ns (B »)) (ns (C ◊) - ns (C )) = -1,
trial u with settings a2 , b1 , and c1 , and with detector counts such that
p(A↑) (B «) (C ◊)( u ) = (nu (A↑) - nu (A↓)) (nu (B «) - nu (B »)) (nu (C ◊) - nu (C )) = 1,
trial v with settings a1 , b2 , and c1 , and with detector counts such that
p(A↑) (B «) (C ◊)( v ) = (nv (A↑) - nv (A↓)) (nv (B «) - nv (B »)) (nv (C ◊) - nv (C )) = 1, and
trial w with settings a1 , b1 , and c2 , and with detector counts such that
p(A↑) (B «) (C ◊)( w ) = (nw (A↑) - nw (A↓)) (nw (B «) - nw (B »)) (nw (C ◊) - nw (C )) = 1.
The notion of local hidden variables is now introduced by considering the following question:

Can the individual detection outcomes and corresponding counts as obtained by any one observer, e.g. the numbers (nj (A↑) - nj (A↓)), be expressed as a function A( ax , λ ) (which necessarily assumes the values +1 or -1), i.e. as a function only of the setting of this observer in this trial, and of one other hidden parameter λ, but without an explicit dependence on settings or outcomes concerning the other observers (who are considered far away)?

Therefore: can the correlation numbers such as p(A↑) (B «) (C ◊)( ax , bx , cx ), be expressed as a product of such independent functions, A( ax , λ ), B( bx , λ ) and C( cx , λ ), for all trials and all settings, with a suitable hidden variable value λ?

Comparison with the product which defined p(A↑) (B «) (C ◊)( j ) explicitly above readily suggests the identifications

λ → j,
A( ax , j ) → (nj (A↑) - nj (A↓)),
B( bx , j ) → (nj (B «) - nj (B »)), and
C( cx , j ) → (nj (C ◊) - nj (C )),
where j denotes any one trial which is characterized by the specific settings ax , bx , and cx , of A, B, and of C, respectively.

However, GHZ and collaborators also require that the hidden variable argument to functions A(), B(), and C() may take the same value, λ, even in distinct trials, being characterized by distinct settings.

Consequently, substituting these functions into the consistent conditions on four distinct trials, u, v, w, and s shown above, they are able to obtain the following four equations concerning one and the same value λ:

A( a2 , λ ) B( b2 , λ ) C( c2 , λ ) = -1,
A( a2 , λ ) B( b1 , λ ) C( c1 , λ ) = 1,
A( a1 , λ ) B( b2 , λ ) C( c1 , λ ) = 1, and
A( a1 , λ ) B( b1 , λ ) C( c2 , λ ) = 1.
Taking the product of the last three equations, and noting that A( a1 , λ ) A( a1 , λ ) = 1, B( b1 , λ ) B( b1 , λ ) = 1, and C( c1 , λ ) C( c1 , λ ) = 1, yields

A( a2 , λ ) B( b2 , λ ) C( c2 , λ ) = 1
in contradiction to the first equation; 1 ≠ -1.

Given that the four trials under consideration can indeed be consistently considered and experimentally realized, the assumptions concerning hidden variables which lead to the indicated mathematical contradiction are therefore collectively unsuitable to represent all experimental results; namely the assumption of local hidden variables which occur equally in distinct trials.

The assumption of local hidden variables which vary between distinct trials, such as a trial index itself, does not generally allow one to derive a mathematical contradiction of the kind indicated by GHZ.

Because we have no control over the hidden variables, the contradiction derived above cannot be directly tested in an experiment.





Chemistry chapter
QMRThe Omnivore's Dilemma: A Natural History of Four Meals is a nonfiction book by Michael Pollan published in 2006. In the book, Pollan asks the seemingly straightforward question of what we should have for dinner. As omnivores, the most unselective eaters, humans (as well as other omnivores) are faced with a wide variety of food choices, resulting in a dilemma. Pollan suggests that, prior to modern food preservation and transportation technologies, this particular dilemma was resolved primarily through cultural influences. These technologies have recreated the dilemma, by making available foods that were previously seasonal or regional. The relationship between food and society, once moderated by culture, now finds itself confused. To learn more about those choices, Pollan follows each of the food chains that sustain us (industrial food, organic food, and food we forage ourselves) from the source to a final meal, and in the process writes a critique of the American way of eating.

QMRA full course dinner, in its simplest form, can consist of three or four courses, such as soup, salad, meat and dessert. In formal dining, a full course dinner can consist of many courses, and in some instances the courses are carefully planned to complement each other gastronomically.

QMRA full course dinner is a dinner consisting of multiple dishes, or courses. In its simplest form, it can consist of three or four courses, such as appetizers, fish course, entrée, main course and dessert.

QMRThe calendar year can be divided into 4 quarters, often abbreviated Q1, Q2, Q3 and Q4.

First quarter / Q1: from the beginning of January to the end of March (01/01 - 03/31)
Second quarter / Q2: from the beginning of April to the end of June (04/01 - 06/30)
Third quarter / Q3: from the beginning of July to the end of September (07/01 - 09/30)
Fourth quarter / Q4: from the beginning of October to the end of December (10/01 - 12/31)
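The month-to-quarter mapping above is simple integer arithmetic. A minimal sketch (the function name is just illustrative):

```python
def quarter(month):
    """Map a month number (1-12) to its calendar quarter (1-4)."""
    if not 1 <= month <= 12:
        raise ValueError("month must be between 1 and 12")
    return (month - 1) // 3 + 1

print([quarter(m) for m in range(1, 13)])
# [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]
```

January through March map to Q1, April through June to Q2, and so on, matching the date ranges listed above.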

QMRBiblical references to the pre-Jewish calendar include ten months identified by number rather than by name. In parts of the Torah portion Noach ("Noah") (specifically, Gen 7:11, 8:3-4, 8:13–14) it is implied that the months are thirty days long.[22] There is also an indication that there were twelve months in the annual cycle (1 Kings 4:7, 1 Chronicles 27:1–15). Prior to the Babylonian exile, the names of only four months are referred to in the Tanakh:

Aviv – first month – literally "spring" (Exodus 12:2, 13:4, 23:15, 34:18, Deut. 16:1);
Ziv – second month – literally "light" (1 Kings 6:1, 6:37);
Ethanim – seventh month – literally "strong" in plural, perhaps referring to strong rains (1 Kings 8:2); and
Bul – eighth month (1 Kings 6:38).

QMRFour gates[edit]
The annual calendar of a numbered Hebrew year, displayed as 12 or 13 months partitioned into weeks, can be determined by consulting the table of Four gates, whose inputs are the year's position in the 19-year cycle and its molad Tishrei. The resulting keviyah of the desired year in the body of the table is a triple consisting of two numbers and a letter (written left-to-right in English). The left number of each triple is the day of the week of 1 Tishrei, Rosh Hashanah (2 3 5 7); the letter indicates whether that year is deficient (D), regular (R), or complete (C), the number of days in Chesvan and Kislev; while the right number of each triple is the day of the week of 15 Nisan, the first day of Passover or Pesach (1 3 5 7), within the same Hebrew year (next Julian/Gregorian year). The keviyah in Hebrew letters are written right-to-left, so their days of the week are reversed, the right number for 1 Tishrei and the left for 15 Nisan. The year within the 19-year cycle alone determines whether that year has one or two Adars.[53][54][55][56]

This table numbers the days of the week and hours for the limits of molad Tishrei in the Hebrew manner for calendrical calculations, that is, both begin at 6 pm, thus 7d 18h 0p is noon Saturday. The years of a 19-year cycle are organized into four groups: common years after a leap year but before a common year (1 4 9 12 15); common years between two leap years (7 18); common years after a common year but before a leap year (2 5 10 13 16); and leap years (3 6 8 11 14 17 19), all between common years. The oldest surviving table of Four gates was written by Saadia Gaon (892–942). It is so named because it identifies the four allowable days of the week on which 1 Tishrei can occur.

Comparing the days of the week of molad Tishrei with those in the keviyah shows that during 39% of years 1 Tishrei is not postponed beyond the day of the week of its molad Tishrei, 47% are postponed one day, and 14% are postponed two days. This table also identifies the seven types of common years and seven types of leap years. Most are represented in any 19-year cycle, except one or two may be in neighboring cycles. The most likely type of year is 5R7 in 18.1% of years, whereas the least likely is 5C1 in 3.3% of years. The day of the week of 15 Nisan is later than that of 1 Tishrei by one, two or three days for common years and three, four or five days for leap years in deficient, regular or complete years, respectively.

QMRThe Enoch calendar is an ancient calendar described in the pseudepigraphal Book of Enoch. It divided the year into four seasons of exactly 13 weeks each. Each such season consisted of two 30-day months followed by one 31-day month, and the 31st day ended the season, so that Enoch's Year consisted of exactly 364 days.

QMRThe Enoch calendar was purportedly given to Enoch by the angel Uriel. Four named days, inserted as the 31st day of every third month, were named instead of numbered, which "placed them outside the numbering". The Book of Enoch gives the count of 2,912 days for 8 years, which divides out to exactly 364 days per year. This specifically excludes any periodic intercalations.
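The day counts given for the Enoch calendar are internally consistent and can be verified in a few lines:

```python
# A season is two 30-day months followed by one 31-day month.
season_days = sum([30, 30, 31])        # 91 days = exactly 13 weeks
year_days = 4 * season_days            # four seasons per year

print(season_days % 7 == 0)            # True: each season is a whole number of weeks
print(year_days)                       # 364
print(8 * year_days)                   # 2912, the Book of Enoch's 8-year count
```

Because 364 is divisible by 7, every date falls on the same weekday each year, which is why the calendar needs no weekly adjustment (though, as noted, it excludes any periodic intercalation).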

QMRFunctional classification generally relies on the New York Heart Association functional classification. The classes (I-IV) are:

Class I: no limitation is experienced in any activities; there are no symptoms from ordinary activities.
Class II: slight, mild limitation of activity; the patient is comfortable at rest or with mild exertion.
Class III: marked limitation of any activity; the patient is comfortable only at rest.
Class IV: any physical activity brings on discomfort and symptoms occur at rest.
This score documents severity of symptoms, and can be used to assess response to treatment. While its use is widespread, the NYHA score is not very reproducible and does not reliably predict the walking distance or exercise tolerance on formal testing.[40]

In its 2001 guidelines the American College of Cardiology/American Heart Association working group introduced four stages of heart failure:[41]

Stage A: Patients at high risk for developing HF in the future but no functional or structural heart disorder.
Stage B: a structural heart disorder but no symptoms at any stage.
Stage C: previous or current symptoms of heart failure in the context of an underlying structural heart problem, but managed with medical treatment.
Stage D: advanced disease requiring hospital-based support, a heart transplant or palliative care.

QMRAlthough the new calendar was much simpler than the pre-Julian calendar, the pontifices initially added a leap day every three years, instead of every four. There are accounts of this in Solinus,[44] Pliny,[45] Ammianus,[46] Suetonius,[47] and Censorinus.[48]

Macrobius[49] gives the following account of the introduction of the Julian calendar:

"Caesar’s regulation of the civil year to accord with his revised measurement was proclaimed publicly by edict, and the arrangement might have continued to stand had not the correction itself of the calendar led the priests to introduce a new error of their own; for they proceeded to insert the intercalary day, which represented the four quarter-days, at the beginning of each fourth year instead of at its end, although the intercalation ought to have been made at the end of each fourth year and before the beginning of the fifth.

QMRIn linear algebra, the main diagonal (sometimes principal diagonal, primary diagonal, leading diagonal, or major diagonal) of a matrix A is the collection of entries A_{i,j} where i = j. The following three matrices have 1's along their main diagonals and 0's elsewhere:

[1 0 0]   [1 0 0 0]   [1 0 0]
[0 1 0]   [0 1 0 0]   [0 1 0]
[0 0 1]   [0 0 1 0]   [0 0 1]
                      [0 0 0]
The antidiagonal (sometimes counterdiagonal, secondary diagonal, trailing diagonal or minor diagonal) of a dimension N square matrix, B, is the collection of entries B_{i,j} such that i + j = N + 1. That is, it runs from the top right corner to the bottom left corner:

[0 0 1]
[0 1 0]
[1 0 0]
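Both diagonals are easy to extract directly from the index conditions; a short sketch (0-based indexing, so the antidiagonal condition i + j = N + 1 becomes i + j = N − 1):

```python
def main_diagonal(A):
    """Entries A[i][j] with i == j (0-based indexing)."""
    return [A[i][i] for i in range(min(len(A), len(A[0])))]

def antidiagonal(B):
    """For an N x N matrix, entries B[i][j] with i + j == N - 1 (0-based),
    running from the top-right corner to the bottom-left."""
    n = len(B)
    return [B[i][n - 1 - i] for i in range(n)]

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(main_diagonal(M))  # [1, 5, 9]
print(antidiagonal(M))   # [3, 5, 7]
```

For the non-square third matrix above, the main diagonal simply stops at the shorter dimension, which is why `main_diagonal` takes the minimum of the row and column counts.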
Linear algebra is performed with quadrant matrices, as in the least squares method.



QMRFour thieves vinegar (also called Marseilles vinegar, Marseilles remedy, prophylactic vinegar, vinegar of the four thieves, camphorated acetic acid, vinaigre des quatre voleurs, and acetum quator furum[1][2]) is a concoction of vinegar (either from red wine, white wine, cider, or distilled white) infused with herbs, spices or garlic that was believed to protect users from the plague. The recipe for this vinegar has almost as many variations as its legend.

History
This specific vinegar composition is said to have been used during the medieval period, when the Black Death was raging, to prevent catching the dreaded disease.[3] Other similar types of herbal vinegars have been used as medicine since the time of Hippocrates.[4]

Early recipes for this vinegar called for a number of herbs to be added into a vinegar solution and left to steep for several days. The following vinegar recipe hung in the Museum of Paris in 1937, and is said to have been an original copy of the recipe posted on the walls of Marseilles during an episode of the plague:

Take three pints of strong white wine vinegar, add a handful of each of wormwood, meadowsweet, wild marjoram and sage, fifty cloves, two ounces of campanula roots, two ounces of angelica, rosemary and horehound and three large measures of camphor. Place the mixture in a container for fifteen days, strain and express then bottle. Use by rubbing it on the hands, ears and temples from time to time when approaching a plague victim.[3]

A plausible reason the users did not contract the plague is that the herbal concoction contained natural flea repellents, since the flea is the carrier for the plague bacillus, Yersinia pestis.[5] Wormwood has properties similar to cedar as an insect repellent, as do aromatics such as sage, cloves, camphor, rosemary, and campanula.[6] Meadowsweet, although known to contain salicylic acid, is mainly used to mask odors such as that of decomposing bodies.[7]

Another recipe called for dried rosemary, dried sage flowers, dried lavender flowers, fresh rue, camphor dissolved in spirit, sliced garlic, bruised cloves, and distilled wine vinegar.[8]

Modern day versions of four thieves vinegar include various herbs that typically include sage, lavender, thyme, and rosemary, along with garlic. Additional herbs sometimes include rue, mint, and wormwood. It has become traditional to use four herbs in the recipe—one for each thief, though earlier recipes often have a dozen herbs or more. It is still sold in Provence. In Italy a mixture called "seven thieves vinegar" is sold as a smelling salt, though its ingredients appear to be the same as in four thieves mixtures.[9]

Mythology[edit]
The usual story declares that a group of thieves during a European plague outbreak were robbing the dead or the sick. When they were caught, they offered to exchange their secret recipe, which had allowed them to commit the robberies without catching the disease, in exchange for leniency. Another version says that the thieves had already been caught before the outbreak and their sentence had been to bury dead plague victims; to survive this punishment, they created the vinegar. The city in which this happened is usually said to be Marseille or Toulouse, and the time period can be given as anywhere between the 14th and 18th century depending on the storyteller.[10]

One interesting twist says that "four thieves vinegar" is simply a corruption of the original "Forthave's vinegar," a popular concoction created by an enterprising fellow by the name of Richard Forthave.[10] Another source, the book Abrégé de toute la médecine pratique (published in 1741), seems to attribute its creation to George Bates, though Bates' own published recipe for antipestilential vinegar in his Pharmacopoeia Bateana does not specifically use the name 'thieves' or 'four thieves.'



QMRThat periodic table poster on your wall is about to be out of date, thanks to four new chemical elements that just received official recognition. The newcomers are some of the heaviest ever discovered, with atomic numbers of 113, 115, 117, and 118. They will be named by the researchers who identified them, the final step before the elements take up their rightful places in the seventh row of the periodic table.

Chemists classify elements by the number of protons per atom, which they call atomic number. Elements with more than 92 protons are unstable and not normally found in nature, but researchers have worked for decades to synthesize them and prove their brief existence. The International Union of Pure and Applied Chemistry (IUPAC) assesses the evidence for each new element, deciding when it’s strong enough to warrant official recognition and who should get credit for the discovery.

Researchers first claimed to have created the heaviest known element, No. 118, in 1999, but the data in that study turned out to be fabricated. The real discoveries of the four new elements came between 2002 and 2010, thanks to a series of experiments with particle accelerators. The particle accelerators fired beams of lighter nuclei at samples of heavy elements, smashing the atoms together until some of them fused. IUPAC credited a team of Russian and U.S. scientists with the discovery of elements 115, 117, and 118. Element 113 will become the first element to be named in Asia, with credit going to a group of Japanese researchers at the RIKEN Nishina Center for Accelerator-Based Science in Wako.

The experiments offered more than a checklist of new elements. By studying how the massive nuclei of the new elements decay, researchers gained insight into the forces that hold atoms together. According to their findings, elements heavier than any yet created might have conformations that are especially stable—suggesting that if we can ever make atoms that big, they might stick around for longer than a few microseconds.









Biology chapter

QMRStructurally the platelet can be divided into four zones, from peripheral to innermost:

Peripheral zone - is rich in glycoproteins required for platelet adhesion, activation, and aggregation. For example, GPIb/IX/X; GPVI; GPIIb/IIIa.
Sol-gel zone - is rich in microtubules and microfilaments, allowing the platelets to maintain their discoid shape.
Organelle zone - is rich in platelet granules. Alpha granules contain clotting mediators such as factor V, factor VIII, fibrinogen, fibronectin, platelet-derived growth factor, and chemotactic agents. Delta granules, or dense bodies, contain ADP, calcium, and serotonin, which are platelet-activating mediators.
Membranous zone - contains membranes derived from megakaryocytic smooth endoplasmic reticulum organized into a dense tubular system which is responsible for thromboxane A2 synthesis. This dense tubular system is connected to the surface platelet membrane to aid thromboxane A2 release.

QMRMost of the CAMs belong to four protein families: Ig (immunoglobulin) superfamily (IgSF CAMs), the integrins, the cadherins, and the selectins.



QMRThe cell cycle is divided into four distinct phases: G1, S, G2, and M. The G phases (the cell growth phases) make up approximately 95% of the cycle.[4] The proliferation of cells is instigated by progenitors; the cells then differentiate to become specialized, and specialized cells of the same type aggregate to form tissues, then organs, and ultimately systems.[1] The G phases, along with the S phase (DNA replication, damage, and repair), are considered the interphase portion of the cycle, while the M phase (mitosis and cytokinesis) is the cell division portion.[4] The cell cycle is regulated by a series of signalling factors and complexes such as CDKs, kinases, and p53, to name a few. When the cell has completed its growth process, if it is found to be damaged or altered it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it causes to the organism's survival.







Psychology chapter

QMRJungian Type Index
From Wikipedia, the free encyclopedia
The Jungian Type Index (JTI) is an alternative to the Myers–Briggs Type Indicator (MBTI). Introduced by Optimas in 2001,[1] the JTI was developed over a 10-year period in Norway by psychologists Thor Ødegård and Hallvard E. Ringstad. The JTI was designed to help capture individuals' preferred usage of the psychological functions identified by Carl Jung in his book Psychological Types, such as thinking vs feeling and sensing vs intuition.
The JTI's questions and methodology for identifying the preferred functions differ from the MBTI's. For example, it eliminates word pairs, which can be troublesome to translate from English into other languages.[citation needed] In many languages, the sentence context frames the meaning of a word, while in English the words themselves may carry more of the meaning.
Overview[edit]
Similar to the MBTI, the JTI identifies 4 categories from which the 16 types are formed: Extraversion/Introversion, Intuiting/Sensing, Thinking/Feeling, Perceiving/Judging. A personality type is reached through an examination or introspection about these categories. For example, an Intuiting, Thinking, Judging Extrovert would be classified as an ENTJ. However, further complexity lies below this surface-level classification. Each personality type has its associated Jungian cognitive functions, which aim to further explain the ways in which each type perceives and interacts with reality. Each type has all 4 of the cognitive functions (Thinking, Feeling, Intuiting, and Sensing) arranged in a different order and with different levels of introversion/extroversion. Of the two middle letters of any type, one will be the primary function with which they interact with the world, and one will be the auxiliary. For example, an ENTJ's primary function is (extraverted) Thinking, and their secondary function is (introverted) Intuiting.[2]
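The sixteen surface-level type codes follow combinatorially from the four dichotomies; a minimal Python sketch (letter order follows the usual MBTI/JTI convention):

```python
from itertools import product

# The four JTI/MBTI dichotomies, one letter kept per preference.
dichotomies = [("E", "I"), ("N", "S"), ("T", "F"), ("J", "P")]

# 2**4 = 16 type codes, e.g. 'ENTJ', 'INFP', ...
types = ["".join(letters) for letters in product(*dichotomies)]
```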
Commercialization[edit]
Though it is relatively unknown in the United States, the JTI has won some market share in Scandinavia, although the original MBTI tool is still the most commonly used. In Norway and Sweden,[3] the JTI is also gaining users, in conjunction with other tools that complement the JTI for career development and coaching.[citation needed] It also has distributors in the Netherlands, China, and Germany.[4]


QMRKagan proposed that emotion is a psychological phenomenon controlled by brain states and that specific emotions are products of context, the person's history, and biological make-up.[12] Kagan also explained emotion as occurring in four distinct phases: the brain state (created by an incentive), the detection of changes in bodily movement, the appraisal of a change in bodily feeling, and the observable changes in facial expression and muscle tension.[12] These emotions vary in magnitude and usually differ across ages and when expressed in different contexts.[12] Kagan questioned relying on individuals' verbal statements of their feelings.[12] He provided several reasons for this: he argued that the English language does not have enough words to describe all emotional states, that the words used to describe emotional states do not convey differences in quality or severity, and that translating emotion words from one language to another produces variations and inaccuracies.[12] In addition, Kagan argued that research in emotion studies should be free of ambiguous and coded terms, and this emphasis on specificity remains a recurring theme in his current research on emotion.[12]

QMRThe four ways forgetting can be measured are as follows:

Free recall[edit]
Free recall is a basic paradigm used to study human memory. In a free recall task, a subject is presented a list of to-be-remembered items, one at a time. For example, an experimenter might read a list of 20 words aloud, presenting a new word to the subject every 4 seconds. At the end of the presentation of the list, the subject is asked to recall the items (e.g., by writing down as many items from the list as possible). It is called a free recall task because the subject is free to recall the items in any order that he or she desires.

Prompted (cued) recall[edit]
Prompted recall is a slight variation of free recall that consists of presenting hints or prompts to increase the likelihood that the behavior will be produced. Usually these prompts are stimuli that were not there during the training period. Thus in order to measure the degree of forgetting, one can see how many prompts the subject misses or the number of prompts required to produce the behavior.[10]

Relearning method[edit]
This method measures forgetting by the amount of training required to reach the previous level of performance. German psychologist Hermann Ebbinghaus (1885) used this method on himself. He memorized lists of nonsensical syllables until he could repeat the list two times without error. After a certain interval, he relearned the list and saw how long it would take him to do this task. If it took fewer times, then there had been less forgetting. His experiment was one of the first to study forgetting.[10]

Recognition[edit]
For this type of measurement, a participant has to identify material that was previously learned. The participant is asked to remember a list of material. Later on they are shown the same list of material with additional information and they are asked to identify the material that was on the original list. The more they recognize, the less information is forgotten.[10]

QMRThe four main theories of forgetting apparent in the study of psychology are as follows:

Cue-dependent forgetting[edit]
Cue-dependent forgetting (also, context-dependent forgetting) or retrieval failure, is the failure to recall a memory due to missing stimuli or cues that were present at the time the memory was encoded. Encoding is the first step in creating and remembering a memory. How well something has been encoded in the memory can be measured by completing specific tests of retrieval. Examples of these tests would be explicit ones like cued recall or implicit tests like word fragment completion.[11] Cue-dependent forgetting is one of five cognitive psychology theories of forgetting. This theory states that a memory is sometimes temporarily forgotten purely because it cannot be retrieved, but the proper cue can bring it to mind. A good metaphor for this is searching for a book in a library without the reference number, title, author or even subject. The information still exists, but without these cues retrieval is unlikely. Furthermore, a good retrieval cue must be consistent with the original encoding of the information. If the sound of the word is emphasized during the encoding process, the cue that is used should also put emphasis on the phonetic quality of the word. The information is available, just not readily accessible without these cues. Depending on the age of a person, retrieval cues and skills may not work as well. This is common in older adults, but that is not always the case. When information is encoded into the memory and retrieved with a technique called spaced retrieval, this helps older adults retrieve the events stored in memory better.[2] There is also evidence from different studies that shows age-related changes in memory.[11] These studies have shown that episodic memory performance does in fact decline with age and that older adults show high rates of forgetting when two items are combined and not encoded.[2]

Trace decay[edit]
Trace decay theory explains memories that are stored in both the short-term and long-term memory systems, and assumes that the memories leave a trace in the brain.[12] According to this theory, short-term memory (STM) can only retain information for a limited amount of time, around 15 to 30 seconds, unless it is rehearsed. If it is not rehearsed, the information will start to gradually fade away and decay. Donald Hebb proposed that incoming information causes a series of neurons to create a neurological memory trace in the brain, resulting in morphological and/or chemical changes in the brain, and that this trace would fade with time. Repeated firing causes a structural change in the synapses. Rehearsal of repeated firing maintains the memory in STM until a structural change is made. Therefore, forgetting happens as a result of the automatic decay of the memory trace in the brain. This theory states that the events between learning and recall have no effect on recall; the important factor is the duration for which the information has been retained. Hence, as more time passes, more traces are subject to decay and as a result the information is forgotten. One major problem with this theory is that in real-life situations, the time between encoding a piece of information and recalling it is going to be filled with all different kinds of events that might happen to the individual. Therefore, it is difficult to conclude that forgetting is a result of time duration alone. It is also important to consider the effectiveness of this theory. Although it seems very plausible, it is all but impossible to test. It is difficult to create a situation where there is a blank period of time between presenting the material and recalling it later.[12]

Organic causes[edit]
Forgetting that occurs through physiological damage or dilapidation to the brain are referred to as organic causes of forgetting. These theories encompass the loss of information already retained in long term memory or the inability to encode new information again. Examples include Alzheimer's, Amnesia, Dementia, consolidation theory and the gradual slowing down of the central nervous system due to aging.

Interference theories[edit]
Interference theory refers to the idea that learning something new causes forgetting of older material on the basis of competition between the two. It essentially states that memory's information may become confused or combined with other information during encoding, resulting in the distortion or disruption of memories.[12] In nature, the interfering items are said to originate from an overstimulating environment. Interference theory exists in three branches: proactive, retroactive, and output. Retroactive and proactive inhibition stand in contrast to each other. Retroactive interference is when new information (memories) interferes with older information. On the other hand, proactive interference is when old information interferes with the retrieval of new information.[13] This is sometimes thought to occur especially when memories are similar. Output interference occurs when the initial act of recalling specific information interferes with the retrieval of the original information. This theory presents an astonishing contradiction: an extremely intelligent individual is expected to forget more hastily than one with a slow mentality, because an intelligent individual has stored up more memories, which will cause interference and impair the ability to recall specific information.[14] Based on current research, testing of interference has only been carried out by recalling from lists of words rather than using situations from daily life, so it is hard to generalize the findings of this theory.[12]

Decay theory[edit]
Decay theory states that when something new is learned, a neurochemical, physical "memory trace" is formed in the brain, and over time this trace tends to disintegrate unless it is occasionally used. Decay theory holds that the reason we eventually forget something or an event is that the memory of it fades with time. If we do not attempt to look back at an event, then the greater the interval between the event and our attempt to remember it, the more the memory will have faded. Time has the greatest impact on remembering an event.[15]



QMRJungian (Carl Jung, personality studies/behaviorist)[edit]
"The four archetypal personalities or the four aspects of the soul are grouped in two pairs: the ego and the shadow, the persona and the soul's image (animus or anima). The shadow is the container of all our despised emotions repressed by the ego. Lucky, the shadow, serves as the polar opposite of the egocentric Pozzo, prototype of prosperous mediocrity, who incessantly controls and persecutes his subordinate, thus symbolising the oppression of the unconscious shadow by the despotic ego. Lucky's monologue in Act I appears as a manifestation of a stream of repressed unconsciousness, as he is allowed to "think" for his master. Estragon's name has another connotation, besides that of the aromatic herb, tarragon: "estragon" is a cognate of oestrogen, the female hormone (Carter, 130). This prompts us to identify him with the anima, the feminine image of Vladimir's soul. It explains Estragon's propensity for poetry, his sensitivity and dreams, his irrational moods. Vladimir appears as the complementary masculine principle, or perhaps the rational persona of the contemplative type."[65]

QMRThe Bell states are a concept in quantum information science and represent the simplest examples of entanglement. They are named after John S. Bell because they are the subject of his famous Bell inequality. An EPR pair is a pair of qubits (or quantum bits) which are in a Bell state together, that is, entangled with each other. Unlike classical phenomena such as the nuclear, electromagnetic, and gravitational fields, entanglement is invariant under distance of separation[dubious – discuss] and is not subject to relativistic limitations such as the speed of light (though the no-communication theorem prevents this behaviour being used to transmit information faster than light, which would violate causality).

The Bell states are four specific maximally entangled quantum states of two qubits.

The degree to which a state in a quantum system consisting of two "particles" is entangled is measured by the Von Neumann entropy of either of the two reduced density operators of the state. The Von Neumann entropy of a pure state is zero; this holds also for the Bell states, which are specific pure states. But the Von Neumann entropy of the reduced density operator of a Bell state is maximal.[1]

The qubits are usually thought to be spatially separated. Nevertheless, they exhibit perfect correlation which cannot be explained without quantum mechanics.

In order to explain this, it is important to first look at the Bell state |\Phi ^{+}\rangle :

|\Phi ^{+}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}+|1\rangle _{A}\otimes |1\rangle _{B}).
This expression means the following: The qubit held by Alice (subscript "A") can be 0 as well as 1. If Alice measured her qubit in the standard basis the outcome would be perfectly random, either possibility having probability 1/2. But if Bob then measured his qubit, the outcome would be the same as the one Alice got. So, if Bob measured, he would also get a random outcome on first sight, but if Alice and Bob communicated they would find out that, although the outcomes seemed random, they are correlated.

So far, this is nothing special: maybe the two particles "agreed" in advance, when the pair was created (before the qubits were separated), which outcome they would show in case of a measurement.

Hence, Einstein, Podolsky, and Rosen argued in 1935, in their famous "EPR paper", that there is something missing in the description of the qubit pair given above, namely this "agreement", called more formally a hidden variable.

But quantum mechanics allows qubits to be in quantum superposition—i.e. in 0 and 1 simultaneously—that is, a linear combination of the two classical states—for example, the states |+\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle +|1\rangle ) or |-\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle -|1\rangle ). If Alice and Bob chose to measure in this basis, i.e. check whether their qubit were |+\rangle or |-\rangle , they would find the same correlations as above. That is because the Bell state can be formally rewritten as follows:

|\Phi ^{+}\rangle ={\frac {1}{\sqrt {2}}}(|+\rangle _{A}\otimes |+\rangle _{B}+|-\rangle _{A}\otimes |-\rangle _{B}).
Note that this is still the same state.

In his famous paper of 1964, John S. Bell showed by simple probability theory arguments that these correlations (the one for the 0,1 basis and the one for the +,- basis) cannot both be made perfect by the use of any "pre-agreement" stored in some hidden variables—but that quantum mechanics predicts perfect correlations. In a more formal and refined formulation known as the Bell-CHSH inequality, it is shown that a certain correlation measure cannot exceed the value 2 if one assumes that physics respects the constraints of local "hidden variable" theory (a sort of common-sense formulation of how information is conveyed), but certain systems permitted in quantum mechanics can attain values as high as 2{\sqrt {2}}.
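As a hedged numerical sketch (operator and variable names are illustrative), the quantum value 2\sqrt{2} can be reproduced for the Bell state |\Phi^{+}\rangle by measuring each qubit with the observable cos(θ)Z + sin(θ)X at the standard CHSH angles:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def spin(theta):
    """Measurement observable at angle theta in the Z-X plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <Phi+| A(a) (x) B(b) |Phi+>; equals cos(a - b)."""
    M = np.kron(spin(a), spin(b))
    return float(np.real(phi_plus.conj() @ M @ phi_plus))

# Standard optimal CHSH settings for |Phi+>
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
# S reaches 2*sqrt(2), exceeding the local hidden variable bound of 2
```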

Four specific two-qubit states with the maximal value of 2{\sqrt {2}} are designated as "Bell states". They are known as the four maximally entangled two-qubit Bell states, and they form a convenient basis of the two-qubit Hilbert space:

|\Phi ^{+}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}+|1\rangle _{A}\otimes |1\rangle _{B})
|\Phi ^{-}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}-|1\rangle _{A}\otimes |1\rangle _{B})
|\Psi ^{+}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |1\rangle _{B}+|1\rangle _{A}\otimes |0\rangle _{B})
|\Psi ^{-}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |1\rangle _{B}-|1\rangle _{A}\otimes |0\rangle _{B}).
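A minimal NumPy sketch (names are illustrative) constructing the four Bell states and verifying the earlier entropy claim, i.e. that the reduced density operator of each Bell state has maximal von Neumann entropy (one bit):

```python
import numpy as np

s = 1 / np.sqrt(2)
bell = {
    "Phi+": s * np.array([1, 0, 0, 1], dtype=complex),   # (|00> + |11>)/sqrt(2)
    "Phi-": s * np.array([1, 0, 0, -1], dtype=complex),  # (|00> - |11>)/sqrt(2)
    "Psi+": s * np.array([0, 1, 1, 0], dtype=complex),   # (|01> + |10>)/sqrt(2)
    "Psi-": s * np.array([0, 1, -1, 0], dtype=complex),  # (|01> - |10>)/sqrt(2)
}

def reduced_entropy(state):
    """Von Neumann entropy (in bits) of qubit A's reduced density matrix."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    rho_A = np.trace(rho, axis1=1, axis2=3)   # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Each Bell state gives the maximal value 1.0; a product state would give 0.0
entropies = {name: reduced_entropy(v) for name, v in bell.items()}
```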

Bell state measurement[edit]
The Bell measurement is an important concept in quantum information science: It is a joint quantum-mechanical measurement of two qubits that determines which of the four Bell states the two qubits are in.

If the qubits were not in a Bell state before, they get projected into a Bell state (according to the projection rule of quantum measurements), and as Bell states are entangled, a Bell measurement is an entangling operation.

Bell-state measurement is the crucial step in quantum teleportation. The result of a Bell-state measurement is used by one's co-conspirator to reconstruct the original state of a teleported particle from half of an entangled pair (the "quantum channel") that was previously shared between the two ends.

Experiments which utilize so-called "linear evolution, local measurement" techniques cannot realize a complete Bell state measurement. Linear evolution means that the detection apparatus acts on each particle independently from the state or evolution of the other, and local measurement means that each particle is localized at a particular detector registering a "click" to indicate that a particle has been detected. Such devices can be constructed, for example, from mirrors, beam splitters, and wave plates, and are attractive from an experimental perspective because they are easy to use and have a high measurement cross-section.

For entanglement in a single qubit variable, only three distinct classes out of four Bell states are distinguishable using such linear optical techniques. This means two Bell states cannot be distinguished from each other, limiting the efficiency of quantum communication protocols such as teleportation. If a Bell state is measured from this ambiguous class, the teleportation event fails.

Entangling particles in multiple qubit variables, such as (for photonic systems) polarization and a two-element subset of orbital angular momentum states, allows the experimenter to trace over one variable and achieve a complete Bell state measurement in the other.[2] Leveraging so-called hyper-entangled systems thus has an advantage for teleportation. It also has advantages for other protocols such as superdense coding, in which hyper-entanglement increases the channel capacity.

In general, for hyper-entanglement in n variables, one can distinguish between at most 2^{n+1}-1 classes out of 4^{n} Bell states using linear optical techniques.[3]
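Outside the linear-optics setting, a complete Bell measurement is commonly modelled as a CNOT followed by a Hadamard on the first qubit, which maps the four Bell states onto the four computational basis states; a NumPy illustration (names are illustrative):

```python
import numpy as np

s = 1 / np.sqrt(2)
bell = {
    "Phi+": s * np.array([1.0, 0, 0, 1]),
    "Phi-": s * np.array([1.0, 0, 0, -1]),
    "Psi+": s * np.array([0, 1.0, 1, 0]),
    "Psi-": s * np.array([0, 1.0, -1, 0]),
}

H1 = np.kron(s * np.array([[1, 1], [1, -1]]), np.eye(2))  # Hadamard on qubit A
CNOT = np.array([[1, 0, 0, 0],      # control = qubit A, target = qubit B
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Each Bell state lands on a distinct computational basis state, so a
# readout in the {|00>, |01>, |10>, |11>} basis identifies which it was.
outcome = {name: int(np.argmax(np.abs(H1 @ CNOT @ v)))
           for name, v in bell.items()}
```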

Bell state correlations[edit]
Independent measurements made on two qubits that are entangled in a Bell state correlate perfectly if each qubit is measured in the relevant basis. For the |\Phi ^{+}\rangle state, this means selecting the same basis for both qubits. If an experimenter chose to measure both qubits in a |\Phi ^{-}\rangle Bell state using the same basis, the qubits would appear positively correlated when measuring in the \{|0\rangle ,|1\rangle \} basis, anti-correlated in the \{|+\rangle ,|-\rangle \} basis[a] and partially (probabilistically) correlated in other bases.

The |\Psi ^{+}\rangle correlations can be understood by measuring both qubits in the same basis and observing perfectly anti-correlated results. More generally, |\Psi ^{+}\rangle can be understood by measuring the first qubit in basis b_{1}, the second qubit in basis b_{2}=X.b_{1} and observing perfectly positively correlated results.

Relationship between the correlated bases of the two qubits for each Bell state (the basis b_{2} of the second qubit, given that the first qubit is measured in basis b_{1}):

Bell state            Basis b_{2}
|\Phi ^{+}\rangle     b_{1}
|\Phi ^{-}\rangle     Z.b_{1}
|\Psi ^{+}\rangle     X.b_{1}
|\Psi ^{-}\rangle     X.Z.b_{1}
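The first two rows above can be checked directly; a sketch for |\Phi^{-}\rangle, which should give correlated outcomes in the \{|0\rangle ,|1\rangle \} (Z) basis and anti-correlated outcomes in the \{|+\rangle ,|-\rangle \} (X) basis:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)  # (|00> - |11>)/sqrt(2)

# Expectation of the joint observable equals the outcome correlation.
zz = phi_minus @ np.kron(Z, Z) @ phi_minus  # +1: Z outcomes always agree
xx = phi_minus @ np.kron(X, X) @ phi_minus  # -1: X outcomes always disagree
```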

QMRIf Q represents the quantization map that acts on functions f in classical phase space, then the following properties are usually considered desirable:[6]

Q_x \psi = x \psi and Q_p \psi = -i\hbar \partial_x \psi ~~ (elementary position/momentum operators)
f \longmapsto Q_f ~~ is a linear map
[Q_f,Q_g]=i\hbar Q_{\{f,g\}}~~ (Poisson bracket)
Q_{g \circ f}=g(Q_f)~~ (von Neumann rule).

However, not only are these four properties mutually inconsistent, any three of them are also inconsistent![7] As it turns out, the only pairs of these properties that lead to self-consistent, nontrivial solutions are 2 & 3, and possibly 1 & 3 or 1 & 4. Accepting properties 1 & 2, along with a weaker condition that 3 be true only asymptotically in the limit ħ→0 (see Moyal bracket), leads to deformation quantization, and some extraneous information must be provided, as in the standard theories utilized in most of physics. Accepting properties 1 & 2 & 3 but restricting the space of quantizable observables to exclude terms such as the cubic ones in the above example amounts to geometric quantization.
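Property 3 can be illustrated for the pair f = x, g = p (whose Poisson bracket is {x, p} = 1, so the commutator should equal iħ times the identity) by letting the operators of property 1 act on a polynomial test function; a sketch with illustrative names:

```python
import numpy as np

hbar = 1.0

# Represent a test wavefunction by polynomial coefficients c[k] for x**k.
def Qx(c):
    """Position operator: multiply by x (shift coefficients up one degree)."""
    return np.concatenate([[0.0], c])

def Qp(c):
    """Momentum operator: -i*hbar * d/dx acting on the polynomial."""
    k = np.arange(1, len(c))
    return -1j * hbar * (c[1:] * k)

def commutator(c):
    """[Q_x, Q_p] applied to the polynomial c, padded to a common length."""
    a, b = Qx(Qp(c)), Qp(Qx(c))
    n = max(len(a), len(b))
    return np.pad(a, (0, n - len(a))) - np.pad(b, (0, n - len(b)))

psi = np.array([2.0, -1.0, 3.0])   # psi(x) = 2 - x + 3x^2
lhs = commutator(psi)
rhs = 1j * hbar * psi              # canonical relation: [Q_x, Q_p] = i*hbar
```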

GHZ experiment: Deriving an Inequality[edit]
Since equations (1) through (4) above cannot be satisfied simultaneously when the hidden variable, λ, takes the same value in each equation, GHSZ proceed by allowing λ to take different values in each equation. They define

Λ1= the set of all λ's such that equation (1) holds,
Λ2= the set of all λ's such that equation (2) holds,
Λ3= the set of all λ's such that equation (3) holds,
Λ4= the set of all λ's such that equation (4) holds.
Also, Λic is the complement of Λi.

Now, equation (1) can only be true if at least one of the other three is false. Therefore

Λ1 ⊆ Λ2c ∪ Λ3c ∪ Λ4c.

In terms of probability, p(Λ1) ≤ p(Λ2c ∪ Λ3c ∪ Λ4c).

By the rules of probability theory, it follows that

p(Λ1) ≤ p(Λ2c) + p(Λ3c) + p(Λ4c).

This inequality allows for an experimental test.
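The probability step is a plain union bound, which can be sanity-checked with finite sets standing in for the space of hidden variables λ (the sets below are randomly generated for illustration):

```python
import random

random.seed(0)
universe = set(range(1000))

# Random events with the GHSZ constraint: no lambda satisfies all four
# equations at once, so L1 lies inside the union of the complements.
L2 = {x for x in universe if random.random() < 0.9}
L3 = {x for x in universe if random.random() < 0.9}
L4 = {x for x in universe if random.random() < 0.9}
L1 = universe - (L2 & L3 & L4)

def p(s):
    return len(s) / len(universe)

def comp(s):
    return universe - s

lhs = p(L1)
rhs = p(comp(L2)) + p(comp(L3)) + p(comp(L4))
# lhs <= rhs: the inequality p(L1) <= p(L2c) + p(L3c) + p(L4c)
```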

Testing the inequality[edit]
To test the inequality just derived, GHSZ need to make one more assumption, the "fair sampling" assumption. Because of inefficiencies in real detectors, in some trials of the experiment only one or two particles of the triple will be detected. Fair sampling assumes that these inefficiencies are unrelated to the hidden variables; in other words, the number of triples actually detected in any run of the experiment is proportional to the number that would have been detected if the apparatus had no inefficiencies - with the same constant of proportionality for all possible settings of the apparatus. With this assumption, p(Λ1) can be determined by choosing the apparatus settings a2 , b2 , and c2 , counting the number of triples for which the outcome is -1, and dividing by the total number of triples observed at that setting. The other probabilities can be determined in a similar manner, allowing a direct experimental test of the inequality.

GHSZ also show that the fair sampling assumption can be dispensed with if the detector efficiencies are at least 90.8%.

QMRBell tests with no "non-detections"[edit]
Consider, for example, David Bohm's thought-experiment (Bohm, 1951), in which a molecule breaks into two atoms with opposite spins. Assume this spin can be represented by a real vector, pointing in any direction. It will be the "hidden variable" in our model. Taking it to be a unit vector, all possible values of the hidden variable are represented by all points on the surface of a unit sphere.

Suppose the spin is to be measured in the direction a. Then the natural assumption, given that all atoms are detected, is that all atoms the projection of whose spin in the direction a is positive will be detected as spin up (coded as +1) while all whose projection is negative will be detected as spin down (coded as −1). The surface of the sphere will be divided into two regions, one for +1, one for −1, separated by a great circle in the plane perpendicular to a. Assuming for convenience that a is horizontal, corresponding to the angle a with respect to some suitable reference direction, the dividing circle will be in a vertical plane. So far we have modelled side A of our experiment.

Now to model side B. Assume that b too is horizontal, corresponding to the angle b. There will be a second great circle drawn on the same sphere: to one side of it we have +1, to the other −1, for particle B. This circle will again be in a vertical plane.

The two circles divide the surface of the sphere into four regions. The type of "coincidence" (++, −−, +− or −+) observed for any given pair of particles is determined by the region within which their hidden variable falls. Assuming the source to be "rotationally invariant" (to produce all possible states λ with equal probability), the probability of a given type of coincidence will clearly be proportional to the corresponding area, and these areas will vary linearly with the angle between a and b. (To see this, think of an orange and its segments. The area of peel corresponding to a number n of segments is roughly proportional to n. More accurately, it is proportional to the angle subtended at the centre.)

The formula (1) above has not been used explicitly — it is hardly relevant when, as here, the situation is fully deterministic. The problem could be reformulated in terms of the functions in the formula, with ρ constant and the probability functions step functions. The principle behind (1) has in fact been used, but purely intuitively.

Fig. 1: The realist prediction (solid lines) for quantum correlation when there are no non-detections. The quantum-mechanical prediction is the dotted curve.
Thus the local hidden variable prediction for the probability of coincidence is proportional to the angle (b − a) between the detector settings. The quantum correlation is defined to be the expectation value of the sum of the individual outcomes, and this is

(2) E = P++ + P−− − P+− − P−+
where P++ is the probability of a '+' outcome on both sides, P+− that of a '+' on side A and a '−' on side B, etc.

Since each individual term varies linearly with the difference (b − a), so does their sum.

The result is shown in fig. 1.
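
This deterministic model is easy to simulate. The sketch below is my own code, not from the source: it draws hidden variables uniformly on the sphere, assumes particle B carries the opposite spin, and compares the resulting linear correlation with the quantum-mechanical prediction −cos(b − a) shown dotted in fig. 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hidden variable: unit vectors uniformly distributed on the sphere
# (a rotationally invariant source).
lam = rng.normal(size=(n, 3))
lam /= np.linalg.norm(lam, axis=1, keepdims=True)

def correlation(theta):
    """E(a, b) for detector a along x and detector b at angle theta, both horizontal."""
    a = np.array([1.0, 0.0, 0.0])
    b = np.array([np.cos(theta), np.sin(theta), 0.0])
    out_a = np.sign(lam @ a)      # particle A: sign of the projection on a
    out_b = np.sign(-(lam @ b))   # particle B carries the opposite spin
    return np.mean(out_a * out_b)

for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):
    lhv = correlation(theta)   # the model: linear, 2*theta/pi - 1
    qm = -np.cos(theta)        # quantum prediction (dotted curve in fig. 1)
    print(f"theta={theta:.2f}  model={lhv:+.3f}  quantum={qm:+.3f}")
```

At θ = 0 and θ = π the two predictions coincide (−1 and +1 respectively); in between, the hidden-variable model varies linearly with θ while quantum mechanics follows the cosine, reproducing the solid and dotted curves of fig. 1.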



QMRAlong with the definition of team adaptive performance, researchers came up with a four-stage model to describe the process of team adaptive performance. The four core constructs characterizing this adaptive cycle include: (1) situation assessment; (2) plan formulation; (3) plan execution, via adaptive interaction processes; and (4) team learning, as well as emergent cognitive states (i.e., shared mental models, team situational awareness, psychological safety), which serve as both proximal outcomes and inputs to this cycle.[15] Team adaptive performance differs from individual adaptive performance in several respects. Team adaptive performance reflects the extent to which the team meets its objectives during a transfer performance episode, whereas individual adaptive performance reflects the extent to which each member effectively executes his or her role in the team during the transfer episode.[16] Team adaptive performance also has different antecedents compared with individual adaptive performance.



QMRImmunity to Change
Kegan's next book, How the Way We Talk Can Change the Way We Work (2001), co-authored with Lisa Laskow Lahey, jettisons the theoretical framework of his earlier books The Evolving Self and In Over Our Heads and instead presents a practical method, called the immunity map, intended to help readers overcome an immunity to change.[23] An immunity to change is our "processes of dynamic equilibrium, which, like an immune system, powerfully and mysteriously tend to keep things pretty much as they are."[24]

The immunity map continues the general dialectical pattern of Kegan's earlier thinking but without any explicit use of the concept of "evolutionary truces" or "orders of consciousness." The map primarily consists of a four-column worksheet that is gradually filled in by individuals or groups of people during a structured process of self-reflective inquiry that involves asking questions such as: What are the changes that we think we need to make? What are we doing or not doing to prevent ourselves (immunize ourselves) from making those changes? What anxieties and big assumptions does that doing or not doing imply? How can we test those big assumptions so as to disturb our immunity to change and make possible new learning and change? The following table presents an example of an immunity map.[25]

1. Commitment: I am committed to the value or the importance of... supporting my staff to exercise more individual initiative.
2. What I'm doing or not doing that prevents my commitment from being fully realized: When they ask me to get involved or take over, I don't refuse. I don't delegate as much as I could. I too often am willing to be drawn into things when I should refer to the subordinate who is in charge of that area.
3. Competing commitment: I may also be committed to... not having my staff feel like I've abandoned them; not having my staff unhappy with me; not having our work product be less than I think I could do on my own, even if it means disempowering or failing to empower my staff.
4. Big assumption: I assume that if... the quality of our work, when I transfer authority, does fall below what I could produce by maintaining more control, then I will be seen as a failure.

QMRIn psychology, the four stages of competence, or the "conscious competence" learning model, relates to the psychological states involved in the process of progressing from incompetence to competence in a skill.

History
Initially described as “Four Stages for Learning Any New Skill”, the theory was developed at Gordon Training International by its employee Noel Burch in the 1970s.[1] It has since been frequently attributed to Abraham Maslow, although the model does not appear in his major works.[2]

The Four Stages of Learning provides a model for learning. It suggests that individuals are initially unaware of how little they know, or unconscious of their incompetence. As they recognize their incompetence, they consciously acquire a skill, then consciously use it. Eventually, the skill can be utilized without it being consciously thought through: the individual is said to have then acquired unconscious competence.[3]

Several elements, including helping someone 'know what they don't know' or recognize a blind spot, can be compared to some elements of a Johari window, although Johari deals with self-awareness, while the four stages of competence deals with learning stages.

The four stages of competence
Unconscious incompetence
The individual does not understand or know how to do something and does not necessarily recognize the deficit. They may deny the usefulness of the skill. The individual must recognize their own incompetence, and the value of the new skill, before moving on to the next stage.[2] The length of time an individual spends in this stage depends on the strength of the stimulus to learn.[3]
Conscious incompetence
Though the individual does not understand or know how to do something, he or she does recognize the deficit, as well as the value of a new skill in addressing the deficit. The making of mistakes can be integral to the learning process at this stage.[4]
Conscious competence
The individual understands or knows how to do something. However, demonstrating the skill or knowledge requires concentration. It may be broken down into steps, and there is heavy conscious involvement in executing the new skill.[3]
Unconscious competence
The individual has had so much practice with a skill that it has become "second nature" and can be performed easily. As a result, the skill can be performed while executing another task. The individual may be able to teach it to others, depending upon how and when it was learned.

QMRThe original five-stage model
In the novice stage, a person follows rules as given, without context, with no sense of responsibility beyond following the rules exactly. Competence develops when the individual develops organizing principles to quickly access the particular rules that are relevant to the specific task at hand; hence, competence is characterized by active decision making in choosing a course of action. Proficiency is shown by individuals who develop intuition to guide their decisions and devise their own rules to formulate plans. The progression is thus from rigid adherence to rules to an intuitive mode of reasoning based on tacit knowledge.

Michael Eraut summarized the five stages of increasing skill as follows:[2]

1. Novice
"rigid adherence to taught rules or plans"
no exercise of "discretionary judgment"
2. Advanced beginner
limited "situational perception"
all aspects of work treated separately with equal importance
3. Competent
"coping with crowdedness" (multiple activities, accumulation of information)
some perception of actions in relation to goals
deliberate planning
formulates routines
4. Proficient
holistic view of situation
prioritizes importance of aspects
"perceives deviations from the normal pattern"
employs maxims for guidance, with meanings that adapt to the situation at hand
5. Expert
transcends reliance on rules, guidelines, and maxims
"intuitive grasp of situations based on deep, tacit understanding"
has "vision of what is possible"
uses "analytical approaches" in new situations or in case of problems
The original Dreyfus model, by contrast, is based on four binary qualities:

Recollection (non-situational or situational)
Recognition (decomposed or holistic)
Decision (analytical or intuitive)
Awareness (monitoring or absorbed)
This leads to five roles:

1. Novice
non-situational recollection, decomposed recognition, analytical decision, monitoring awareness
2. Competence
situational recollection, decomposed recognition, analytical decision, monitoring awareness
3. Proficiency
situational recollection, holistic recognition, analytical decision, monitoring awareness
4. Expertise
situational recollection, holistic recognition, intuitive decision, monitoring awareness
5. Mastery
situational recollection, holistic recognition, intuitive decision, absorbed awareness
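
The five roles above form a strict progression through the four binary qualities. A small sketch (the dictionary and function names are mine, not part of the model) makes the mapping explicit:

```python
# The four binary qualities, in the order listed above:
# (recollection, recognition, decision, awareness)
DREYFUS_ROLES = {
    ("non-situational", "decomposed", "analytical", "monitoring"): "Novice",
    ("situational", "decomposed", "analytical", "monitoring"): "Competence",
    ("situational", "holistic", "analytical", "monitoring"): "Proficiency",
    ("situational", "holistic", "intuitive", "monitoring"): "Expertise",
    ("situational", "holistic", "intuitive", "absorbed"): "Mastery",
}

def classify(recollection, recognition, decision, awareness):
    """Return the Dreyfus role for a combination of qualities, if one is defined."""
    return DREYFUS_ROLES.get((recollection, recognition, decision, awareness),
                             "not a stage in the model")

print(classify("situational", "holistic", "analytical", "monitoring"))  # Proficiency
```

Only five of the sixteen possible combinations name a role: each stage flips exactly one quality relative to the previous one, in the fixed order recollection, recognition, decision, awareness.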

The first square is the idealist, who is not a doer. The second square is the guardian, who is mediocre and represents homeostasis. The third square is the artisan, who is the doer. The fourth square is the rational, who is transcendent. The fifth square is the ultra-transcendent God square.
QMRA skill is the learned ability to carry out a task with pre-determined results, often within a given amount of time, energy, or both.[2] In other words, a skill is an ability that one possesses. Skills can often be divided into domain-general and domain-specific skills. For example, in the domain of work, some general skills would include time management, teamwork and leadership, and self-motivation, whereas domain-specific skills would be useful only for a certain job. Skill usually requires certain environmental stimuli and situations to assess the level of skill being shown and used.

People need a broad range of skills in order to contribute to a modern economy. A joint ASTD and U.S. Department of Labor study showed that through technology, the workplace is changing, and identified 16 basic skills that employees must have to be able to change with it.[3]

Sixteen is the number of squares in the quadrant model.
QMRMotivational models are central to game design, because without motivation, a player will not be interested in progressing further within a game.[115] Several models for gameplay motivations have been proposed, including Richard Bartle's. Jon Radoff has proposed a four-quadrant model of gameplay motivation that includes cooperation, competition, immersion and achievement.[116] The motivational structure of games is central to the gamification trend, which seeks to apply game-based motivation to business applications.[117] In the end, game designers must know the needs and desires of their customers for their companies to flourish.

There have been various studies on the connection between motivation and games. One particular study was on Taiwanese adolescents and what drives their addiction to games. Two studies by the same people were conducted. The first study revealed that addicted players showed higher intrinsic than extrinsic motivation and more intrinsic motivation than the non-addicted players.[118] It can then be said that addicted players, according to the study's findings, are more internally motivated to play games. They enjoy the reward of playing. There are studies that also show that motivation gives these players more to look for in the future, such as long-lasting experience that they may keep later on in life.[119]

QMRThe Bartle Test of Gamer Psychology is a series of questions and an accompanying scoring formula that classifies players of multiplayer online games (including MUDs and MMORPGs) into categories based on their gaming preferences. The test is based on a 1996 paper by Richard Bartle[1] and was created in 1999–2000 by Erwin Andreasen and Brandon Downey.[2][3][4][5] Although the test has met with some criticism[6] for the dichotomous nature of its question-asking method, as of October 2011, it had been taken over 800,000 times.[7][8]

The result of the Bartle Test is the "Bartle Quotient", which is calculated based on the answers to a series of 30 random questions in the test, and totals 200% across all categories, with no single category exceeding 100%.[9] For example, a person may score "100% Killer, 50% Socializer, 40% Achiever, 10% Explorer", which indicates a player who prefers fighting other players relative to any other area of interest. Scores are typically abbreviated by the first letter of each category, in order of the quotient. In the previous example, this result would be described as a "KSAE" result.

The Bartle Test is based on a character theory. This character theory consists of four characters: Achievers, Explorers, Socializers, and Killers. These are imagined according to a quadrant model where the X axis represents preference for interacting with other players vs. exploring the world and the Y axis represents preference for interaction vs. unilateral action.[10]
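
The source does not reproduce the scoring formula itself, so the sketch below only illustrates the output convention described above: given hypothetical percentage scores per category, it orders them and abbreviates the result by first letters (the function name and input values are mine):

```python
def bartle_result(scores):
    """Abbreviate a Bartle Quotient: categories ordered by score, first letters joined.

    `scores` maps category name to percentage, e.g.
    {'Killer': 100, 'Socializer': 50, 'Achiever': 40, 'Explorer': 10}.
    """
    ordered = sorted(scores, key=scores.get, reverse=True)
    return "".join(name[0] for name in ordered)

# The example from the text: "100% Killer, 50% Socializer, 40% Achiever, 10% Explorer"
print(bartle_result({"Killer": 100, "Socializer": 50,
                     "Achiever": 40, "Explorer": 10}))  # KSAE
```

Note the constraint from the text: the four percentages total 200% across categories, with no single category exceeding 100%.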

QMRJames Marcia created a structural interview designed to classify adolescents into one of four statuses of identity. The identity statuses are used to describe and pinpoint the progression of an adolescent's identity formation process. In James Marcia's theory, the operational definition of identity is whether an individual has explored various alternatives and made firm commitments to: an occupation, religion, sexual orientation and a set of political values.

The four identity statuses in James Marcia's theory are:[5]

Identity Diffusion (also known as Role Confusion): This is the opposite of identity achievement. The individual has not yet resolved their identity crisis, failing to commit to any goals or values and establish future life direction. In adolescents, this stage is characterized by disorganized thinking, procrastination, and avoidance of issues and action.[4]
Identity Foreclosure: This occurs when teenagers accept traditional values and cultural norms, rather than determining their own values. In other words, the person conforms to an identity without exploration as to what really suits him or her best. For instance, teenagers might follow the values and roles of their parents or cultural norms. They might also foreclose on a negative identity, the direct opposite of their parent's values or cultural norms.[4]
Identity Moratorium: This postpones identity achievement by providing temporary shelter. This status provides opportunities for exploration, either in breadth or in depth. Examples of moratoria common in American society include college or the military.[4]
Identity Achievement: This status is attained when the person has solved the identity issues by making commitments to goals, beliefs and values after extensive exploration of different areas.

QMRDr. Montessori defined 4 stages of development and labeled them as the 4 planes of development, noting that within these stages, the development is intense at the beginning, consolidates and then tapers to the next. The 1st and 3rd planes are periods of intense creation, while the 2nd and 4th planes are the calm periods of consolidation. Key to all the planes of development is the individual’s need for independence. This is expressed differently throughout the planes.

Each plane is approximately 6 years and has its own special characteristics as follows:

First Plane – Age 0 - 6 – Early Childhood (Individual Creation Of The Person)

Characterized by the “Absorbent Mind” in which the child’s mind is like a sponge, absorbing all that is in the environment.
At age 0-3 this is unconscious
At age 3-6 this is conscious
Characterized by “Sensitive Periods” which include the intense need for:
Order
Language
Refinement of the senses
Movement
Characterized by concrete thinking
Construction of the physical person
Fundamental formation of the character
Physical independence – “I can do it myself!”
The child wants to be free to work independently within a structured environment doing real activities with an intelligent purpose.

Second Plane – Age 6 – 12 – Childhood (Construction Of The Intelligence)

Characterized by reasoning with imagination and logic.
Intense thirst for knowledge which is so great that if allowed, the child will seek exposure to many things that have been left to high school and college in the past.
“Cosmic Education” – the child wants to know about the whole and his/her place within it and can appreciate the interconnectedness of all things and people.
The “bridge” to abstraction – or the transition from concrete to abstract thinking
Interested in learning about the universe – what is outside of the prepared environment.
Intellectual independence – "I can think it myself".

Third Plane – Ages 12 – 18 Adolescence (Construction Of Social Self)

Characterized by self-concern and self-assessment.
Critical thinking and re-evaluation.
Transition period both physically and mentally.
Beginning to try to find place in this world.
Characterized by construction of social and moral values.
“Erd Kinder” or “Children of the Land” – Dr. Montessori envisioned the child practicing for life in society by working together in a sort of hostel.
Cultural development which has been ongoing is solidified in this plane.
Emotional Independence – “I can stand on my own”.

Fourth Plane – Ages 18 – 24 And Beyond – Adulthood (Construction Of Self Understanding)

Characterized by construction of the spiritual.
Conscious discernment of right and wrong.
Seeking to know one’s own place within the world.
Financial Independence – “I can get it myself”.



QMRInnovation
Main article: Innovation
Industrial and Organizational Psychologists usually consider innovation a variable of lesser importance, and often a counterproductive one to include in job performance appraisals when it is irrelevant to the major job functions for which a given job exists. Nonetheless, they see the value of the variable where its measurement is reliable and valid: where the results reach a statistically significant probability of not being due to chance, can be replicated with a statistically significant ratio of reliability, and, were a court to question the reliability and validity testing, could be defended before that court as reliable and valid.

With the above in mind, innovation is often considered a form of productive behavior that employees exhibit when they come up with novel ideas that further the goals of the organization.[55] This section will discuss three topics of interest: research on innovation; characteristics of an individual that may predict innovation; and how organizations may be structured to promote innovation. According to Jex & Britt, individual and organization research can be divided into four unique research focuses.[55]

Focus One: The examination of the process by which an employee develops innovations and the unique characteristics of individuals which enable them to be highly innovative.[55] This stream of thought focuses primarily on the employee or the individual contributor.
Focus Two: The macro perspective, which focuses on the process by which innovation is diffused within a specific organization. In short, this is the process of communicating an innovation to members of an organization.[112]
Focus Three: The process by which an organization adopts an innovation.[55]
Focus Four: A shared perspective of the role of the individual and the organization's culture which contribute to innovation.[55]
As indicated above, the first focus looks specifically to find certain attributes of an individual that may lead to innovation; therefore, one must ask, "Are there quantifiable predictors that an individual will be innovative?" Research indicates that if various skills, knowledge, and abilities are present, then an individual will be more apt to innovate. These qualities are generally linked to creativity.[55] A brief overview of these characteristics is given below.

Task-relevant skills (general mental ability and job specific knowledge). Task specific and subject specific knowledge is most often gained through higher education; however, it may also be gained by mentoring and experience in a given field.[55]
Creativity-relevant skills (ability to concentrate on a problem for long periods of time, to abandon unproductive searches, and to temporarily put aside stubborn problems). The ability to put aside stubborn problems is referred to by Jex & Britt as productive forgetting.[55] Creativity-relevant skills also require the individual contributor to evaluate a problem from multiple vantage points. One must be able to take on the perspective of various users. For example, an Operation Manager analyzing a reporting issue and developing an innovative solution would consider the perspective of a sales person, assistant, finance, compensation, and compliance officer.
Task motivation (internal desire to perform task and level of enjoyment).[55]
In addition to the role and characteristics of the individual, one must consider what may be done at an organizational level to develop and reward innovation. A study by Damanpour identified four specific characteristics that may predict innovation within an organization.[113] They are:

A population with high levels of technical knowledge
The organization's level of specialization
The level an organization communicates externally
Functional differentiation[55]

QMRServices offered
Kurpius (1978; as cited in Hedge & Borman, 2009)[133] gave four general types of consultation:

services and products (e.g., selection tools)
collecting information and helping the organization identify and solve the problem
collaborating with the client to design and plan changes in the organization
helping the client implement the changes and incorporate them into the organizational culture.
Consultants offer these consulting services to all kinds of organizations in the profit and nonprofit sectors, the public and private sectors, and government.

QMRThe path–goal theory, also known as the path–goal theory of leader effectiveness or the path–goal model, is a leadership theory developed by Robert House, an Ohio State University graduate, in 1971 and revised in 1996. The theory states that a leader's behavior is contingent to the satisfaction, motivation and performance of her or his subordinates. The revised version also argues that the leader engages in behaviors that complement subordinate's abilities and compensate for deficiencies. The path–goal model can be classified as a Transaction leadership theory.

According to the original theory, the manager’s job is viewed as guiding workers to choose the best paths to reach their goals, as well as the organizational goals. The theory argues that leaders will have to engage in different types of leadership behavior depending on the nature and the demands of a particular situation. It is the leader’s job to assist followers in attaining goals and to provide the direction and support needed to ensure that their goals are compatible with the organization’s goals.[2]

A leader’s behavior is acceptable to subordinates when viewed as a source of satisfaction, and motivational when need satisfaction is contingent on performance, and the leader facilitates, coaches, and rewards effective performance. The original path-goal theory identifies achievement-oriented, directive, participative, and supportive leader behaviors:

The directive path-goal clarifying leader behavior refers to situations where the leader lets followers know what is expected of them and tells them how to perform their tasks. The theory argues that this behavior has the most positive effect when the subordinates' role and task demands are ambiguous and intrinsically satisfying.[4]
The achievement-oriented leader behavior refers to situations where the leader sets challenging goals for followers, expects them to perform at their highest level, and shows confidence in their ability to meet this expectation.[4] Occupations in which the achievement motive was most predominant were technical jobs, sales, science, engineering, and entrepreneurship.[2]
The participative leader behavior involves leaders consulting with followers and asking for their suggestions before making a decision. This behavior is predominant when subordinates are highly personally involved in their work.[2]
The supportive leader behavior is directed towards the satisfaction of subordinates needs and preferences. The leader shows concern for the followers’ psychological well being.[4] This behavior is especially needed in situations in which tasks or relationships are psychologically or physically distressing.[2]
Path–goal theory assumes that leaders are flexible and that they can change their style as situations require. The theory proposes two classes of contingency variables that moderate the leader behavior–outcome relationship: environment and follower characteristics. Environment is outside the control of the follower: task structure, the authority system, and the work group. Environmental factors determine the type of leader behavior required if follower outcomes are to be maximized. Follower characteristics are the locus of control, experience, and perceived ability. These personal characteristics of subordinates determine how the environment and the leader are interpreted. Effective leaders clarify the path to help their followers achieve goals and make the journey easier by reducing roadblocks and pitfalls.[1][5] Research demonstrates that employee performance and satisfaction are positively influenced when the leader compensates for shortcomings in either the employee or the work setting. According to Northouse, the theory is useful because it reminds leaders that their central purpose as a leader is to help subordinates define and reach their goals in an efficient manner.[6]

In contrast to the Fiedler contingency model, the path–goal model states that the four leadership styles are fluid, and that leaders can adopt any of the four depending on what the situation demands.

fourth generation nuclear design

QMROgden[18] identifies four functions that projective identification may serve. As in the traditional Kleinian model, it serves as a defense. Projective identification serves as a mode of communication. It is a form of object relations, and “a pathway for psychological change.”[18]:21 As a form of object relationship, projective identification is a way of relating with others who are not seen as entirely separate from the individual. Instead, this relating takes place “between the stage of the subjective object and that of true object relatedness”.

QMRThe psychiatrist George Eman Vaillant introduced a four-level classification of defence mechanisms:

Level I - pathological defences (psychotic denial, delusional projection)
Level II - immature defences (fantasy, projection, passive aggression, acting out)
Level III - neurotic defences (intellectualization, reaction formation, dissociation, displacement, repression)
Level IV - mature defences (humour, sublimation, suppression, altruism, anticipation)

Level 1: Pathological

The mechanisms on this level, when predominating, almost always are severely pathological. These six defences, in conjunction, permit one to effectively rearrange external experiences to eliminate the need to cope with reality. The pathological users of these mechanisms frequently appear irrational or insane to others. These are the "psychotic" defences, common in overt psychosis. However, they are normally found in dreams and throughout childhood as well.[22] They include:

Conversion: The expression of an intrapsychic conflict as a physical symptom; some examples include blindness, deafness, paralysis, or numbness. This phenomenon is sometimes called hysteria.[23]
Delusional projection: Delusions about external reality, usually of a persecutory nature.
Denial: Refusal to accept external reality because it is too threatening; arguing against an anxiety-provoking stimulus by stating it doesn't exist; resolution of emotional conflict and reduction of anxiety by refusing to perceive or consciously acknowledge the more unpleasant aspects of external reality.
Distortion: A gross reshaping of external reality to meet internal needs.
Extreme projection: The blatant denial of a moral or psychological deficiency, which is perceived as a deficiency in another individual or group.
Splitting: A primitive defence. Both harmful and helpful impulses are split off and unintegrated, frequently projected onto someone else. The defended individual segregates experiences into all-good and all-bad categories, with no room for ambiguity and ambivalence. When "splitting" is combined with "projecting", the undesirable qualities that one unconsciously perceives oneself as possessing, one consciously attributes to another.[24]

Level 2: Immature

These mechanisms are often present in adults. These mechanisms lessen distress and anxiety produced by threatening people or by an uncomfortable reality. Excessive use of such defences is seen as socially undesirable, in that they are immature, difficult to deal with and seriously out of touch with reality. These are the so-called "immature" defences and overuse almost always leads to serious problems in a person's ability to cope effectively. These defences are often seen in major depression and personality disorders.[22] They include:

Acting out: Direct expression of an unconscious wish or impulse in action, without conscious awareness of the emotion that drives that expressive behavior.
Fantasy: Tendency to retreat into fantasy in order to resolve inner and outer conflicts.
Idealization: Tending to perceive another individual as having more desirable qualities than he or she may actually have.[25]
Introjection: Identifying with some idea or object so deeply that it becomes a part of that person. For example, introjection occurs when we take on attributes of other people who seem better able to cope with the situation than we do.
Passive aggression: Aggression towards others expressed indirectly or passively, often through procrastination.
Projective identification: The object of projection invokes in that person a version of the thoughts, feelings or behaviours projected.
Projection: A primitive form of paranoia. Projection reduces anxiety by allowing the expression of the undesirable impulses or desires without becoming consciously aware of them; attributing one's own unacknowledged unacceptable or unwanted thoughts and emotions to another; includes severe prejudice and jealousy, hypervigilance to external danger, and "injustice collecting", all with the aim of shifting one's unacceptable thoughts, feelings and impulses onto someone else, such that those same thoughts, feelings, beliefs and motivations are perceived as being possessed by the other.
Somatization: The transformation of uncomfortable feelings towards others into uncomfortable feelings toward oneself: pain, illness, and anxiety.
Wishful thinking: Making decisions according to what might be pleasing to imagine instead of by appealing to evidence, rationality, or reality.

Level 3: Neurotic
These mechanisms are considered neurotic, but fairly common in adults. Such defences have short-term advantages in coping, but can often cause long-term problems in relationships, work and in enjoying life when used as one's primary style of coping with the world.[22] They include:

Displacement: defence mechanism that shifts sexual or aggressive impulses to a more acceptable or less threatening target; redirecting emotion to a safer outlet; separation of emotion from its real object and redirection of the intense emotion toward someone or something that is less offensive or threatening in order to avoid dealing directly with what is frightening or threatening. For example, a mother may yell at her child because she is angry with her husband.
Dissociation: Temporary drastic modification of one's personal identity or character to avoid emotional distress; separation or postponement of a feeling that normally would accompany a situation or thought.
Hypochondriasis: An excessive preoccupation or worry about having a serious illness.
Intellectualization: A form of isolation; concentrating on the intellectual components of a situation so as to distance oneself from the associated anxiety-provoking emotions; separation of emotion from ideas; thinking about wishes in formal, affectively bland terms and not acting on them; avoiding unacceptable emotions by focusing on the intellectual aspects (isolation, rationalization, ritual, undoing, compensation, and magical thinking).
Isolation: Separation of feelings from ideas and events, for example, describing a murder with graphic details with no emotional response.
Rationalization (making excuses): Convincing oneself that no wrong has been done and that all is or was all right through faulty and false reasoning. An indicator of this defence mechanism can be seen socially as the formulation of convenient excuses.
Reaction formation: Converting unconscious wishes or impulses that are perceived to be dangerous or unacceptable into their opposites; behaviour that is completely the opposite of what one really wants or feels; taking the opposite belief because the true belief causes anxiety.
Regression: Temporary reversion of the ego to an earlier stage of development rather than handling unacceptable impulses in a more adult way, for example, using whining as a method of communicating despite already having acquired the ability to speak with appropriate grammar.[26]
Repression: The process of attempting to repel desires towards pleasurable instincts, caused by a threat of suffering if the desire is satisfied; the desire is moved to the unconscious in the attempt to prevent it from entering consciousness;[27] seemingly unexplainable naivety, memory lapse or lack of awareness of one's own situation and condition; the emotion is conscious, but the idea behind it is absent.[28]
Undoing: A person tries to 'undo' an unhealthy, destructive or otherwise threatening thought by acting out the reverse of the unacceptable. Involves symbolically nullifying an unacceptable or guilt provoking thought, idea, or feeling by confession or atonement.
Upward and downward social comparisons: A defensive tendency that is used as a means of self-evaluation. Individuals will look to another individual or comparison group who are considered to be worse off in order to dissociate themselves from perceived similarities and to make themselves feel better about themselves or their personal situation.
Withdrawal: Withdrawal is a more severe form of defence. It entails removing oneself from events, stimuli, and interactions under the threat of being reminded of painful thoughts and feelings.

Level 4: Mature

These are commonly found among emotionally healthy adults and are considered mature, even though many have their origins in an immature stage of development. They have been adapted through the years in order to optimise success in human society and relationships. The use of these defences enhances pleasure and feelings of control. These defences help to integrate conflicting emotions and thoughts, whilst still remaining effective. Those who use these mechanisms are usually considered virtuous.[22] Mature defences include:

Acceptance: A person's assent to the reality of a situation, recognizing a process or condition (often a difficult or uncomfortable situation) without attempting to change it, protest, or exit. Religions and psychological treatments often suggest the path of acceptance when a situation is both disliked and unchangeable, or when change may be possible only at great cost or risk.
Altruism: Constructive service to others that brings pleasure and personal satisfaction.
Anticipation: Realistic planning for future discomfort.
Courage: The mental ability and willingness to confront conflicts, fear, pain, danger, uncertainty, despair, obstacles, vicissitudes or intimidation. Physical courage often extends lives, while moral courage preserves the ideals of justice and fairness.
Emotional self-regulation: The ability to respond to the ongoing demands of experience with the range of emotions in a manner that is socially tolerable. Emotional self-regulation refers to the processes people use to modify the type, intensity, duration, or expression of various emotions.
Emotional self-sufficiency: Not being dependent on the validation (approval or disapproval) of others.
Forgiveness: Cessation of resentment, indignation or anger as a result of a perceived offence, disagreement, or mistake, or ceasing to demand retribution or restitution.
Gratitude: A feeling of thankfulness or appreciation involving appreciation of a wide range of people and events. Gratitude is likely to bring higher levels of happiness, and lower levels of depression and stress. Throughout history, gratitude has been given a central position in religious and philosophical theories.
Humility: A mechanism by which a person, considering their own defects, has a humble self-opinion. Humility is intelligent self-respect which keeps one from thinking too highly or too meanly of oneself.
Humour: Overt expression of ideas and feelings (especially those that are unpleasant to focus on or too terrible to talk about directly) that gives pleasure to others. The thoughts retain a portion of their innate distress, but they are "skirted around" by witticism, for example self-deprecation.
Identification: The unconscious modelling of one's self upon another person's character and behaviour.
Mercy: Compassionate behavior on the part of those in power.
Mindfulness: Adopting a particular orientation toward one’s experiences in the present moment, an orientation that is characterised by curiosity, openness, and acceptance.
Moderation: The process of eliminating or lessening extremes and staying within reasonable limits. It necessitates self-restraint which is imposed by oneself on one's own feelings, desires etc.
Patience: Enduring difficult circumstances (delay, provocation, criticism, attack etc.) for some time before responding negatively. Patience is a recognized virtue in many religions.
Respect: Willingness to show consideration or appreciation. Respect can be a specific feeling of regard for the actual qualities of a person or feeling being and also specific actions and conduct representative of that esteem. Relationships and contacts that are built without the presence of respect are seldom long term or sustainable. The lack of respect is at the very heart of most conflict in families, communities, and nations.
Sublimation: Transformation of unhelpful emotions or instincts into healthy actions, behaviours, or emotions, for example, playing a heavy contact sport such as football or rugby can transform aggression into a game.[26]
Suppression: The conscious decision to delay paying attention to an emotion or need in order to cope with the present reality; making it possible to later access uncomfortable or distressing emotions whilst accepting them.
Tolerance: The practice of deliberately allowing or permitting a thing of which one disapproves.







Sociology chapter
QMRRitzer highlighted four primary components of McDonaldization:

Efficiency – the optimal method for accomplishing a task. In this context, Ritzer has a very specific meaning of "efficiency". In the example of McDonald's customers, it is the fastest way to get from being hungry to being full. Efficiency in McDonaldization means that every aspect of the organization is geared toward the minimization of time.[3]
Calculability – objective should be quantifiable (e.g., sales) rather than subjective (e.g., taste). McDonaldization developed the notion that quantity equals quality, and that a large amount of product delivered to the customer in a short amount of time is the same as a high quality product. This allows people to quantify how much they're getting versus how much they’re paying. Organizations want consumers to believe that they are getting a large amount of product for not a lot of money. Workers in these organizations are judged by how fast they are instead of the quality of work they do.[3]
Predictability – standardized and uniform services. "Predictability" means that no matter where a person goes, they will receive the same service and receive the same product every time when interacting with the McDonaldized organization. This also applies to the workers in those organizations. Their tasks are highly repetitive, highly routine, and predictable.[3]
Control – standardized and uniform employees, and the replacement of humans by non-human technologies.
With these four principles of the fast food industry, a strategy which is rational within a narrow scope can lead to outcomes that are harmful or irrational. As these processes spread to other parts of society, modern society’s new social and cultural characteristics are created. For example, as McDonald’s enters a country and consumer patterns are unified, cultural hybridization occurs.



QMRFORGE is a United States based nonprofit organization that works with displaced communities in Africa. FORGE was founded by Stanford University graduate Kjerstin Erickson at the age of 20 in 2003. Since its founding, FORGE has implemented over sixty community development projects that have served more than 70,000 refugees in four refugee camps in Zambia and Botswana. An official Operating Partner of the United Nations refugee agency (UNHCR), FORGE works in Zambia, hand-in-hand with refugees from the Democratic Republic of Congo, Angola, Rwanda, Burundi and Sudan.
FORGE’s Mission
FORGE aims to build upon the capacity of African refugees to cultivate empowered communities and to create the conditions for peace and prosperity in their countries. FORGE believes that individuals affected by war are a key factor to breaking the cycle of war and poverty in Africa. Instead of providing relief work for refugees, FORGE aims at providing education and training to refugees in order to empower them with greater economic and leadership capacity. There are five main project areas that FORGE is now working on: Education, Economic Development, Health Education, Women’s Empowerment and Community Enrichment.
Recently, FORGE has launched a new program called the Collaborative Project Planning Process (CPPP). Believing that the most effective, relevant and sustainable development projects come from the insights and vision of the refugee community itself, FORGE launched the CPPP to provide refugees with the resources that are unreachable to them when building education and enrichment projects for their communities.
Collaborative Project Planning Process (CPPP)
The collaborative project is a partnership built between FORGE and refugee leaders in the refugee camps where FORGE currently works. The planning of the project involves the following four stages.
Stage I - Identification of needs: The refugee community identifies its top needs and priorities.
Stage II – Assessment of needs: Selected community leaders research and evaluate these needs.
Stage III – Project proposal: The best intervention point is selected and leaders submit a Project Proposal to FORGE’s website for funding.
Stage IV – Funding and implementation: Once funded, the project is implemented, monitored and sustained by the community.
To begin the CPPP, FORGE first recruits potential refugee leaders who are especially capable of spearheading development initiatives in the refugee camps. The CPPP then goes through the Stages I, II and III. The preparation work is all done by the refugee leaders and the role of the FORGE staff member is to give advice to the refugee leaders when necessary. After all four stages of the CPPP are complete, the projects designed by the refugee leaders will begin and they are to run the projects for at least a year. The funding for new projects is coordinated by FORGE headquarters in the United States and is managed by a FORGE field staff member who serves as a facilitator and an adviser of the refugee leaders as well as a bridge of communication between the refugee camps and the US office.
Fundraising for new projects is basically the responsibility of the US office, although most international staff based in Zambia, including project managers, are required to raise a minimum of $5,000 to offset the costs of their monthly stipends. The fundraising for the Collaborative Projects happens between Stage II and Stage III. There are currently three projects underway in this collaborative approach: the Block H Reliable Seed & Market Program, Mwangaza Education Centers and FORGE Health Service. These projects are now going through Stage II and heading towards Stage III.
FORGE’s Development and Future
Since its establishment, FORGE has worked in four refugee camps in Botswana and Zambia in central Africa. As the refugee camp in Botswana was beginning to decrease in size and the refugees were repatriating, FORGE started to retreat from the area and focus its work in Zambia where a large number of refugees have come from the countries surrounding Zambia due to civil wars.
FORGE has been working closely with the refugees in three refugee camps in Zambia: Meheba, Kala and Mwange. As peace was proclaimed in the Democratic Republic of the Congo, the Congolese refugees in Kala and Mwange camps were beginning to repatriate under UNHCR's assistance. In light of this, FORGE is now planning to expand its work to the Congo in the coming years. FORGE aims to continue to support and help the former refugees to rebuild their lives in their home country of Congo by providing education, training and counseling until the people genuinely feel safe living in their homeland and have confidence to rebuild their communities on their own. As of December 2009, FORGE no longer operates in Kala and Mwange refugee camps.


QMRSocial style does NOT focus on the innermost workings of one's personality, nor on one's values or beliefs. A social style is a pervasive and enduring set of interpersonal behaviors. It is quite simply how one acts: what one says and does.

David W. Merrill and Roger H. Reid, Personal Styles and Effective Performance. Radnor, PA: Chilton, 1981.

Merrill and Reid initially studied several large insurance companies: they observed and recorded interactions and conflicts between management and employees, between managers and managers, between employees and other employees, between agents and customers, and between agents and other agents. They later replicated their studies in several other service industries. Prior to their work, most published research about conflicts on the job had been based on studies in industrial settings, where the social gap between employees and management is more profound and more obvious.

Robert Bolton and Dorothy G. Bolton, Social Style/Management Style. New York: American Management Association, 1984

Bolton and Bolton expanded the work of Merrill and Reid and developed instrumentation for diagnosing and assessing social style. They identified four primary social styles: amiable, analytical, driver, and expressive, about equally divided among managers and among employees in service industries and government organizations. Each person tends to employ one of these social styles, and the dominant style affects the way the individual works and interacts with others in conflict and non-conflict situations.

These styles are defined by two behavioral variables or dimensions: assertiveness and responsiveness.

Assertiveness = the degree to which a person's behaviors are seen by others as forceful or directive.

Responsiveness = the degree to which a person's behaviors are seen by others as emotionally expressive. More responsive people react noticeably to their own emotions or to the emotions of others; less responsive people are more guarded in their emotional expression.

While no one style works better than any other, flexibility has been shown to distinguish the successful manager of conflict from the unsuccessful.

Flexibility = the ability to get along with people whose styles differ from one's own.
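As a rough illustration, the two dimensions can be treated as coordinates and the four styles as quadrants. The 0-to-1 score scale and the 0.5 threshold below are assumptions for this sketch, not part of Merrill and Reid's actual instrument:

```python
def social_style(assertiveness: float, responsiveness: float) -> str:
    """Map two behavioural scores in [0, 1] to one of the four styles.

    The quadrant assignment follows the layout commonly attributed to
    Merrill & Reid and Bolton & Bolton: high assertiveness with high
    responsiveness -> expressive; high/low -> driver; low/high ->
    amiable; low/low -> analytical.
    """
    if assertiveness >= 0.5:
        return "expressive" if responsiveness >= 0.5 else "driver"
    return "amiable" if responsiveness >= 0.5 else "analytical"
```

For example, a forceful but emotionally guarded manager (high assertiveness, low responsiveness) would land in the "driver" quadrant under this mapping.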




QMRThe initialisms LGBT or GLBT are not agreeable to everyone that they encompass.[70] For example, some argue that transgender and transsexual causes are not the same as that of lesbian, gay, and bisexual (LGB) people.[71] This argument centers on the idea that transgender and transsexuality have to do with gender identity, or a person's understanding of being or not being a man or a woman irrespective of their sexual orientation.[24] LGB issues can be seen as a matter of sexual orientation or attraction.[24] These distinctions have been made in the context of political action in which LGB goals, such as same-sex marriage legislation and human rights work (which may not include transgender and intersex people), may be perceived to differ from transgender and transsexual goals.[24] The fourth is always different.

For the purpose of classifying forms of social organization, one can map them onto a two-axis coordinate system that divides space into four quadrants, as follows:

Participatory democracies, which are designed to simultaneously optimize the 'autonomy' of the individual and the 'interdependence' of individuals, belong in quadrant four.

See "The New Frontier in Democratic Theory and Practice: Organizational Forms that Simultaneously Optimize Autonomy & Community".
The Four Organizational Forms

Since each quadrant has unique characteristics that distinguish it from each of the others, it follows that a different type of organizational form can be associated with each quadrant and that these organizational forms will inhibit or enhance certain types of group process. Ian Mitroff, in his book Stakeholders of the Organizational Mind, identifies four organizational forms.[1] These four forms are: bureaucratic, matrix (R & D), familial, and organic adaptive. Each of the four forms can be roughly correlated to one of the four quadrants.

Mitroff identifies the bureaucratic organizational form as impersonal and focused upon the roles to be filled, not the individuals themselves. It is authoritarian, with "a single leader at the top and a well-defined hierarchical line of authority that extends from the very top down to all the lower rungs of the organization."[2] The bureaucratic organizational form also allows for individualism to be optimized. Individuals within bureaucracies individuate themselves by rising within the organization, thereby obtaining power and independence. The individual's goal is to become part of the ruling class, which has special privileges over others. The operating principle is that the cream of society rises to the top according to its merits, in other words, meritocracy.[3] This is synonymous with Quadrant three, where the individual values being independent - not being embedded in social relationships (impersonal) - while at the same time expecting decisions to be made for him or her (well-defined hierarchical lines of authority).

The matrix, also known as "research and development", organizational form shares the impersonal orientation with the bureaucratic form. However, instead of rigid lines of authority and a well-defined hierarchy, the matrix organizational form is more flexible, allowing individuals and groups to be "freer to organize and reorganize" according to the circumstances. The matrix form is similar to what is described as "project management." Project management allows the organization to form and re-form around particular projects, with different free-agent individuals playing functional roles. For instance, on a particular research and development project concerned with designing an aircraft, a particular individual may be assigned the role of project manager and would be surrounded by a hand-picked crew of specialists. The structure of the organization is built around problem solving.[4] "The emphasis is on discovery, invention, and production of new technologies... constantly seeking new ideas to anticipate and create new external markets, not respond to them."[5] This form is similar in dynamics to that of Quadrant one, where individuals are less concerned about their relationships to one another (impersonal) and more interested in the pursuit of their own good outside of the confines of set rules and structures (freer to organize and reorganize).

The familial organizational form is identified as being concerned with interpersonal relationships in the environment as well as attending to the concrete details and hard facts. According to Mitroff, the familial organization is the extreme opposite of the matrix organization. Typically the heroes of this organizational form "are those very special people who are able to create a highly personal, very warm climate in their organization ... indeed, the organization becomes just like a home, like a family."[6] The following passage is an excerpt from a story a person describes to Mitroff about the ideal familial organization:

...Everyone I met was very friendly and in the days to come proved to be most helpful. My duties were explained to me quite clearly and thoroughly. The procedure with which I had to work was written in such a way that there was very little chance of misinterpretation. All the staff worked quite well with each other... the separate department heads would meet once a week with the administrator who would keep them informed of new developments. The department heads would keep the workers informed.[7]
This story demonstrates not only the extremely personal nature of organizational interrelationships but also a highly, albeit friendly, hierarchical structure. This organizational form approximates the characteristics of Quadrant two, with its strong emphasis on social interdependence (the friendly and personal nature of the interrelationships) and heteronomy (hierarchy).[8] Interestingly, the opposing relationship between the familial organization and the matrix organization is parallel to the opposing relationship between Quadrants one and two.

The fourth organizational form that Mitroff describes is the organic adaptive.[9] The organic adaptive organization is the extreme opposite of the bureaucratic organization. Therefore, if the bureaucratic organization is highly authoritarian and structured, with well-defined roles and behaviors, then the organic adaptive would be "completely decentralized with no clear lines of authority, no central leader, and no fixed, prescribed rules of behavior."

The above diagram demonstrates how the features of Mitroff's four organizational forms fit with the four Quadrants. Interestingly, the two systems quite clearly overlap and do not contradict each other.

For example, the organizational form of bureaucracy associated with Quadrant three inhibits exactly those processes that Quadrant four enhances. In other words, Quadrant three is the shadow of Quadrant four. Quadrant three organizations center on impersonal control, certainty, and specificity. These characteristics become manifest through clearly defined "stable" hierarchical structures, specialization of roles, and impersonal relations. The opposite of the bureaucratic organization, organic adaptive organizations have the features of role flexibility, no centralized structuring system, and an emphasis on process over task. The benefits of such organizational features are enhanced creativity, innovation, high interaction and reciprocal feedback.

The organic adaptive organizational form allows for greater openness to processes that include mutual reciprocity, which comes out of respect for the needs of the individual within the collaborative process. In contrast to bureaucracy, this collaborative process is not imposed by a preset agenda or hegemonic ideology. Folded into the term "organic adaptive" is the idea of an organism (organic) that is self-organizing (adaptive).
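The correlations drawn above can be condensed into a small sketch. Reducing each quadrant to two boolean distinctions (personal versus impersonal relations, hierarchical versus decentralized authority) is a simplification of the text's scheme, not Mitroff's own formalization:

```python
def quadrant_of(personal: bool, hierarchical: bool) -> int:
    """Return the quadrant a form falls into under the text's scheme."""
    if personal:
        return 2 if hierarchical else 4   # familial / organic adaptive
    return 3 if hierarchical else 1       # bureaucratic / matrix

# Mitroff's four forms, keyed by the two distinctions the text uses.
FORMS = {
    "matrix (R & D)":   (False, False),  # impersonal, free to reorganize
    "familial":         (True,  True),   # personal, friendly hierarchy
    "bureaucratic":     (False, True),   # impersonal, well-defined authority
    "organic adaptive": (True,  False),  # interdependent, decentralized
}
```

Note how the sketch preserves the two "extreme opposite" pairs described in the text: matrix and familial differ on both axes, as do bureaucratic and organic adaptive.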

At this point we are going to focus exclusively on Quadrant four. Initially, any efforts to do this will be similar to the experience of "square" in his first encounter with "sphere" (see chapter one). In order to avoid the obstacles of experience and language that may continue to inhibit developing one's awareness and appreciation of the qualities and attributes of Quadrant four, consider this a journey to a whole new world with different rules and customs. Unlike a journey to a faraway land where you can pack your goods and prepare a schedule, this journey involves turning one's beliefs inside-out, much like Alice discovering the unique properties of the world inside a rabbit's home in Lewis Carroll's novel Alice's Adventures in Wonderland. Things happened to Alice that would not be possible in the "real" world, like shrinking and growing larger by eating and drinking, or finding a Cheshire cat that could disappear into its own smile. Alice could not operate effectively in this newly "discovered" wonderland until she was able to suspend her original belief system from the world outside the rabbit hole. Her adventures could not be explained by what she already knew. Similarly, to escape the gravity of Quadrant three's framework in order to travel to the world of Quadrant four, one must actively engage in seeking to understand Quadrant four in terms of its own values and experiences. Otherwise, one will be stuck understanding Quadrant four in terms of what it is not, much like describing "Life" as being "not-Death". For Alice, many of the prominent characteristics of her world outside the rabbit hole are present in Wonderland, such as gardens, houses with furniture, and even some of the social rituals like "four o'clock tea". However, the principles around which these things are organized are very different from those she learned outside. In addition, a number of things are present in one world and not the other. The same can be said for how organizations exist and function within Quadrant three as compared to Quadrant four.
Many characteristics associated with organizations can be found within both quadrants, but their nature, priority, and usage reflect completely opposite organizing principles. In addition, many features are unique to one quadrant and will not be found in the other. One way of visualizing this would be to see two overlapping circles. The overlapping part exhibits those characteristics that are recognizable in both worlds, so in a way each contains a part of the other.

QMREquity theory consists of four propositions:

Individuals seek to maximize their outcomes (where outcomes are defined as rewards minus costs).[3]
Groups can maximize collective rewards by developing accepted systems for equitably apportioning rewards and costs among members. Systems of equity will evolve within groups, and members will attempt to induce other members to accept and adhere to these systems. The only way groups can induce members to equitably behave is by making it more profitable to behave equitably than inequitably. Thus, groups will generally reward members who treat others equitably and generally punish (increase the cost for) members who treat others inequitably.
When individuals find themselves participating in inequitable relationships, they become distressed. The more inequitable the relationship, the more distress individuals feel. According to equity theory, both the person who gets "too much" and the person who gets "too little" feel distressed. The person who gets too much may feel guilt or shame. The person who gets too little may feel angry or humiliated.
Individuals who perceive that they are in an inequitable relationship attempt to eliminate their distress by restoring equity. The greater the inequity, the more distress people feel and the more they try to restore equity. (Walster, Traupmann and Walster, 1978)
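Proposition 1 gives a concrete definition (outcomes are rewards minus costs) that can be sketched directly. The distress measure below, an absolute gap between two parties' outcomes, is an illustrative assumption for this sketch, not Walster, Traupmann and Walster's actual formula:

```python
def outcome(rewards: float, costs: float) -> float:
    """Proposition 1: an outcome is rewards minus costs."""
    return rewards - costs

def distress(own_outcome: float, others_outcome: float) -> float:
    """Propositions 3-4: both the over- and under-benefiting party feel
    distress, and it grows with the size of the inequity. The absolute
    gap makes the measure symmetric, matching the claim that both the
    person who gets "too much" and the one who gets "too little" are
    distressed."""
    return abs(own_outcome - others_outcome)
```

Under this sketch, two parties with rewards/costs of 10/4 and 8/2 both net an outcome of 6, so neither feels distress; any asymmetric split produces equal distress on both sides.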

QMRCecil Alec Mace carried out the first empirical studies in 1935.[3]

Edwin A. Locke began to examine goal setting in the mid-1960s and continued researching goal setting for over thirty years.[2][4][5] Locke derived the idea for goal-setting from Aristotle's form of final causality. Aristotle speculated that purpose can cause action; thus, Locke began researching the impact goals have on human activity. Locke developed and refined his goal-setting theory in the 1960s, publishing his first article on the subject, "Toward a Theory of Task Motivation and Incentives", in 1968.[6] This article established the positive relationship between clearly identified goals and performance.

Concept[edit]
Goals that are deemed difficult to achieve and specific tend to increase performance more than goals that are not.[7] A goal can become more specific through quantification or enumeration (should be measurable), such as by demanding "...increase productivity by 50%," or by defining certain tasks that must be completed.

Setting goals affects outcomes in four ways:[8]

Choice: goals narrow attention and direct efforts to goal-relevant activities, and away from perceived undesirable and goal-irrelevant actions.
Effort: goals can lead to more effort; for example, if one typically produces 4 widgets an hour, and has the goal of producing 6, one may work more intensely towards the goal than one would otherwise.
Persistence: someone becomes more likely to work through setbacks if pursuing a goal.
Cognition: goals can lead individuals to develop and change their behavior.
In business[edit]
In business, goal setting encourages participants to put in substantial effort. Also, because every member has defined expectations for their role, little room is left for inadequate, marginal effort to go unnoticed.

Managers cannot constantly drive motivation or keep track of an employee's work on a continuous basis. Goals are therefore an important tool for managers, since goals can function as a self-regulatory mechanism that helps employees prioritize tasks.[9][10]

The four mechanisms through which goal setting can affect individual performance are:

Goals focus attention toward goal-relevant activities and away from goal-irrelevant activities.
Goals serve as an energizer: Higher goals induce greater effort, while low goals induce lesser effort.
Goals affect persistence; constraints with regard to resources affect work pace.
Goals activate cognitive knowledge and strategies that help employees cope with the situation at hand.

QMRMichael Porter wrote in 1980 that formulation of competitive strategy includes consideration of four key elements:

Company strengths and weaknesses;
Personal values of the key implementers (i.e., management and the board);
Industry opportunities and threats; and
Broader societal expectations.[3]
The first two elements relate to factors internal to the company (i.e., the internal environment), while the latter two relate to factors external to the company (i.e., the external environment).[3] These elements are considered throughout the strategic planning process.

QMRMcKinsey & Company developed a capability maturity model in the 1970s to describe the sophistication of planning processes, with strategic management ranked the highest. The four stages include:

Financial planning, which is primarily about annual budgets and a functional focus, with limited regard for the environment;
Forecast-based planning, which includes multi-year financial plans and more robust capital allocation across business units;
Externally oriented planning, where a thorough situation analysis and competitive assessment is performed; and
Strategic management, where widespread strategic thinking occurs and a well-defined strategic framework is used.
