Tuesday, May 3, 2016

Quadrant Model of Reality Book 6 Science

Science Chapter






















Physics chapter


In Western culture, the four principal lunar phases are new moon, first quarter, full moon, and third quarter (also known as last quarter). These are the instants when the Moon's apparent geocentric celestial longitude minus the Sun's apparent geocentric celestial longitude is 0°, 90°, 180° and 270°, respectively. The interval between successive principal phases averages roughly 7.38 days, but it varies slightly because the Moon's orbit is slightly elliptical, and thus its speed in orbit is not constant.
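As a sketch, the definition above can be expressed as a small function that maps the Moon-Sun longitude difference to the corresponding principal phase (phase names and angles as in the text; the numeric inputs are illustrative):

```python
# Classify the principal lunar phase from the elongation: the Moon's
# apparent geocentric celestial longitude minus the Sun's, reduced
# modulo 360 degrees, per the 0/90/180/270 degree definitions above.
def principal_phase(moon_long_deg, sun_long_deg):
    elongation = (moon_long_deg - sun_long_deg) % 360
    phases = {0: "new moon", 90: "first quarter",
              180: "full moon", 270: "third quarter"}
    # An exact hit on one of the four defining angles is a principal
    # phase; any other elongation falls between principal phases.
    if elongation in phases:
        return phases[elongation]
    return "between principal phases"

print(principal_phase(123, 123))   # -> new moon
print(principal_phase(303, 123))   # -> full moon
```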


The four-factor formula, also known as Fermi's four-factor formula, is used in nuclear engineering to determine the multiplication factor of a nuclear chain reaction in an infinite medium. The formula is[1]

k_{\infty} = \eta f p \varepsilon

where the four factors are:

\eta — the reproduction factor (eta): fission neutrons produced per neutron absorbed in the fuel isotope; \eta = \frac{\nu \sigma_f^F}{\sigma_a^F}

f — the thermal utilization factor: neutrons absorbed by the fuel isotope / neutrons absorbed anywhere; f = \frac{\Sigma_a^F}{\Sigma_a}

p — the resonance escape probability: fission neutrons slowed to thermal energies without absorption / total fission neutrons; p \approx \exp\left( -\frac{\sum_{i=1}^{N} N_i I_{r,A,i}}{\left( \overline{\xi} \Sigma_p \right)_{mod}} \right)

\varepsilon — the fast fission factor: total number of fission neutrons / number of fission neutrons from thermal fissions alone; \varepsilon \approx 1 + \frac{1-p}{p}\frac{u_f \nu_f P_{FAF}}{f \nu_t P_{TAF} P_{TNL}}
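Since k∞ is simply a product of the four factors, it can be computed directly. The factor values below are illustrative textbook-style numbers, not data for any particular reactor:

```python
# Infinite-medium multiplication factor from the four-factor formula,
# k_inf = eta * f * p * epsilon.
def k_infinity(eta, f, p, epsilon):
    return eta * f * p * epsilon

eta = 2.02       # reproduction factor
f = 0.80         # thermal utilization factor
p = 0.75         # resonance escape probability
epsilon = 1.03   # fast fission factor

k_inf = k_infinity(eta, f, p, epsilon)
print(round(k_inf, 3))  # prints 1.248; > 1 means a supercritical infinite medium
```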



In special and general relativity, the four-current is the four-dimensional analogue of the electric current density, which is used in the geometric context of four-dimensional spacetime, rather than three-dimensional space and time separately. Mathematically it is a four-vector, and is Lorentz covariant.

Analogously, it is possible to have any form of "current density", meaning the flow of a quantity per unit time per unit area, see current density for more on this quantity.

This article uses the summation convention for indices, see covariance and contravariance of vectors for background on raised and lowered indices, and raising and lowering indices on how to switch between them.




In special relativity, a four-vector is an object with four components that transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the (½,½) representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations. They include spatial rotations, boosts (a change by a constant velocity to another inertial reference frame), and temporal and spatial inversions.

Four-vectors describe, for instance, position xμ in spacetime modeled as Minkowski space, a particle's four-momentum pμ, the amplitude of the electromagnetic four-potential Aμ(x) at a point x in spacetime, and the elements of the subspace spanned by the gamma matrices inside the Dirac algebra.

The Lorentz group may be represented by 4×4 matrices Λ. The action of a Lorentz transformation on a general contravariant four-vector X (like the examples above), regarded as a column vector with Cartesian coordinates with respect to an inertial frame in the entries, is given by

X^\prime = \Lambda X,
(matrix multiplication) where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the corresponding covariant vectors xμ, pμ and Aμ(x). These transform according to the rule

X^\prime = {(\Lambda^{-1})}^\mathrm T X,
where T denotes the matrix transpose. This rule is different from the above rule. It corresponds to the dual representation of the standard representation. However, for the Lorentz group the dual of any representation is equivalent to the original representation. Thus the objects with covariant indices are four-vectors as well.
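To make the transformation rule concrete, here is a minimal sketch (in units with c = 1) of a boost Λ along x acting on a contravariant four-vector X, confirming that the Minkowski magnitude is preserved; the sample components are made up:

```python
import math

# A boost along x with velocity v (c = 1) acting on a contravariant
# four-vector X = (ct, x, y, z), i.e. X' = Lambda X as in the text.
def boost_x(v):
    g = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor gamma
    return [[g, -g * v, 0, 0],
            [-g * v, g, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def apply(L, X):
    # Plain matrix-vector multiplication.
    return [sum(L[i][j] * X[j] for j in range(4)) for i in range(4)]

def minkowski_norm(X):
    # Magnitude in the (+, -, -, -) signature.
    return X[0]**2 - X[1]**2 - X[2]**2 - X[3]**2

X = [5.0, 3.0, 1.0, 2.0]           # made-up four-vector components
Xp = apply(boost_x(0.6), X)
# The Lorentz transformation preserves the Minkowski magnitude:
print(minkowski_norm(X), minkowski_norm(Xp))
```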

For an example of a well-behaved four-component object in special relativity that is not a four-vector, see bispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by a representation other than the standard representation. In this case, the rule reads X′ = Π(Λ)X, where Π(Λ) is a 4×4 matrix other than Λ. Similar remarks apply to objects with fewer or more components that are well-behaved under Lorentz transformations. These include scalars, spinors, tensors and spinor-tensors.

This article considers four-vectors in the context of special relativity. Although the concept of four-vectors also extends to general relativity, some of the results stated in this article require modification in general relativity.




In physics, in particular in special relativity and general relativity, a four-velocity is a four-vector in four-dimensional spacetime that represents the relativistic counterpart of velocity, which is a three-dimensional vector in space.[nb 1]

Physical events correspond to mathematical points in time and space, the set of all of them together forming a mathematical model of physical four-dimensional spacetime. The history of an object traces a curve in spacetime, called its world line. If the object is massive, so that its speed is less than the speed of light, the world line may be parametrized by the proper time of the object. The four-velocity is the rate of change of four-position with respect to the proper time along the curve. The velocity, in contrast, is the rate of change of the position in (three-dimensional) space of the object, as seen by an observer, with respect to the observer's time.

The value of the magnitude of an object's four-velocity, i.e. the quantity obtained by applying the metric tensor g to the four-velocity u, ||u||² = u ⋅ u = gμνuνuμ, is always equal to ±c², where c is the speed of light. Whether the plus or minus sign applies depends on the choice of metric signature. For an object at rest, its four-velocity is parallel to the direction of the time coordinate with u0 = c. A four-velocity is thus the normalized future-directed timelike tangent vector to a world line, and is a contravariant vector. Though it is a vector, addition of two four-velocities does not yield a four-velocity: the space of four-velocities is not itself a vector space.[nb 2]
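A quick numerical check of this normalization in the (+,−,−,−) signature, where u ⋅ u = +c² for any three-velocity with |v| < c; the chosen velocity components are arbitrary:

```python
import math

# Four-velocity u = gamma * (c, vx, vy, vz).  Its Minkowski magnitude
# equals c^2 in the (+,-,-,-) signature, independent of the velocity.
c = 299_792_458.0

def four_velocity(vx, vy, vz):
    v2 = vx*vx + vy*vy + vz*vz
    gamma = 1.0 / math.sqrt(1.0 - v2 / c**2)
    return (gamma * c, gamma * vx, gamma * vy, gamma * vz)

def norm2(u):
    return u[0]**2 - u[1]**2 - u[2]**2 - u[3]**2

u = four_velocity(0.5 * c, 0.2 * c, 0.0)   # arbitrary sublight velocity
print(norm2(u) / c**2)                     # -> 1.0 up to rounding
```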


In the theory of relativity, four-acceleration is a four-vector (vector in four-dimensional spacetime) that is analogous to classical acceleration (a three-dimensional vector). Four-acceleration has applications in areas such as the annihilation of antiprotons, resonance of strange particles and radiation of an accelerated charge.[1]


In the special theory of relativity four-force is a four-vector that replaces the classical force.



In special relativity, four-momentum is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with energy E and three-momentum p = (px, py, pz) = mu = γmv, where v is the particle's three-velocity and γ the Lorentz factor, is

p=(p^{0},p^{1},p^{2},p^{3})=\left({E \over c},p_{x},p_{y},p_{z}\right).
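As a sketch, the invariant built from this four-vector, E² − |p|²c², recovers the rest mass via m²c⁴ = E² − |p|²c²; the electron mass and speed below are just example inputs:

```python
import math

# Contravariant four-momentum p = (E/c, px, py, pz) with p = gamma m v.
c = 299_792_458.0

def four_momentum(m, vx, vy, vz):
    v2 = vx**2 + vy**2 + vz**2
    gamma = 1.0 / math.sqrt(1.0 - v2 / c**2)
    E = gamma * m * c**2
    return (E / c, gamma * m * vx, gamma * m * vy, gamma * m * vz)

m = 9.109e-31                       # electron rest mass, kg (example input)
p = four_momentum(m, 0.8 * c, 0.0, 0.0)
E = p[0] * c
p3 = math.sqrt(p[1]**2 + p[2]**2 + p[3]**2)
# The invariant E^2 - |p|^2 c^2 = m^2 c^4 gives back the rest mass:
m_recovered = math.sqrt(E**2 - (p3 * c)**2) / c**2
print(m_recovered)
```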






















Chemistry chapter

The Potez 4D was a four-cylinder, inverted inline aircraft engine. It was first built shortly before World War II, but did not enter full production until 1949. Like the other D-series engines, the cylinders had a bore of 125 mm (4.9 in) and a stroke of 120 mm (4.7 in). Power for different models was in the 100 kW-190 kW (140 hp-260 hp) range.


Polar modulation is analogous to quadrature modulation in the same way that polar coordinates are analogous to Cartesian coordinates. Quadrature modulation makes use of Cartesian coordinates, x and y. When considering quadrature modulation, the x axis is called the I (in-phase) axis, and the y axis is called the Q (quadrature) axis. Polar modulation makes use of polar coordinates, r (amplitude) and Θ (phase).

The quadrature modulator approach to digital radio transmission requires a linear RF power amplifier, which creates a design conflict between improving power efficiency and maintaining amplifier linearity. Compromising linearity causes degraded signal quality, usually through adjacent-channel degradation, which can be a fundamental factor limiting network performance and capacity. Additional problems with linear RF power amplifiers, including device parametric restrictions, temperature instability, power control accuracy, wideband noise and production yields, are also common. On the other hand, compromising power efficiency increases power consumption (which reduces battery life in handheld devices) and generates more heat.

The issue of linearity in a power amplifier can theoretically be mitigated by requiring that the input signal of the power amplifier be "constant envelope", i.e. contain no amplitude variations. In a polar modulation system, the power amplifier input signal may vary only in phase. Amplitude modulation is then accomplished by directly controlling the gain of the power amplifier through changing or modulating its supply voltage. Thus a polar modulation system allows the use of highly non-linear power amplifier architectures such as Class E and Class F.

In order to create the Polar signal, the phase transfer of the amplifier must be known over at least a 17 dB amplitude range. As the phase transitions from one to another, there will be an amplitude perturbation that can be calculated during the transition as,

P(n) = \sqrt {I^2(n) + Q^2(n)}
where n is the number of samples of I and Q and should be sufficiently large to allow an accurate tracing of the signal. One hundred samples per symbol would be about the lowest number that is workable.

Now that the amplitude change of the signal is known, the phase error introduced by the amplifier at each amplitude change can be used to pre-distort the signal. One simply subtracts the phase error at each amplitude from the modulating I and Q signals.
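The Cartesian-to-polar conversion at the heart of this scheme amounts to computing an amplitude and a phase for each I/Q sample. A minimal sketch, with made-up QPSK-like sample values:

```python
import math

# Convert baseband I/Q samples to the polar (amplitude, phase) form a
# polar-modulation transmitter uses: the phase path drives the PA input
# and the amplitude path modulates the PA supply voltage.
def iq_to_polar(i_samples, q_samples):
    amp = [math.hypot(i, q) for i, q in zip(i_samples, q_samples)]
    phase = [math.atan2(q, i) for i, q in zip(i_samples, q_samples)]
    return amp, phase

# Made-up QPSK-like constellation points for illustration.
I = [1.0, -1.0, -1.0, 1.0]
Q = [1.0, 1.0, -1.0, -1.0]
amp, phase = iq_to_polar(I, Q)
print(amp)                              # constant amplitude sqrt(2)
print([round(p, 4) for p in phase])     # four distinct phases
```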

History
Polar modulation was originally developed by Thomas Edison in his 1874 quadruplex telegraph – this allowed 4 signals to be sent along a pair of lines, 2 in each direction. Sending a signal in each direction had already been accomplished earlier, and Edison found that by combining amplitude and phase modulation (i.e., by polar modulation), he could double this to 4 signals – hence, quadruplex.





One way of inducing or stabilizing G-quadruplex formation, is to introduce a molecule which can bind to the G-quadruplex structure, and a number of ligands, both small molecules and proteins, have been developed which can do so. This has become an increasingly large field of research.

A number of naturally occurring proteins have been identified that selectively bind to G-quadruplexes. These include the helicases implicated in Bloom's and Werner's syndromes and the Saccharomyces cerevisiae protein RAP1. An artificially derived three-zinc-finger protein called Gq1, which is specific for G-quadruplexes, has also been developed, as have specific antibodies.

Cationic porphyrins have been shown to bind intercalatively with G-quadruplexes, as well as the molecule telomestatin.

Quadruplex prediction techniques[edit]
Identifying and predicting sequences which have the capacity to form quadruplexes is an important tool in further understanding of their role. Generally, a simple pattern match is used for searching for possible quadruplex forming sequences: d(G3+N1-7G3+N1-7G3+N1-7G3+), where N is any base (including guanine).[17] This rule has been widely used in on-line algorithms.
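The pattern-match rule above translates directly into a regular expression: four runs of three or more guanines separated by loops of one to seven bases. A minimal sketch (the sample sequence is invented; real prediction tools add further scoring):

```python
import re

# d(G3+ N1-7 G3+ N1-7 G3+ N1-7 G3+) as a regex over a DNA string,
# where N is any base (including guanine).
QUADRUPLEX = re.compile(r"G{3,}\w{1,7}G{3,}\w{1,7}G{3,}\w{1,7}G{3,}")

seq = "ATGGGATGGGTTAGGGCCAGGGTAACC"   # invented test sequence
m = QUADRUPLEX.search(seq)
print(m.group() if m else "no match")  # -> GGGATGGGTTAGGGCCAGGG
```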


In computing, quadruple precision (also commonly shortened to quad precision) is a binary floating-point computer number format that occupies 16 bytes (128 bits) in computer memory and whose precision is about twice that of the 53-bit double-precision format.

This 128 bit quadruple precision is designed not only for applications requiring results in higher than double precision,[1] but also, as a primary function, to allow the computation of double precision results more reliably and accurately by minimising overflow and round-off errors in intermediate calculations and scratch variables: as William Kahan, primary architect of the original IEEE-754 floating point standard noted, "For now the 10-byte Extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16-byte format... That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed." [2]


The four-way valve or four-way cock is a fluid control valve whose body has four ports equally spaced round the valve chamber and the plug has two passages to connect adjacent ports. The plug may be cylindrical or tapered, or a ball.

It has two flow positions as shown, and usually a central position where all ports are closed.

It can be used to isolate and to simultaneously bypass a sampling cylinder installed on a pressurized water line. It is useful to take a fluid sample without affecting the pressure of a hydraulic system and to avoid degassing (no leak, no gas loss or air entry, no external contamination).

It was used to control the flow of steam to the cylinder of early double-acting steam engines, such as those designed by Richard Trevithick. This use of the valve is possibly attributable to Denis Papin.

Because the two "L"-shaped ports in the plug do not interconnect, the four-way valve is sometimes referred to as an "×" port.




The four-way handshake[edit]

The four-way handshake in 802.11i
The four-way handshake is designed so that the access point (or authenticator) and wireless client (or supplicant) can independently prove to each other that they know the PSK/PMK, without ever disclosing the key. Instead of disclosing the key, the access point and client each encrypt messages to each other—messages that can only be decrypted by using the PMK they already share—and if decryption of the messages is successful, this proves knowledge of the PMK. The four-way handshake is critical for protection of the PMK from malicious access points—for example, an attacker's SSID impersonating a real access point—so that the client never has to tell the access point its PMK.

The PMK is designed to last the entire session and should be exposed as little as possible; therefore, keys to encrypt the traffic need to be derived. A four-way handshake is used to establish another key called the Pairwise Transient Key (PTK). The PTK is generated by concatenating the following attributes: PMK, AP nonce (ANonce), STA nonce (SNonce), AP MAC address, and STA MAC address. The product is then put through a pseudo random function. The handshake also yields the GTK (Group Temporal Key), used to decrypt multicast and broadcast traffic.

The actual messages exchanged during the handshake are depicted in the figure and explained below (all messages are sent as EAPOL-Key frames):

The AP sends a nonce-value to the STA (ANonce). The client now has all the attributes to construct the PTK.
The STA sends its own nonce-value (SNonce) to the AP together with a MIC, including authentication, which is really a Message Authentication and Integrity Code (MAIC).
The AP constructs and sends the GTK and a sequence number together with another MIC. This sequence number will be used in the next multicast or broadcast frame, so that the receiving STA can perform basic replay detection.
The STA sends a confirmation to the AP.
The Pairwise Transient Key (64 bytes) is divided into five separate keys:

16 bytes of EAPOL-Key Confirmation Key (KCK)– Used to compute MIC on WPA EAPOL Key message
16 bytes of EAPOL-Key Encryption Key (KEK) - AP uses this key to encrypt additional data sent (in the 'Key Data' field) to the client (for example, the RSN IE or the GTK)
16 bytes of Temporal Key (TK) – Used to encrypt/decrypt Unicast data packets
8 bytes of Michael MIC Authenticator Tx Key – Used to compute MIC on unicast data packets transmitted by the AP
8 bytes of Michael MIC Authenticator Rx Key – Used to compute MIC on unicast data packets transmitted by the station
The Group Temporal Key (32 bytes) is divided into three separate keys:

16 bytes of Group Temporal Encryption Key – used to encrypt/decrypt Multicast and Broadcast data packets
8 bytes of Michael MIC Authenticator Tx Key – used to compute MIC on Multicast and Broadcast packets transmitted by AP
8 bytes of Michael MIC Authenticator Rx Key – currently unused as stations do not send multicast traffic
The Michael MIC Authenticator Tx/Rx Keys in both the PTK and GTK are only used if the network is using TKIP to encrypt the data.
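The PTK derivation and key splitting described above can be sketched in a few lines. This follows the 802.11i PRF construction (HMAC-SHA1 with a running counter) and the TKIP key layout from the text (KCK 16 | KEK 16 | TK 16 | MIC Tx 8 | MIC Rx 8 = 64 bytes); the PMK, MAC addresses and nonces below are dummy values for illustration:

```python
import hmac, hashlib

# 802.11i PRF-512: iterate HMAC-SHA1(PMK, label || 0x00 || data || i)
# until 64 bytes of key material have been produced.
def prf_512(pmk, label, data):
    blob = b""
    i = 0
    while len(blob) < 64:
        blob += hmac.new(pmk, label + b"\x00" + data + bytes([i]),
                         hashlib.sha1).digest()
        i += 1
    return blob[:64]

def derive_ptk(pmk, ap_mac, sta_mac, anonce, snonce):
    # Concatenate min/max of the MAC addresses and nonces, as the text
    # describes, then split the PRF output into the five PTK keys.
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac)
            + min(anonce, snonce) + max(anonce, snonce))
    ptk = prf_512(pmk, b"Pairwise key expansion", data)
    return {"KCK": ptk[:16], "KEK": ptk[16:32], "TK": ptk[32:48],
            "MIC_Tx": ptk[48:56], "MIC_Rx": ptk[56:64]}

pmk = bytes(32)                          # dummy all-zero PMK
keys = derive_ptk(pmk, b"\xaa" * 6, b"\xbb" * 6,
                  b"\x01" * 32, b"\x02" * 32)
print({k: len(v) for k, v in keys.items()})
```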




























Biology chapter

Health-illness continua
Dunn’s High-Level Wellness Grid
composed of two axes
a health axis, which ranges from peak wellness to death
an environmental axis, which ranges from very favorable to very unfavorable
the two axes form four quadrants
high-level wellness in a favorable environment
e.g., a person who implements healthy life-style behaviors and has the biopsychosocialspiritual resources to support this life-style
emergent high-level wellness in an unfavorable environment
e.g., a woman who has the knowledge to implement healthy life-style practices but does not implement adequate self-care practices because of family responsibilities, job demands, or other factors
protected poor health in a favorable environment
e.g., an ill person whose needs are met by the health care system and who has access to appropriate medications, diet, and health care instruction
poor health in an unfavorable environment
e.g., a young child who is starving in a drought ridden country



Germs are found all over the world, in all kinds of places. The four major types of germs are: bacteria, viruses, fungi, and protozoa. They can invade plants, animals, and people, and sometimes they make us sick.

Bacteria (say: BAK-teer-ee-uh) are tiny, one-celled creatures that get nutrients from their environments in order to live. In some cases that environment is a human body. Bacteria can reproduce outside of the body or within the body as they cause infections. Some infections that bacteria can cause include ear infections, sore throats (tonsillitis or strep throat), cavities, and pneumonia (say: new-MO-nyuh).

But not all bacteria are bad. Some bacteria are good for our bodies — they help keep things in balance. Good bacteria live in our intestines and help us use the nutrients in the food we eat and make waste from what's left over. We couldn't make the most of a healthy meal without these important helper germs! Some bacteria are also used by scientists in labs to produce medicines and vaccines (say: VAK-seens).

Viruses (say: VY-rus-iz) need to be inside living cells to grow and reproduce. Most viruses can't survive very long if they're not inside a living thing like a plant, animal, or person. Whatever a virus lives in is called its host. When viruses get inside people's bodies, they can spread and make people sick. Viruses cause chickenpox, measles, flu, and many other diseases. Because some viruses can live for a short time on something like a doorknob or countertop, be sure to wash your hands regularly!

Fungi (say: FUN-guy) are multi-celled (made of many cells), plant-like organisms. Unlike other plants, fungi cannot make their own food from soil, water, and air. Instead, fungi get their nutrition from plants, people, and animals. They love to live in damp, warm places, and many fungi are not dangerous in healthy people. An example of something caused by fungi is athlete's foot, that itchy rash that teens and adults sometimes get between their toes.

Protozoa (say: pro-toh-ZOH-uh) are one-celled organisms that love moisture and often spread diseases through water. Some protozoa cause intestinal infections that lead to diarrhea, nausea, and belly pain.



Since 2001, 40 non-natural amino acids have been added into proteins by creating a unique codon (recoding) and a corresponding transfer RNA:aminoacyl-tRNA synthetase pair to encode them. These amino acids, with diverse physicochemical and biological properties, serve as tools for exploring protein structure and function and for creating novel or enhanced proteins.[42][43]

H. Murakami and M. Sisido have extended some codons to have four and five bases. Steven A. Benner constructed a functional 65th (in vivo) codon.

The fourth is always different. The fifth is ultra transcendent.


A protein contact map is in the form of a quadrant.

A protein contact map represents the distance between all possible amino acid residue pairs of a three-dimensional protein structure using a binary two-dimensional matrix. For two residues i and j, the ij element of the matrix is 1 if the two residues are closer than a predetermined threshold, and 0 otherwise. Various contact definitions have been proposed: the distance between Cα atoms with threshold 6-12 Å; the distance between Cβ atoms with threshold 6-12 Å (Cα is used for glycine); and the distance between the side-chain centers of mass.
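As a sketch, building such a map from Cα coordinates is just a thresholded distance matrix (8 Å here, inside the 6-12 Å range mentioned above; the coordinates are made up):

```python
import math

# Binary contact map: entry (i, j) is 1 when residues i and j are
# closer than the threshold, 0 otherwise.
def contact_map(coords, threshold=8.0):
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = math.dist(coords[i], coords[j])
            cmap[i][j] = 1 if d < threshold else 0
    return cmap

# Made-up Calpha positions spaced 3.8 A apart along a straight chain.
coords = [(0, 0, 0), (3.8, 0, 0), (7.6, 0, 0), (11.4, 0, 0)]
for row in contact_map(coords):
    print(row)
```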


A Ramachandran plot (also known as a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan,[1] is a way to visualize backbone dihedral angles ψ against φ of amino acid residues in protein structure and identify sterically allowed regions for these angles. The figure at left illustrates the definition of the φ and ψ backbone dihedral angles[2] (called φ and φ' by Ramachandran). The ω angle at the peptide bond is normally 180°, since the partial-double-bond character keeps the peptide planar.[3] The figure at top right shows the allowed φ,ψ backbone conformational regions from the Ramachandran et al. 1963 and 1968 hard-sphere calculations: full radius in solid outline, reduced radius in dashed, and relaxed tau (N-Calpha-C) angle in dotted lines.[4] Because dihedral angle values are circular and 0° is the same as 360°, the edges of the Ramachandran plot "wrap" right-to-left and bottom-to-top. For instance, the small strip of allowed values along the lower-left edge of the plot are a continuation of the large, extended-chain region at upper left.

The plot is in the form of a quadrant.

A Ramachandran plot is a four-quadrant plot taught in every biochemistry class that can be used in two somewhat different ways. One is to show in theory which values, or conformations, of the ψ and φ angles are possible for an amino-acid residue in a protein (as at top right). A second is to show the empirical distribution of datapoints observed in a single structure (as at right, here) in usage for structure validation, or else in a database of many structures (as in the lower 3 plots at left). Either case is usually shown against outlines for the theoretically favored regions.

The fourth square of a Ramachandran plot is empty. The fourth square is always different.


The CATH Protein Structure Classification is a semi-automatic, hierarchical classification of protein domains published in 1997 by Christine Orengo, Janet Thornton and their colleagues.[2] CATH shares many broad features with its principal rival, SCOP, however there are also many areas in which the detailed classification differs greatly.

The name CATH is an acronym of the four main levels in the classification.

The four main levels of the CATH hierarchy are as follows:
# Level Description
1 Class the overall secondary-structure content of the domain. (Equivalent to SCOP class)
2 Architecture a large-scale grouping of topologies which share particular structural features
3 Topology high structural similarity but no evidence of homology. (Equivalent to SCOP fold)
4 Homologous superfamily indicative of a demonstrable evolutionary relationship. (Equivalent to SCOP superfamily)
CATH defines four classes: mostly-alpha, mostly-beta, alpha and beta, few secondary structures.

In order to better understand the CATH classification system it is useful to know how it is constructed: much of the work is done by automatic methods, however there are important manual elements to the classification.

The very first step is to separate the proteins into domains. It is difficult to produce an unequivocal definition of a domain and this is one area in which CATH and SCOP differ.

The domains are automatically sorted into classes and clustered on the basis of sequence similarities. These groups form the H levels of the classification. The topology level is formed by structural comparisons of the homologous groups. Finally, the Architecture level is assigned manually.

Class Level classification is done on the basis of 4 criteria:

Secondary structure content;
Secondary structure contacts;
Secondary structure alternation score; and
Percentage of parallel strands.
More detail on this process and the comparison between SCOP, CATH and FSSP can be found in: Hadley & Jones, 1999[3] and Day et al., 2003.[4]



A helix bundle is a small protein fold composed of several alpha helices that are usually nearly parallel or antiparallel to each other.

Contents [hide]
1 Three-helix bundles
2 Four-helix bundles
3 See also
4 References
5 External links
Three-helix bundles[edit]

An example of the three-helix bundle fold, the headpiece domain from the protein villin as expressed in chickens (PDB ID 1QQV).
Three-helix bundles are among the smallest and fastest known cooperatively folding structural domains.[1] The three-helix bundle in the villin headpiece domain is only 36 amino acids long and is a common subject of study in molecular dynamics simulations because its microsecond-scale folding time is within the timescales accessible to simulation.[2][3] The 40-residue HIV accessory protein has a very similar fold and has also been the subject of extensive study.[4] There is no general sequence motif associated with three-helix bundles, so they cannot necessarily be predicted from sequence alone. Three-helix bundles often occur in actin-binding proteins and in DNA-binding proteins.

Four-helix bundles[edit]
Four-helix bundles typically consist of four helices packed in a coiled-coil arrangement with a sterically close-packed hydrophobic core in the center. Pairs of adjacent helices are often additionally stabilized by salt bridges between charged amino acids. The helix axes typically are oriented about 20 degrees from their neighboring helices, a much shallower incline than in the larger helical structure of the globin fold.[5]

The specific topology of the helices depends on the protein: helices that are adjacent in sequence are often antiparallel, although it is also possible to arrange antiparallel links between two pairs of parallel helices. Because dimeric coiled-coils are themselves relatively stable, four-helix bundles can be dimers of coiled-coil pairs, as in the Rop protein. Four-helix bundles can have thermal stabilities above 100 °C. Other examples of four-helix bundles include cytochrome, ferritin, human growth hormone, cytokine,[5] and the Lac repressor C-terminal domain. The four-helix bundle fold has proven an attractive target for de novo protein design, with numerous de novo four-helix bundle proteins having been successfully designed by rational[6] and by combinatorial[7] methods. Although sequence is not conserved among four-helix bundles, sequence patterns tend to mirror those of coiled-coil structures in which every fourth and seventh residue is hydrophobic.


The Gram stain, developed in 1884 by Hans Christian Gram, characterises bacteria based on the structural characteristics of their cell walls.[66] The thick layers of peptidoglycan in the "Gram-positive" cell wall stain purple, while the thin "Gram-negative" cell wall appears pink. By combining morphology and Gram-staining, most bacteria can be classified as belonging to one of four groups (Gram-positive cocci, Gram-positive bacilli, Gram-negative cocci and Gram-negative bacilli). Some organisms are best identified by stains other than the Gram stain, particularly mycobacteria or Nocardia, which show acid-fastness on Ziehl–Neelsen or similar stains.[146] Other organisms may need to be identified by their growth in special media, or by other techniques, such as serology.


In autecological studies, the growth of bacteria (or other microorganisms, as protozoa, microalgae or yeasts) in batch culture can be modeled with four different phases: lag phase (A), log phase or exponential phase (B), stationary phase (C), and death phase (D).[3]

During lag phase, bacteria adapt themselves to growth conditions. It is the period where the individual bacteria are maturing and not yet able to divide. During the lag phase of the bacterial growth cycle, synthesis of RNA, enzymes and other molecules occurs.
The log phase (sometimes called the logarithmic phase or the exponential phase) is a period characterized by cell doubling.[4] The number of new bacteria appearing per unit time is proportional to the present population. If growth is not limited, doubling will continue at a constant rate so both the number of cells and the rate of population increase doubles with each consecutive time period. For this type of exponential growth, plotting the natural logarithm of cell number against time produces a straight line. The slope of this line is the specific growth rate of the organism, which is a measure of the number of divisions per cell per unit time.[4] The actual rate of this growth (i.e. the slope of the line in the figure) depends upon the growth conditions, which affect the frequency of cell division events and the probability of both daughter cells surviving. Under controlled conditions, cyanobacteria can double their population four times a day.[5] Exponential growth cannot continue indefinitely, however, because the medium is soon depleted of nutrients and enriched with wastes.
The stationary phase is often due to a growth-limiting factor such as the depletion of an essential nutrient, and/or the formation of an inhibitory product such as an organic acid. Stationary phase results from a situation in which growth rate and death rate are equal. The number of new cells created is limited by the growth factor and as a result the rate of cell growth matches the rate of cell death. The result is a “smooth,” horizontal linear part of the curve during the stationary phase.
At death phase (decline phase), bacteria die. This could be due to lack of nutrients, a temperature which is too high or low, or the wrong living conditions.
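The log-phase arithmetic above can be sketched directly: with doubling time td, the population is N(t) = N0 · 2^(t/td), and the slope of ln N against t is the specific growth rate μ = ln(2)/td. The starting count and doubling time below are illustrative:

```python
import math

# Exponential (log-phase) growth: the population doubles every td hours.
def population(n0, doubling_time_h, t_h):
    return n0 * 2 ** (t_h / doubling_time_h)

n0, td = 1000, 6.0             # cyanobacteria-like: four doublings/day
print(population(n0, td, 24))  # -> 16000.0 after one day

mu = math.log(2) / td          # specific growth rate, per hour
print(round(mu, 4))
```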


Gram-positive bacteria have a thick mesh-like cell wall made of peptidoglycan (50–90% of cell envelope), and as a result are stained purple by crystal violet, whereas gram-negative bacteria have a thinner layer (10% of cell envelope), so do not retain the purple stain and are counter-stained pink by the Safranin. There are four basic steps of the Gram stain:

Applying a primary stain (crystal violet) to a heat-fixed smear of a bacterial culture. Heat fixation kills some bacteria but is mostly used to affix the bacteria to the slide so that they don't rinse out during the staining procedure.
The addition of iodide, which binds to crystal violet and traps it in the cell,
Rapid decolorization with ethanol or acetone, and
Counterstaining with safranin.[9] Carbol fuchsin is sometimes substituted for safranin since it more intensely stains anaerobic bacteria, but it is less commonly used as a counterstain.[10]


Perhaps the most recognizable extracellular bacterial cell structures are flagella. Flagella are whip-like structures protruding from the bacterial cell wall and are responsible for bacterial motility (i.e. movement). The arrangement of flagella about the bacterial cell is unique to the species observed. Common forms include:

Monotrichous - Single flagellum
Lophotrichous - A tuft of flagella found at one cell pole
Amphitrichous - Single flagellum found at each of two opposite poles
Peritrichous - Multiple flagella found at several locations about the cell
The bacterial flagellum consists of three basic components: a whip-like filament, a motor complex, and a hook that connects them. The filament is approximately 20 nm in diameter and consists of several protofilaments, each made up of thousands of flagellin subunits. The bundle is held together by a cap and may or may not be encapsulated. The motor complex consists of a series of rings anchoring the flagellum in the inner and outer membranes, followed by a proton-driven motor that drives rotational movement in the filament.



A quadrat is a plot used in ecology and geography to isolate a standard unit of area for study of the distribution of an item over a large area. Though traditionally square, modern quadrats can be rectangular, circular, irregular, etc.[1][2] The quadrat is suitable for sampling plants, slow-moving animals (such as millipedes and insects), and some aquatic organisms.

When an ecologist wants to know how many organisms there are in a particular habitat, it would not be feasible to count them all. Instead, he or she would be forced to count a smaller representative part of the population, called a sample. Sampling of plants or animals that do not move much (such as snails) can be done using a sampling square called a quadrat. A suitable size of quadrat depends on the size of the organisms being sampled. For example, to count plants growing on a school field, one could use a quadrat with sides 0.5 or 1 meter in length.

It is important that sampling in an area is carried out at random, to avoid bias. For example, if you were sampling from a school field but for convenience only placed quadrats next to a path, this might not give a sample that was representative of the whole field. It would be an unrepresentative, or biased, sample. One way to sample randomly is to place the quadrats at random coordinates on a numbered grid.
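The random-grid sampling procedure above amounts to counting organisms inside a handful of randomly placed squares and scaling the mean density up to the whole area. The sketch below is a minimal illustration; the function and parameter names are invented for this example.

```python
import random

def estimate_population(field, quadrat_size, n_quadrats, field_w, field_h, rng=random):
    """Estimate a total population by counting organisms in randomly placed quadrats.

    `field` maps (x, y) grid-cell coordinates to organism counts; each quadrat
    covers quadrat_size x quadrat_size cells.
    """
    counted = 0
    for _ in range(n_quadrats):
        # Pick a random grid coordinate for the quadrat's corner (kept inside the field).
        x0 = rng.randrange(field_w - quadrat_size + 1)
        y0 = rng.randrange(field_h - quadrat_size + 1)
        counted += sum(field.get((x0 + dx, y0 + dy), 0)
                       for dx in range(quadrat_size)
                       for dy in range(quadrat_size))
    # Scale the mean density per cell up to the whole field.
    mean_per_cell = counted / (n_quadrats * quadrat_size ** 2)
    return mean_per_cell * (field_w * field_h)

# A uniform field of one plant per cell: any sample yields the exact total.
field = {(x, y): 1 for x in range(10) for y in range(10)}
print(estimate_population(field, 2, 5, 10, 10))  # 100.0
```

On a patchy (non-uniform) field the estimate fluctuates from sample to sample, which is exactly why random placement, rather than convenient placement next to a path, matters.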

Long-term studies may require that the same quadrats be revisited months or even years after initial sampling. Methods of relocating the precise area of study vary widely in accuracy, and include measurement from nearby permanent markers, use of total station theodolites, consumer-grade GPS, and differential GPS.[3]


The Four Stages or Four Levels are from the Traditional Chinese medicine book Discussion of Warm Diseases by Ye Tianshi,[1][2] who lived from 1667 to 1746.

The stages, in order, range from surface (or "light") sickness to internal (or "deep") death.

Wei level
This level is treated by releasing the exterior (diaphoresis)
Wind-heat
Summer-heat
Damp-heat
Dry-heat
Qi level
The Qi level is treated by dispelling heat and promoting body fluids
Lung heat (heat in chest and diaphragm)
Stomach heat
Intestines dry-heat
Gall-bladder heat (heat in the lesser yang)
Stomach and Spleen damp heat
Ying level
Ying (Nutritive qi) is treated by cooling fire and tonifying the yin
Heat in Nutritive qi portion
Heat in Pericardium
Blood level
The Blood level is treated by tonifying the yin and qi and stopping bleeding.
Heat Victorious moves blood
Heat victorious stirs wind
Empty wind agitates in the interior
Collapse of yin
Collapse of yang
Separation of yin and yang (Death)




Johnson (1980) explores the emotional progression of the addict’s response to alcohol. He looks at this in four phases. The first two are considered “normal” drinking and the last two are viewed as "typical" alcoholic drinking.[93][94] Johnson's four phases consist of:

Learning the mood swing. A person is introduced to alcohol (in some cultures this can happen at a relatively young age), and the person enjoys the happy feeling it produces. At this stage there is no emotional cost.
Seeking the mood swing. A person will drink to regain that feeling of euphoria experienced in phase 1; the drinking will increase as more intoxication is required to achieve the same effect. Again at this stage, there are no significant consequences.
At the third stage there are physical and social consequences, i.e., hangovers, family problems, work problems, etc. A person will continue to drink excessively, disregarding the problems.
The fourth stage can be detrimental, as Johnson cites it as a risk for premature death. As a person now drinks to feel normal, they block out the feelings of overwhelming guilt, remorse, anxiety, and shame they experience when sober.[95]


The definitions of the four pressure ulcer stages are revised periodically by the National Pressure Ulcer Advisory Panel (NPUAP)[2] in the United States and the European Pressure Ulcer Advisory Panel (EPUAP) in Europe.[3] Briefly, they are as follows:

Stage I: Intact skin with non-blanchable redness of a localized area usually over a bony prominence. Darkly pigmented skin may not have visible blanching; its color may differ from the surrounding area. The area differs in characteristics such as thickness and temperature as compared to adjacent tissue. Stage I may be difficult to detect in individuals with dark skin tones. May indicate "at risk" persons (a heralding sign of risk).
Stage II: Partial thickness loss of dermis presenting as a shallow open ulcer with a red pink wound bed, without slough. May also present as an intact or open/ruptured serum-filled blister. Presents as a shiny or dry shallow ulcer without slough or bruising. This stage should not be used to describe skin tears, tape burns, perineal dermatitis, maceration or excoriation.
Stage III: Full thickness tissue loss. Subcutaneous fat may be visible but bone, tendon or muscle are not exposed. Slough may be present but does not obscure the depth of tissue loss. May include undermining and tunneling. The depth of a stage III pressure ulcer varies by anatomical location. The bridge of the nose, ear, occiput and malleolus do not have (adipose) subcutaneous tissue and stage III ulcers can be shallow. In contrast, areas of significant adiposity can develop extremely deep stage III pressure ulcers. Bone/tendon is not visible or directly palpable.
Stage IV: Full thickness tissue loss with exposed bone, tendon or muscle. Slough or eschar may be present on some parts of the wound bed. Often include undermining and tunneling. The depth of a stage IV pressure ulcer varies by anatomical location. The bridge of the nose, ear, occiput and malleolus do not have (adipose) subcutaneous tissue and these ulcers can be shallow. Stage IV ulcers can extend into muscle and/or supporting structures (e.g., fascia, tendon or joint capsule) making osteomyelitis likely to occur. Exposed bone/tendon is visible or directly palpable. In 2012, the NPUAP stated that pressure ulcers with exposed cartilage are also classified as a stage IV.
Unstageable: Full thickness tissue loss in which actual depth of the ulcer is completely obscured by slough (yellow, tan, gray, green or brown) and/or eschar (tan, brown or black) in the wound bed. Until enough slough and/or eschar is removed to expose the base of the wound, the true depth, and therefore stage, cannot be determined. Stable (dry, adherent, intact without erythema or fluctuance) eschar on the heels is normally protective and should not be removed.
Suspected Deep Tissue Injury: A purple or maroon localized area of discolored intact skin or blood-filled blister due to damage of underlying soft tissue from pressure and/or shear. The area may be preceded by tissue that is painful, firm, mushy, boggy, warmer or cooler as compared to adjacent tissue. A deep tissue injury may be difficult to detect in individuals with dark skin tones. Evolution may include a thin blister over a dark wound bed. The wound may further evolve and become covered by thin eschar. Evolution may be rapid exposing additional layers of tissue even with optimal treatment.
Healing time is prolonged for higher stage ulcers. While about 75% of Stage II ulcers heal within eight weeks, only 62% of Stage IV pressure ulcers ever heal, and only 52% heal within one year. It is important to note that pressure ulcers do not regress in stage as they heal. A pressure ulcer that is becoming shallower with healing is described in terms of its original deepest depth (e.g., healing Stage II pressure ulcer).



Fertilization occurs once the sperm successfully enters the ovum's membrane. The genetic material of the sperm and egg then combines to form a single cell, called a zygote, and the germinal stage of prenatal development commences.[1] The germinal stage refers to the time from fertilization, through the development of the early embryo, up until implantation. The germinal stage is over at about 10 days of gestation.[2]

The zygote contains a full complement of genetic material and develops into the embryo. Briefly, embryonic development has four stages: the morula stage, the blastula stage, the gastrula stage, and the neurula stage. Prior to implantation, the embryo remains in a protein shell, the zona pellucida, and undergoes a series of cell divisions, called mitosis. A week after fertilization the embryo still has not grown in size, but hatches out of the zona pellucida and adheres to the lining of the mother's uterus. This induces a decidual reaction, wherein the uterine cells proliferate and surround the embryo, causing it to become embedded inside the uterine tissue. The embryo, meanwhile, proliferates and develops both into embryonic and extra-embryonic tissue, the latter forming the fetal membranes and the placenta. In humans, the embryo is called a fetus in the later stages of prenatal development. The transition from embryo to fetus is arbitrarily defined as occurring 8 weeks after fertilization. In comparison to the embryo, the fetus has more recognizable external features and a set of progressively developing internal organs. A nearly identical process occurs in other species.


The estrous cycle (also oestrous cycle; derived from Latin oestrus and originally from Greek οἶστρος meaning sexual desire) comprises the recurring physiologic changes that are induced by reproductive hormones in most mammalian therian females. Estrous cycles start after sexual maturity in females and are interrupted by anestrous phases or pregnancies. Typically, estrous cycles continue until death. Some animals may display bloody vaginal discharge, often mistaken for menstruation, also called a "period".


Four phases
Overview of the mammal estrous cycle

Proestrus
One or several follicles of the ovary start to grow. Their number is species-specific. Typically this phase can last as little as one day or as long as three weeks, depending on the species. Under the influence of estrogen the lining of the uterus (endometrium) starts to develop. Some animals may experience vaginal secretions that could be bloody. The female is not yet sexually receptive; the old corpus luteum degenerates; the uterus and the vagina become distended and filled with fluid, become contractile and secrete a sanguineous fluid; the vaginal epithelium proliferates and a vaginal smear shows a large number of non-cornified nucleated epithelial cells.

Estrus
"Estrus" redirects here. For other uses, see Estrus (disambiguation).
Estrus refers to the phase when the female is sexually receptive ("in heat"). Under regulation by gonadotropic hormones, ovarian follicles mature and estrogen secretions exert their biggest influence. The female then exhibits sexually receptive behavior,[9] a situation that may be signaled by visible physiologic changes. A signal trait of estrus is the lordosis reflex, in which the animal spontaneously elevates her hindquarters.

In some species, the labia are reddened. Ovulation may occur spontaneously in some species.

Metestrus or diestrus
This phase is characterized by the activity of the corpus luteum, which produces progesterone. The signs of estrogen stimulation subside and the corpus luteum starts to form. The uterine lining begins to appear. In the absence of pregnancy the diestrus phase (also termed pseudo-pregnancy) terminates with the regression of the corpus luteum. The lining in the uterus is not shed, but is reorganized for the next cycle.



The OODA Loop
Boyd's key concept was that of the decision cycle or OODA loop, the process by which an entity (either an individual or an organization) reacts to an event. According to this idea, the key to victory is to be able to create situations wherein one can make appropriate decisions more quickly than one's opponent. The construct was originally a theory of achieving success in air-to-air combat, developed out of Boyd's Energy-Maneuverability theory and his observations on air combat between MiG-15s and North American F-86 Sabres in Korea. Harry Hillaker (chief designer of the F-16) said of the OODA theory, "Time is the dominant parameter. The pilot who goes through the OODA cycle in the shortest time prevails because his opponent is caught responding to situations that have already changed."

John Boyd during the Korean War
Boyd hypothesized that all intelligent organisms and organizations undergo a continuous cycle of interaction with their environment. Boyd breaks this cycle down to four interrelated and overlapping processes through which one cycles continuously:

Observation: the collection of data by means of the senses
Orientation: the analysis and synthesis of data to form one's current mental perspective
Decision: the determination of a course of action based on one's current mental perspective
Action: the physical playing-out of decisions
Of course, while this is taking place, the situation may be changing. It is sometimes necessary to cancel a planned action in order to meet the changes. This decision cycle is thus known as the OODA loop. Boyd emphasized that this decision cycle is the central mechanism enabling adaptation (apart from natural selection) and is therefore critical to survival.
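The four processes above can be sketched as a simple loop against a changing environment. The function names and the toy steering example below are illustrative, not drawn from Boyd's writings.

```python
def run_ooda(environment, observe, orient, decide, act, steps):
    """Repeatedly cycle Observe -> Orient -> Decide -> Act against a mutable environment."""
    for _ in range(steps):
        data = observe(environment)      # Observation: collect data via the "senses"
        picture = orient(data)           # Orientation: form a current mental perspective
        action = decide(picture)         # Decision: choose a course of action
        act(environment, action)         # Action: play the decision out physically
    return environment

# Toy usage: steer a position toward a target by re-observing on every cycle.
env = {"position": 0, "target": 10}
run_ooda(
    env,
    observe=lambda e: (e["position"], e["target"]),
    orient=lambda obs: obs[1] - obs[0],                 # error = target - position
    decide=lambda err: 1 if err > 0 else (-1 if err < 0 else 0),
    act=lambda e, step: e.update(position=e["position"] + step),
    steps=10,
)
print(env["position"])  # 10
```

Because each cycle re-observes the environment before deciding, a plan that has become stale is automatically replaced, which is the point Boyd makes about canceling planned actions when the situation changes.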

Boyd theorized that large organizations such as corporations, governments, or militaries possessed a hierarchy of OODA loops at tactical, grand-tactical (operational art), and strategic levels. In addition, he stated that most effective organizations have a highly decentralized chain of command that utilizes objective-driven orders, or directive control, rather than method-driven orders in order to harness the mental capacity and creative abilities of individual commanders at each level. In 2003, this power to the edge concept took the form of a DOD publication "Power to the Edge: Command ... Control ... in the Information Age" by Dr. David S. Alberts and Richard E. Hayes. Boyd argued that such a structure creates a flexible "organic whole" that is quicker to adapt to rapidly changing situations. He noted, however, that any such highly decentralized organization would necessitate a high degree of mutual trust and a common outlook that came from prior shared experiences. Headquarters needs to know that the troops are perfectly capable of forming a good plan for taking a specific objective, and the troops need to know that Headquarters does not direct them to achieve certain objectives without good reason.

In 2007, strategy writer Robert Greene discussed the loop in a post called "OODA and You".[17] He insisted that it was "deeply relevant to any kind of competitive environment: business, politics, sports, even the struggle of organisms to survive", and claimed to have been initially "struck by its brilliance".

The OODA Loop has since been used as the core for a theory of litigation strategy that unifies the use of cognitive science and game theory to shape the actions of witnesses and opposing counsel.[18]



The heuristic scheme Parsons used to analyze systems and subsystems is called the "AGIL paradigm" or "AGIL scheme".[182] To survive or maintain equilibrium with respect to its environment, any system must to some degree adapt to that environment (Adaptation), attain its goals (Goal Attainment), integrate its components (Integration), and maintain its latent pattern (Latency, or Pattern Maintenance), a sort of cultural template. These concepts can be abbreviated as AGIL, and are called the system's functional imperatives. It is important to understand that Parsons' AGIL model is an analytical scheme for the sake of theoretical "production"; it is not a simple "copy" or a direct historical "summary" of empirical reality. Nor does the scheme by itself explain "anything," any more than the periodic table in the natural sciences explains anything in and of itself. The AGIL scheme is a tool for explanations, and is no better than the quality of the theories and explanations by which it is processed.

In the case of the analysis of a social action system, the AGIL Paradigm, according to Parsons, yields four interrelated and interpenetrating subsystems: the behavioral systems of its members (A), the personality systems of those members (G), the social system (as such) (I) and the cultural system of that society (L). To analyze a society as a social system (the I subsystem of action), people are posited to enact roles associated with positions. These positions and roles become differentiated to some extent and in a modern society are associated with things such as occupational, political, judicial and educational roles.

Considering the interrelation of these specialized roles, as well as functionally differentiated collectivities (e.g., firms, political parties), the society can be analyzed as a complex system of interrelated functional subsystems, namely:

The pure AGIL model for all living systems:

(A) Adaptation.
(G) Goal Attainment.
(I) Integration.
(L) Pattern maintenance. (L stands for "latent function".)
The Social system level:

The economy — social adaptation to its action and non-action environmental systems
The polity — collective goal attainment
The societal community — the integration of its diverse social components
The fiduciary system — processes that function to reproduce historical culture in its "direct" social embeddedness.
The General Action Level:

The behavioral organism (or system); in later versions, the focus for generalized "intelligence."
The personality system.
The social system.
The cultural system. (See cultural level).
The cultural level:

Cognitive symbolization.
Expressive symbolization.
Evaluative symbolization. (Sometimes called: moral-evaluative symbolization).
Constitutive symbolization.
The Generalized Symbolic media:

Social System level:

(A) Economic system: Money.
(G) Political system: Political power.
(I) The Societal Community: Influence.
(L) The Fiduciary system (cultural tradition): Value-commitment.


Parsons contributed to the field of social evolutionism and neoevolutionism. He divided evolution into four sub-processes:

differentiation, which creates functional subsystems of the main system, as discussed above;
adaptation, where those systems evolve into more efficient versions;
inclusion of elements previously excluded from the given systems;
generalization of values, increasing the legitimization of the increasingly complex system.

AGIL is an acronym formed from the initials of each of the four systemic necessities. The AGIL system is considered a cybernetic hierarchy and generally has the order L-I-G-A when viewed from an "informational" point of view; this implies that the L function "controls" or defines the I function (and the I the G, and so on), approximately in the way a computer game program "defines" the game. The program does not "determine" the game (whose actual outcome depends on the input of the player; this is what Parsons in a sense called the voluntaristic aspect of action), but it "determines" the logical parameters of the game, which lie implicit in the game's concrete design and rules. In this way, Parsons would say that culture does not determine the social system but "defines" it. The AGIL system also has an energy (or "conditional") side, which runs A-G-I-L, so that the adaptive level stands at the top of the cybernetic hierarchy from the energy or "conditional" point of view. However, within these two reverse sequences of the hierarchy, Parsons maintained that in the long historical perspective a system high in information (that is, one following the L-I-G-A sequence) would tend to prevail over a system high in energy. For example, in the human body the DNA is the informational code that tends to control "the body," which is high in energy. Within the action system, Parsons maintained that it was culture that was highest in information and thus in cybernetic control over the other components of the action system, as well as the social system. However, it is important to remember that all action systems (including social systems) always depend on the (historically specific) equilibrium of the overall forces of information and condition, which both shape the outcome of the system.
It is also important to highlight that the AGIL functions do not "guarantee" any historical system's survival; rather, they specify the minimum conditions under which societies or action systems can in principle survive. Whether a concrete action system survives or not is a sheer historical question.

Adaptation, or the capacity of society to interact with the environment. This includes, among other things, gathering resources and producing commodities for social redistribution.
Goal Attainment, or the capability to set goals for the future and make decisions accordingly. Political resolutions and societal objectives are part of this necessity.
Integration, or the harmonization of the entire society, is the demand that the values and norms of society be solid and sufficiently convergent. This requires, for example, the religious system to be fairly consistent, and at an even more basic level, a common language.
Latency, or latent pattern maintenance, challenges society to maintain the integrative elements of the integration requirement above. This means institutions like family and school, which mediate belief systems and values between an older generation and its successor.[2]
These four functions aim to be intuitive. For example a tribal system of hunter-gatherers needs to gather food from the external world by hunting animals and gathering other goods. They need to have a set of goals and a system to make decisions about such things as when to migrate to better hunting grounds. The tribe also needs to have a common belief system that enforces actions and decisions as the community sees fit. Finally there needs to be some kind of educational system to pass on hunting and gathering skills and the common belief system. If these prerequisites are met, the tribe can sustain its existence.



In molecular biology, G-quadruplexes (also known as G4-DNA) are structures formed in nucleic acids by sequences that are rich in guanine. Four guanine bases can associate through Hoogsteen hydrogen bonding to form a square planar structure called a guanine tetrad, and two or more guanine tetrads can stack on top of each other to form a G-quadruplex. The quadruplex structure is further stabilized by the presence of a cation, especially potassium, which sits in a central channel between each pair of tetrads.[1] They can be formed of DNA, RNA, LNA, and PNA, and may be intramolecular, bimolecular, or tetramolecular.[2] Depending on the direction of the strands or parts of a strand that form the tetrads, structures may be described as parallel or antiparallel.



The length of the nucleic acid sequences involved in tetrad formation determines how the quadruplex folds. Short sequences, consisting of only a single contiguous run of three or more guanine bases, require four individual strands to form a quadruplex. Such a quadruplex is described as tetramolecular, reflecting the requirement of four separate strands. Longer sequences, which contain two contiguous runs of three or more guanine bases, where the guanine regions are separated by one or more bases, only require two such sequences to provide enough guanine bases to form a quadruplex. These structures, formed from two separate G-rich strands, are termed bimolecular quadruplexes. Finally, sequences which contain four distinct runs of guanine bases can form stable quadruplex structures by themselves, and a quadruplex formed entirely from a single strand is called an intramolecular quadruplex.[3]

Depending on how the individual runs of guanine bases are arranged in a bimolecular or intramolecular quadruplex, a quadruplex can adopt one of a number of topologies with varying loop configurations.[4] If the 5'–3' direction of all the strands is the same, the quadruplex is termed parallel; that is, all the strands of DNA proceed in the same direction. For intramolecular quadruplexes, this means that any loop regions present must be of the edge, or propeller, type, positioned to the sides of the quadruplex. If one or more of the runs of guanine bases has a 5'–3' direction opposite to the other runs, the quadruplex is said to have adopted an antiparallel topology. The loops joining runs of guanine bases in intramolecular antiparallel quadruplexes are either diagonal (joining two diagonally opposite runs of guanine bases) or edge-type loops, which join two adjacent runs of guanines.


Telomeric repeats in a variety of organisms have been shown to form these structures in vitro, and subsequently they have also been shown to form in vivo.[5][6] The human telomeric repeat (which is the same for all vertebrates) consists of many repeats of the sequence d(GGTTAG), and the quadruplexes formed by this structure have been well studied by NMR and X-ray crystal structure determination. The formation of these quadruplexes in telomeres has been shown to decrease the activity of the enzyme telomerase, which is responsible for maintaining length of telomeres and is involved in around 85% of all cancers. This is an active target of drug discovery.

Non-telomeric quadruplexes[edit]
Recently, there has been increasing interest in quadruplexes in locations other than at the telomere. For example the proto-oncogene c-myc was shown to form a quadruplex in a nuclease hypersensitive region critical for gene activity.[7][8] Since then, many other genes have been shown to have G-quadruplexes in their promoter regions, including the chicken β-globin gene, human ubiquitin-ligase RFP2 and the proto-oncogenes c-kit, bcl-2, VEGF, H-ras and N-ras. This list is ever-increasing.

Genome-wide surveys based on a quadruplex folding rule have been performed, identifying 376,000 putative quadruplex sequences (PQS) in the human genome, although probably not all of these form in vivo.[9] A similar study has identified putative G-quadruplexes in prokaryotes.[10] There are several possible models for how quadruplexes could influence gene activity, either by upregulation or downregulation. In one model, shown below, G-quadruplex formation in or near a promoter blocks transcription of the gene, and hence deactivates it. In another model, a quadruplex formed on the non-coding DNA strand helps to maintain an open conformation of the coding DNA strand and enhances expression of the respective gene.
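Folding rules of this kind are typically implemented as a pattern match over the genome sequence. The sketch below uses one common formulation (four runs of three or more guanines separated by loops of one to seven bases); the exact run and loop lengths vary between studies and may differ from the rule used in the cited surveys.

```python
import re

# One common formulation of the quadruplex folding rule: four runs of three or
# more guanines separated by loops of 1-7 bases. (Illustrative, not the exact
# rule from any particular survey.)
PQS_PATTERN = re.compile(r"G{3,}\w{1,7}G{3,}\w{1,7}G{3,}\w{1,7}G{3,}")

def find_pqs(sequence):
    """Return putative quadruplex sequences (PQS) found in a DNA string."""
    return [m.group() for m in PQS_PATTERN.finditer(sequence.upper())]

# The human telomeric repeat contains runs of three guanines with short loops,
# so it satisfies the rule:
telomere = "TTAGGG" * 6
print(find_pqs(telomere))
```

Note that `finditer` reports non-overlapping matches only, so a long G-rich region is reported as a single PQS even if it could fold into several alternative quadruplexes.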

Model for quadruplex-mediated down-regulation of gene expression[11]
Quadruplex function[edit]
Nucleic acid quadruplexes have been described as "structures in search of a function",[12] as for many years there was minimal evidence pointing towards a biological role for these structures. It has been suggested that quadruplex formation plays a role in immunoglobulin heavy chain switching.[13] As cells have evolved mechanisms for resolving (i.e., unwinding) quadruplexes that form, quadruplex formation may be potentially damaging for a cell; for example, the helicases WRN and Bloom syndrome protein have a high affinity for resolving G4 DNA.[14] More recently, many studies have implicated quadruplexes in both positive and negative transcriptional regulation, and in allowing programmed recombination of immunoglobulin heavy chain genes and the pilin antigenic variation system of the pathogenic Neisseria.[15] The roles of quadruplex structures in translation control are not as well explored. The direct visualization of quadruplex structures in human cells[16] has provided an important confirmation of their existence. The potential positive and negative roles of quadruplexes in telomere replication and function remain controversial.



The 2010 Sharm el-Sheikh shark attacks were a series of attacks by sharks on swimmers off the Red Sea resort of Sharm el-Sheikh, Egypt. On 1 December 2010, three Russians and one Ukrainian were seriously injured within minutes of each other while wading or snorkeling near the shoreline. These were the first four squares, and the fourth was different. After this the beaches were closed for a long time, and authorities killed all of the sharks that they could find in the area. Then the beaches were reopened, and right when they were reopened, on 5 December 2010, a German woman was killed. The attacks were described as "unprecedented" by shark experts; it was seen as bizarre, as nothing like that had ever happened before. The fifth is always ultra transcendent; the fourth is always different.

In response to the attacks, beaches in the popular tourist resort were closed for over a week, dozens of sharks were captured and killed, and the local government issued new rules banning shark feeding and restricting swimming. A variety of theories were put forward to explain the attacks. By late December 2010, the most plausible theory to emerge was that the dumping of sheep carcasses in the Red Sea by a livestock transport during the Islamic festival of Eid al-Adha had attracted the sharks to the shore. Other theories focused on overfishing in the Red Sea or on the illegal or inadvertent feeding of sharks or smaller fish close to the shore, which produced scents that attracted more sharks.

The attacks fit the quadrant model pattern.

The attacks also sparked conspiracy theories about possible Israeli involvement. Egyptian television broadcast claims that Israeli divers captured a shark with a GPS unit planted on its back. Describing the theory as "sad", Professor Mahmoud Hanafy of the Suez Canal University pointed out that GPS devices are used by marine biologists to track sharks, not to remote-control them. Governor Mohamed Abdel Fadil Shousha himself ultimately said he thought the dumping of sheep carcasses during the Islamic festival of Eid al-Adha on 16 November was the most likely explanation.

These are the two very famous shark attack sprees and they fit the quadrant model pattern.







Excerpt from my first book "The Quadrant Model of Reality"
Anthropology, a part of sociology, is the study of humans. Anthropologists enumerate four levels of society, noting that they are distinct levels requiring a sort of phase shift to transition from one level to the next. Societies throughout the world undergo transformations along these lines. The four kinds of society are
*Square one: Bands
*Square two: Tribes
*Square three: Chiefdoms
*Square four: States.
Bands consist of small kin groups, no larger than an extended family or clan. The first square is conservative and related to the family. Bands are egalitarian and ideal. Idealists tend to like the notion of egalitarianism and sharing. They are sensitive, as is the nature of the first square. Tribes are societies based largely on kinship, especially corporate descent groups. The second square is the most related to family. Tribes are not completely egalitarian, and are not extremely hierarchical. There are usually two classes, the chiefs and the commoners. Guardians are the second square and they are more normal and less doers. The second square is kind of mediocre and linked to tribes.
Chiefdoms, led by chiefs, occur when multiple tribes join together. In chiefdoms there is more inequality, probably due in part to the fact that there is more genetic diversity. Chiefdoms are also more violent; the third square is always very violent. In the Bible King David was apparently a chief. Anthropologists point out that in cultural stories of mythic heroes the chiefs are often related to gods, or depicted as somewhat distant or separate from the people--the chiefs often kill their subjects. King David is depicted as possibly not completely related to Israel. Rabbis say that he was possibly the offspring of an Israelite mother and a Philistine father. David reportedly killed many people, including Israelites. A single lineage/family elite usually rules a chiefdom. These societies are divided among kings, nobles, freemen, and serfs. The Artisan is a dreamer and known to be violent in the search for “more”. The consequence of this dynamic is stratification.
A state is formed when Chiefdoms get together to create complex social hierarchies, organization, and institutional governments. This is like the Rational. States have elements of all of the previous three types of society, but transcend them in complexity and effectiveness--the nature of the fourth square.
Levels of Society
square 1: Bands
Square 2: Tribes
square 3: Chiefdoms
Square 4: States




























Excerpt from my book Quadrant Model of Reality

The Earth's atmosphere, consisting of four layers with a possible fifth, fits the quadrant model pattern. What matters is not the number but the pattern in which the layers occur, which is revealed by the qualities of each.
*Square one: troposphere, where the Earth's weather occurs and the biosphere is found
*Square two: stratosphere, the protective layer of the atmosphere containing the ozone layer, which protects the Earth from UV radiation--the second square is always protection and homeostasis
*Square three: mesosphere. Just below the mesopause, the air is so cold that even the very scarce water vapor at this altitude can be sublimated into polar-mesospheric noctilucent clouds made of ice crystals. These clouds are luminous and appear solid, and therefore have the quality of doers--the third square is always solid and associated with doing
*Square four: thermosphere, an extremely low-density mass of air molecules called rarefied air, leaving the impression that nothing is there--the fourth is always different from the previous three.

Scientists agree that there are four layers of the atmosphere. Some argue that there is a fifth, but the fifth square in the quadrant model is always questionable. Regardless, the fourth is always different, and the nature of the fourth points to the nature of the fifth.
*Square five: exosphere; in addition to the four obvious layers of the atmosphere there is possibly a fifth, reflecting the qualities of the fourth--like the thermosphere, the exosphere is composed of rarefied air. The fourth always points to the fifth.






Psychology chapter


Decision making matrix - 4 Quadrant Model

The dichotomy is between meeting inner psychological conditions and meeting external conditions.
Our decisions about what we want to accomplish are strongly influenced by what we believe we can accomplish and what we are ready to commit our energies to seriously pursue. Our decisions are successful when they fulfill all the essential conditions.

There are four types of decisions:
1. Decisions that are directly fulfilled.
2. Decisions that are difficult to begin acting upon, but are ultimately accomplished.
3. Decisions that get off to a good start, but ultimately fail.
4. Decisions that are a failure from start to finish.
These four situations can be graphically represented by the four quadrants below.

Quadrant I: This quadrant applies to instances in which all the essential requirements for fulfillment of the decision are met. These requirements consist of external conditions as well as inner psychological conditions. If you decide to climb Mt. Everest, learning the essential skills of mountain-climbing is an external requirement. Having the inner courage, fortitude and determination to face the challenge and persevere are internal requirements. Where both inner and outer conditions are fulfilled, success is assured and it will come readily.

Quadrant II: This quadrant applies to instances in which all the essential inner requirements are met, but the outer requirements are lacking. Such decisions get off to a slow start, encounter many obstacles on the way, but ultimately end in success. This is in consonance with the teaching of The Secret that psychological conditions are of paramount importance, not external realities. When you choose a goal for which you are not outwardly prepared and qualified, it takes time for the inner aspiration to bring about the necessary conditions.

Quadrant III: This quadrant applies to instances in which all the essential external requirements are met, but the inner requirements are lacking. Such decisions usually get off to a good start, but ultimately end in failure. A person with sufficient knowledge, skill and experience in mountain-climbing, connections with people who organize treks and sufficient funds to outfit themselves properly may find it relatively easy to commence preparations for the conquest of Mt. Everest. But if they do not also fulfill the inner conditions – if they lack the emotional fortitude, the self-confidence, the faith in other people, the patience to wait for suitable conditions, etc. – their enterprise is likely to stall even in the formative stage and even the attempt may never be made.

Quadrant IV: This quadrant applies to instances in which both the essential inner and outer requirements are lacking. It is like the longing for a house of someone who is intimidated inwardly by that suggestion and outwardly can never complete all the requirements.

When The Secret says that anyone can achieve anything, it only means that wherever a person’s decision falls in the 2nd, 3rd, or 4th quadrants, following the methodology of The Secret will enable the person to move into the first quadrant, where failure is unknown. Mental concentration on the goal, releasing one’s emotional energy to achieve it, attuning oneself to the external environment, feeling cheerful, having faith and expressing gratitude are powerful methods for advancing from any of the lower quadrants by moving inwardly to make life respond.
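As a loose illustration (not part of the original text), the four quadrants reduce to a tiny classifier over the two conditions; the names `Quadrant` and `classify_decision` are my own:

```python
# Hypothetical sketch of the decision-making matrix: two boolean
# conditions (inner psychological readiness, external readiness)
# determine which of the four quadrants a decision falls into.
from enum import Enum

class Quadrant(Enum):
    I = "both met: ready success"
    II = "inner met, outer lacking: slow start, eventual success"
    III = "outer met, inner lacking: good start, eventual failure"
    IV = "neither met: failure from start to finish"

def classify_decision(inner_met: bool, outer_met: bool) -> Quadrant:
    if inner_met and outer_met:
        return Quadrant.I
    if inner_met:
        return Quadrant.II
    if outer_met:
        return Quadrant.III
    return Quadrant.IV
```

For example, the aspiring Everest climber with courage but no mountaineering skills would land in Quadrant II (inner conditions met, outer lacking).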


The A-B-C-D model is a classic cognitive behavioral therapy (CBT) technique developed by one of CBT’s founders, Albert Ellis. When applied effectively, this can help address a variety of emotional difficulties, including anger management problems. This post explains how the model works and how to start using it.

Overview of the A-B-C-D Model in Context of Anger Management

Below is an overview of the A-B-C-D cognitive-behavioral therapy model, using anger as the problem focus:

A = Activating Event

This refers to the initial situation or “trigger” to your anger.

B = Belief System

Your belief system refers to how you interpret the activating event (A). What do you tell yourself about what happened? What are your beliefs and expectations of how others should behave?

C = Consequences

This is how you feel and what you do in response to your belief system; in other words, the emotional and behavioral consequences that result from A + B. When angry, it’s common to also feel other emotions, like fear, since anger is a secondary emotion. Other “consequences” may include subtle physical changes, like feeling warm, clenching your fists and taking more shallow breaths. More dramatic behavioral displays of anger include yelling, name-calling and physical violence.

D = Dispute

D refers to a very important step in the anger management process. You need to examine your beliefs and expectations. Are they unrealistic or irrational? If so, what may be an alternative and calmer way to relate to the situation? By “disputing” those knee-jerk beliefs about the situation, you can take a more rational and balanced approach, which can help you control your anger.

Example of the A-B-C-D Model

Let’s look at an example to illustrate how this model can be applied to anger management.

A = Activating Event

You’re driving to work and somebody cuts you off, almost causing a collision. You were already feeling stressed to begin with because you were running late and had a big day ahead of you.

B = Belief System

You think to yourself, “people shouldn’t drive like that,” “I’m a courteous driver, I don’t do that,” “everybody on the road these days is a reckless driver,” “if that car hit me, I would have been really late to work or even worse, I could have gotten injured.”

C = Consequences

After the triggering event (i.e., being cut off in traffic), you roll down your window and yell an expletive at the other driver while giving him the bird. You notice that your muscles are tense, your heart rate is high and you feel like you want to hit the steering wheel. You also notice that you feel some fear.

D = Dispute

In response to the triggering situation and its sequelae, rather than reinforce what’s fueling the anger, you could shift your thinking (this is the “D”/dispute part of the model). For example, you could say to yourself: “It’s a bummer that some people drive recklessly, but that’s just a fact of life. Most people actually do obey the rules of the road and I’m glad that I do as well. Who knows, maybe that driver had some emergency that he was responding to…probably not, but you never know. That was scary to almost get hit, but even if we got into a fender bender, I would have eventually gotten to work and probably nothing drastic would have happened because of it.”

As you can see, using this type of rational self-talk is likely to defuse some of the anger and help you calm down.

How to Apply this Model to Anger Management

The first step in using this anger management tool is to increase your awareness of what is happening in each step. To review:

A.) Identify what initially triggered the anger

B.) Reflect on how you related to the triggering situation (e.g., what did you say to yourself about it)

C.) Identify all of the specific emotional and behavioral responses that followed

D.) Dispute the beliefs that fueled the anger and consider calmer, more rational alternatives


The classic view of attitudes is that attitudes serve particular functions for individuals. That is, researchers have tried to understand why individuals hold particular attitudes or why they hold attitudes in general by considering how attitudes affect the individuals who hold them.[33] Daniel Katz, for example, writes that attitudes can serve "instrumental, adjustive or utilitarian," "ego-defensive," "value-expressive," or "knowledge" functions.[34] The functional view of attitudes suggests that in order for attitudes to change (e.g., via persuasion), appeals must be made to the function(s) that a particular attitude serves for the individual. As an example, the "ego-defensive" function might be used to influence the racially prejudicial attitudes of an individual who sees themselves as open-minded and tolerant. By appealing to that individual's image of themselves as tolerant and open-minded, it may be possible to change their prejudicial attitudes to be more consistent with their self-concept. Similarly, a persuasive message that threatens self-image is much more likely to be rejected.[35]

Daniel Katz classified attitudes into four different groups based on their functions

Utilitarian: provides us with general approach or avoidance tendencies
Knowledge: help people organize and interpret new information
Ego-defensive: attitudes can help people protect their self-esteem
Value-expressive: used to express central values or beliefs
Utilitarian

People adopt attitudes that are rewarding and that help them avoid punishment. In other words, any attitude that is adopted in a person's own self-interest is considered to serve a utilitarian function. Suppose you own a condo: people with condos pay property taxes, and as a result you don't want to pay more taxes. If those factors lead to your attitude that "increases in property taxes are bad", your attitude is serving a utilitarian function.

Knowledge

People need to maintain an organized, meaningful, and stable view of the world. That being said, important values and general principles can provide a framework for our knowledge. Attitudes achieve this goal by making things fit together and make sense. Example:

I believe that I am a good person.
I believe that good things happen to good people.
Something bad happens to Bob.
So I believe Bob must not be a good person.
Ego-Defensive

This function involves psychoanalytic principles whereby people use defense mechanisms to protect themselves from psychological harm. Mechanisms include:

Denial
Repression
Projection
Rationalization
The ego-defensive notion correlates nicely with Downward Comparison Theory which holds the view that derogating a less fortunate other increases our own subjective well-being. We are more likely to use the ego-defensive function when we suffer a frustration or misfortune.

Value-Expressive

Serves to express one's central values and self-concept.
Central values tend to establish our identity and gain us social approval thereby showing us who we are, and what we stand for.
An example would concern attitudes toward a controversial political issue.


The Elaboration Likelihood Model (ELM) of persuasion[1] is a dual process theory describing the change of attitudes. The ELM was developed by Richard E. Petty and John Cacioppo in the mid-1970s.

There are four core ideas to the ELM.[2]

1. The ELM argues that when a person encounters some form of communication, they can process this communication with varying levels of thought (elaboration), ranging from a low degree of thought (low elaboration) to a high degree of thought (high elaboration).

2. The ELM predicts that there are a variety of psychological processes of change that operate to varying degrees as a function of a person's level of elaboration. On the lower end of the continuum are the processes that require relatively little thought, including classical conditioning and mere exposure. On the higher end of the continuum are processes that require relatively more thought, including expectancy-value and cognitive response processes. When lower elaboration processes predominate, a person is said to be using the peripheral route, which is contrasted with the central route, involving the operation of predominantly high elaboration processes.

3. The ELM predicts that the degree of thought used in a persuasion context determines how consequential the resultant attitude becomes. Attitudes formed via high-thought, central-route processes will tend to persist over time, resist persuasion, and be influential in guiding other judgments and behaviors to a greater extent than attitudes formed through low-thought, peripheral-route processes.

4. The ELM also predicts that any given variable can have multiple roles in persuasion, including acting as a cue to judgment or as an influence on the direction of thought about a message. The ELM holds that the specific role by which a variable operates is determined by the extent of elaboration.





Management Hubris and the "Humble Pie" Matrix

Management of a successful company has every right to be proud. A lot was invested to get there. It’s hard, and usually thankless, work with long hours and constant “payroll-to-payroll” anxiety. Finally, when management begins to see the fruits of their labors, their strategy appears to be paying off, and cash flow goes positive (and looks like it’s going to stay there) — a sigh of relief is audible and well deserved.

Unfortunately, this is exactly when both management and investors need to do a new kind of work which is critical to the company’s “long-term” success. It is painful and all too frequently overlooked by everyone (except your competitors) —

You must now take a cold hard look at all of your assumptions and what you perceive to be your keys to success and how you got there.

To even bring this process up is unpopular with management, and even with the board of directors. Resistance to this next step tends to be pervasive.


We VCs use the "Humble Pie" Matrix to explain this phenomenon. It's a two-by-two matrix, management reasoning (correct or wrong) vs company performance (success or failure), which means there are four possible scenarios to consider --

Management Reasoning vs Company Performance

1) Correct | Success: gloat
2) Correct | Failure: watch out
3) Wrong | Success: really watch out
4) Wrong | Failure: remorse
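The four scenarios amount to a simple two-by-two lookup; this sketch (the function name is mine, the labels are from the matrix above) captures it:

```python
# Illustrative sketch of the "Humble Pie" matrix: management reasoning
# crossed with company performance yields one of four reactions.
def humble_pie(reasoning_correct: bool, success: bool) -> str:
    if reasoning_correct and success:
        return "gloat"
    if reasoning_correct and not success:
        return "watch out"
    if not reasoning_correct and success:
        return "really watch out"
    return "remorse"
```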

The classic error, made by both management and investors, is to assume that a company is successful due to scenario #1 and that competitors fail due to scenario #4.

This is the natural hubris of management success —

“We are successful, THEREFORE we must be correct.” (#1)

when, more often than we like to admit —

“We are successful IN SPITE of our reasoning.” (#3)

One must be constantly vigilant not to ascribe too much validity to our reasoning based on success alone. I know it’s been tough and you have a lot personally invested in those insights and assumptions you made in order to convince yourself and others to go along with the plan, and now you are just beginning to “prove you are right” (Been there… Done that…) — but this is the wrong way to look at it. You will NEVER prove whether or not you were right, and, if you let down your intellectual guard, you will certainly look like you were wrong going forward.

Success NEVER proves anything. Continued success requires you to “let go” of any conceptual frameworks which are no longer useful for effective decision-making — no matter how much they mattered and no matter how much you loved them. And it is love — a kind of intellectual narcissism which develops over time. A resistance to questioning what has made you great. It’s human nature — which is all the more reason you need to fight this tendency to sustain a long-term competitive edge.


In his ideas about Mass Customisation, Joe Pine describes how companies are able to differentiate: with products, it’s mainly about price. With services, he believes it’s about improving quality. And with experiences, it’s all about being authentic – or rather, about two specific types of authenticity:

Being true to others – doing what you say you will
Being true to yourself – being consistent about who you are
By mapping these two types onto a ‘two-by-two’ matrix, he shows that there are four possible types of experience (some of the examples below are my own):


Top right: true to self and true to others. It’s an authentically ‘real’ experience. They are themselves and do what they say they will. Like going to a traditional Italian family restaurant – they take a very real pride in serving you and discussing suitable wines, insisting that you sample their home-made tiramisu, and giving you an espresso on the house, because it’s what they are passionate about.

Top left: true to others, not true to themselves. It’s kind of authentic, but there’s something missing. They do what they promise but they really aren’t truly being themselves. Like being served by the clichéd Fast Food Burger Guy – yes, he sells you a burger, but he doesn’t love his job. His “have a nice day” as you leave feels empty.

Bottom right: true to themselves, not true to others. It’s an authentic experience, but it’s a ‘real fake’. Like going to DisneyLand – in every way it’s about family entertainment, but you’re not really in the Magic Kingdom.

Bottom left: not true to others, not true to themselves. It’s a completely fake experience. Like being the victim of a phishing attack – the people contacting you are not who they say they are, and don’t do what they promise.
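Like the other two-by-two matrices in this chapter, Pine's authenticity types reduce to a small lookup. A minimal sketch (the labels are drawn from the descriptions above; the function name is my own):

```python
# Illustrative sketch of Pine's two-by-two authenticity matrix.
def experience_type(true_to_self: bool, true_to_others: bool) -> str:
    if true_to_self and true_to_others:
        return "real"               # the family-run Italian restaurant
    if true_to_others:
        return "kind of authentic"  # the fast-food burger guy
    if true_to_self:
        return "real fake"          # DisneyLand
    return "completely fake"        # a phishing attack
```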


KISS is a four-letter acronym for the four words "Keep it simple, stupid", a design principle noted by the U.S. Navy in 1960. The KISS principle states that most systems work best if they are kept simple rather than made complicated; therefore simplicity should be a key goal in design and unnecessary complexity should be avoided. The phrase has been associated with aircraft engineer Kelly Johnson (1910–1990).[3] The term "KISS principle" was in popular use by 1970.[4] Variations on the phrase include "Keep it Simple, Silly", "keep it short and simple", "keep it simple and straightforward" and "keep it small and simple".[5][6]

I think that KISS is relevant to the quadrant model because the quadrant model is simple and also because "KISS" is four letters, reflecting the four of the quadrant model.



The forming–storming–norming–performing model of group development was first proposed by Bruce Tuckman in 1965,[1] who mentioned that these phases are all necessary and inevitable in order for the team to grow, to face up to challenges, to tackle problems, to find solutions, to plan work, and to deliver results. This model has become the basis for subsequent models.


Forming
The team meets and learns about the opportunities and challenges, and then agrees on goals and begins to tackle the tasks. Team members tend to behave quite independently. They may be motivated but are usually relatively uninformed of the issues and objectives of the team. Team members are usually on their best behavior but very focused on themselves. Mature team members begin to model appropriate behavior even at this early phase.

The forming stage of any team is important because the members of the team get to know one another, exchange some personal information, and make new friends. This is also a good opportunity to see how each member of the team works as an individual and how they respond to each other. The forming stage thus plays a great role in establishing the group and in members' understanding of each other's behavior.

Storming
In this stage "...participants form opinions about the character and integrity of the other participants and feel compelled to voice these opinions if they find someone shirking responsibility or attempting to dominate. Sometimes participants question the actions or decision of the leader as the expedition grows harder...".[2] Disagreements and personality clashes must be resolved before the team can progress out of this stage, and so some teams may never emerge from "storming"[3] or re-enter that phase if new challenges or disputes arise.[4] In Tuckman's 1965 paper, only 50% of the studies identified a stage of intragroup conflict, and some of the remaining studies jumped directly from stage 1 to stage 3.[5] Some groups may avoid the phase altogether, but for those who do not, the duration, intensity and destructiveness of the "storms" can vary. Tolerance of each team member and their differences should be emphasized; without tolerance and patience the team will fail. This phase can become destructive to the team and will lower motivation if allowed to get out of control. Some teams will never develop past this stage; however, disagreements within the team can make members stronger, more versatile, and able to work more effectively as a team. Supervisors of the team during this phase may be more accessible, but tend to remain directive in their guidance of decision-making and professional behaviour. The team members will therefore resolve their differences and members will be able to participate with one another more comfortably. The ideal is that they will not feel that they are being judged, and will therefore share their opinions and views. Normally tension, struggle and sometimes arguments occur. This stage can also be upsetting.

Norming
"Resolved disagreements and personality clashes result in greater intimacy, and a spirit of co-operation emerges [6]". This happens when the team is aware of competition and they share a common goal. In this stage, all team members take the responsibility and have the ambition to work for the success of the team's goals. They start tolerating the whims and fancies of the other team members. They accept others as they are and make an effort to move on. The danger here is that members may be so focused on preventing conflict that they are reluctant to share controversial ideas.

Performing
"With group norms and roles established, group members focus on achieving common goals, often reaching an unexpectedly high level of success."[7] By this time, they are motivated and knowledgeable. The team members are now competent, autonomous and able to handle the decision-making process without supervision. Dissent is expected and allowed as long as it is channelled through means acceptable to the team.

Supervisors of the team during this phase are almost always participating. The team will make most of the necessary decisions. Even the most high-performing teams will revert to earlier stages in certain circumstances. Many long-standing teams go through these cycles many times as they react to changing circumstances. For example, a change in leadership may cause the team to revert to storming as the new people challenge the existing norms and dynamics of the team.



In 1977, Tuckman, jointly with Mary Ann Jensen, added a fifth stage to the four stages: adjourning,[8] which involves completing the task and breaking up the team (in some texts referred to as Mourning).
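As a rough illustration (names mine, not from Tuckman), the five-stage sequence can be represented as an ordered list with a simple progression helper:

```python
# Tuckman's stage sequence, including the fifth stage added in 1977.
TUCKMAN_STAGES = ["forming", "storming", "norming", "performing", "adjourning"]

def next_stage(stage: str) -> str:
    """Return the nominal next stage; in practice teams may also
    revert to earlier stages when circumstances change."""
    i = TUCKMAN_STAGES.index(stage)
    return TUCKMAN_STAGES[min(i + 1, len(TUCKMAN_STAGES) - 1)]
```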

Norming and re-norming
Timothy Biggs suggested that an additional stage be added of Norming after Forming and renaming the traditional Norming stage Re-Norming. This addition is designed to reflect that there is a period after Forming where the performance of a team gradually improves and the interference of a leader content with that level of performance will prevent a team progressing through the Storming stage to true performance. This puts the emphasis back on the team and leader as the Storming stage must be actively engaged in order to succeed – too many 'diplomats' or 'peacemakers' especially in a leadership role may prevent the team from reaching their full potential.[citation needed]

Rickards and Moger proposed a similar extension to the Tuckman model when a group breaks out of its norms through a process of creative problem-solving.[9][10]

John Fairhurst TPR model
Alasdair A. K. White, together with his colleague John Fairhurst, examined Tuckman's development sequence when developing the White-Fairhurst TPR Model. They simplify the sequence and group the Forming-Storming-Norming stages together as the Transforming phase, which they equate with the initial performance level. This is then followed by a Performing phase that leads to a new performance level, which they call the Reforming phase. Their work was developed further by White in his essay "From Comfort Zone to Performance Management",[11] in which he demonstrates the linkage between Tuckman's work, Colin Carnall's "coping cycle" and the Comfort Zone Theory.

The fifth square is always questionable.


In the early seventies, Hill and Grunner (1973) reported that more than 100 theories of group development existed. Since then, other theories have emerged as well as attempts at contrasting and synthesizing them. As a result, a number of typologies of group change theories have been proposed. A typology advanced by George Smith (2001) based on the work of Mennecke and his colleagues (1992) classifies theories based on whether they perceive change to occur in a linear fashion, through cycles of activities, or through processes that combine both paths of change, or which are completely non-phasic. Other typologies are based on whether the primary forces promoting change and stability in a group are internal or external to the group. A third framework advanced by Andrew Van de Ven and Marshall Scott Poole (1995), differentiates theories based on four distinct "motors" for generating change. According to this framework, the following four types of group development models exist:

Life cycle models: Describe the process of change as the unfolding of a prescribed and linear sequence of stages following a program that is prefigured at the beginning of the cycle (decided within the group or imposed on it).
Teleological models: Describe change as a purposeful movement toward one or more goals, with adjustments based on feedback from the environment.
Dialectical models: Describe change as emerging from conflict between opposing entities and eventual synthesis leading to the next cycle of conflict.
Evolutionary models: Describe change as emerging from a repeated cycle of variation, selection and retention and generally apply to change in a population rather than change within an entity over time.



Tubbs' Systems model
Stewart Tubbs' "systems" approach to studying small group interaction led him to the creation of a four-phase model of group development:

Orientation: In this stage, group members get to know each other, they start to talk about the problem, and they examine the limitations and opportunities of the project.
Conflict: Conflict is a necessary part of a group's development. Conflict allows the group to evaluate ideas and it helps the group avoid conformity and groupthink.
Consensus: Conflict ends in the consensus stage, when group members compromise, select ideas, and agree on alternatives.
Closure: In this stage, the final result is announced and group members reaffirm their support of the decision.


Fisher's theory of decision emergence in groups
Fisher outlines four phases through which task groups tend to proceed when engaged in decision making. By observing the distribution of act-response pairs (a.k.a. "interacts") across different moments of the group process, Fisher noted how the interaction changed as the group decision was formulated and solidified. His method pays special attention to the "content" dimension of interactions by classifying statements in terms of how they respond to a decision proposal (e.g. agreement, disagreement, etc.).

Orientation: During the orientation phase, group members get to know each other and they experience a primary tension: the awkward feeling people have before communication rules and expectations are established. Groups should take time to learn about each other and feel comfortable communicating around new people.
Conflict: The conflict phase is marked by secondary tension, or tension surrounding the task at hand. Group members will disagree with each other and debate ideas. Here conflict is viewed as positive, because it helps the group achieve positive results.
Emergence: In the emergence phase, the outcome of the group's task and its social structure become apparent. Group members soften their positions and undergo an attitudinal change that makes them less tenacious in defending their individual viewpoint.
Reinforcement: In this stage, group members bolster their final decision by using supportive verbal and nonverbal communication.
Based on this categorization, Fisher created his "Decision Proposal Coding System" that identifies act-response pairs associated with each decision-making phase. Interestingly, Fisher observed that the group decision making process tended to be more cyclical and, in some cases, almost erratic. He hypothesized that the interpersonal demands of discussion require "breaks" from task work. In particular, Fisher observed that there are a number of contingencies that might explain some of the decision paths taken by some groups. For instance, in modifying proposals, groups tend to follow one of two patterns. If conflict is low, the group will reintroduce proposals in less abstract, more specific language. When conflict is higher, the group might not attempt to make a proposal more specific but, instead, because disagreement lies on the basic idea, the group introduces substitute proposals of the same level of abstraction as the original.


In 1989, Troiden proposed a four-stage model for the development of homosexual sexual identity.[117] The first stage, known as sensitization, usually starts in childhood, and is marked by the child's becoming aware of same-sex attractions. The second stage, identity confusion, tends to occur a few years later. In this stage, the youth is overwhelmed by feelings of inner turmoil regarding their sexual orientation, and begins to engage in sexual experiences with same-sex partners. In the third stage of identity assumption, which usually takes place a few years after the adolescent has left home, adolescents begin to come out to their family and close friends, and assume a self-definition as gay, lesbian, or bisexual.[118] In the final stage, known as commitment, the young adult adopts their sexual identity as a lifestyle. Therefore, this model estimates that the process of coming out begins in childhood, and continues through the early to mid 20s. This model has been contested, and alternate ideas have been explored in recent years.

Learning styles have been described as "enduring ways of thinking and processing information."[138]

Although there is no evidence that personality determines thinking styles, they may be intertwined in ways that link thinking styles to the Big Five personality traits.[139] There is no general consensus on the number or specifications of particular learning styles, but there have been many different proposals.

Schmeck, Ribich, and Ramanaiah (1977) defined four types of learning styles:

synthesis analysis
methodical study
fact retention
elaborative processing
When all four facets are implicated within the classroom, they will each likely improve academic achievement.[140] This model asserts that students develop either agentic/shallow processing or reflective/deep processing. Deep processors are more often than not found to be more conscientious, intellectually open, and extraverted when compared to shallow processors. Deep processing is associated with appropriate study methods (methodical study) and a stronger ability to analyze information (synthesis analysis), whereas shallow processors prefer structured fact retention learning styles and are better suited for elaborative processing.[140] The main functions of these four specific learning styles are as follows:


Synthesis analysis: processing information, forming categories, and organizing them into hierarchies. This is the only one of the learning styles that has shown a significant impact on academic performance.[140]
Methodical study: methodical behavior while completing academic assignments
Fact retention: focusing on the actual result instead of understanding the logic behind something
Elaborative processing: connecting and applying new ideas to existing knowledge



As Haidt and his collaborators worked within the social intuitionist approach, they began to devote attention to the sources of the intuitions that they believed underlay moral judgments. In a 2004 article published in the journal Daedalus, Haidt and Craig Joseph surveyed works on the roots of morality, including the work of Donald Brown, Alan Fiske, Shalom Schwartz, and Shweder. From their review, they suggested that all individuals possess four "intuitive ethics", stemming from the process of human evolution as responses to adaptive challenges. They labelled these four ethics as suffering, hierarchy, reciprocity, and purity. According to Haidt and Joseph, each of the ethics formed a module, whose development was shaped by culture. They wrote that each module could "provide little more than flashes of affect when certain patterns are encountered in the social world," while a cultural learning process shaped each individual's response to these flashes. Morality diverges because different cultures utilize the four "building blocks" provided by the modules differently.[7] This article became the first statement of moral foundations theory, which Haidt, Joseph, and others have since elaborated and refined.



The Multidimensional Personality Questionnaire (MPQ) was developed by Auke Tellegen at the University of Minnesota in the early 1980s.[16] It has been used since its development in the Minnesota Twin Family Study.[9] Three of the four broad traits measured by the MPQ contain between three and four facets, or "primary traits."[17] The fourth, "Absorption," is classified as both a broad trait and a primary trait.[9] In addition to these personality measures, the MPQ contains three scales assessing the validity of responses. The "Unlikely Virtues" scale is designed to assess impression management, the "True Response Inconsistency" scale assesses the tendency to answer all questions true (or false), and the "Variable Response Inconsistency" scale assesses inconsistent responses to similar or opposite questions.[18] The following table displays Tellegen's labels for broad traits, primary traits (facets), and the subscales of Absorption.[9]

Positive Emotional Temperament: Well-being, Social Potency, Achievement, Social Closeness
Negative Emotional Temperament: Stress Reaction, Alienation, Aggression
Constraint: Control, Harm-avoidance, Traditionalism
Absorption (subscales): Sentient, Prone to Imaginative and Altered States



Interaction Styles are groupings of the 16 types of the MBTI instrument of psychometrics and Jungian psychology. The Interaction Styles model was developed by Linda Berens, PhD, founder of the Temperament Research Institute. This model builds on David Keirsey's Temperament model and its subcategories, and is based on observable behavior patterns that are quite similar to David Merrill's "Social Styles" and William Moulton Marston's DiSC theory.

Temperament and Interaction Style
iStJ iSfJ iNFj iNTj
iStP iSfP iNFp iNTp
eStP eSfP eNFp eNTp
eStJ eSfJ eNFj eNTj
Temperaments and Interaction Styles in the MBTI.

Development
David Keirsey, who had mapped the four temperaments of his Keirsey Temperament Sorter to the MBTI system, ended up minimizing the role of both Thinking/Feeling (T/F) and J/P (Judging/Perceiving). Starting with S/N ("Sensing" and "Intuition", which he renames "Observation" and "Introspection" or “Concrete” and “Abstract”, under the category of "Communication"), he divides this by the new scale of "Cooperative" or “Utilitarian” (also called "Pragmatic") under the category of "Action"; which yields his "four temperaments" (SP-Artisan, SJ-Guardian, NF-Idealist, and NT-Rational). Next, this is divided by "Role-Directive" and "Role-Informative", into eight “roles” or “intelligence types” (STP-Operator, SFP-Entertainer, STJ-Administrator, SFJ-Conservator, NFJ-Mentor, NFP-Advocate, NTJ-Coordinator, and NTP-Engineer). Finally, these are divided by E/I (Extraversion/Introversion also called "Expressive/Reserved"), yielding the sixteen "types" of the MBTI.

Linda V. Berens, another doctor of Psychology and a former student of Keirsey, would also use a similar system, pairing the Interaction Styles (which were implicit in Keirsey's system) with both the temperaments and the cognitive processes. Just as Keirsey combined S/N and his “Cooperative-Utilitarian” into "temperaments", Berens would pair “Directing” and “Informing” (as she calls them) directly with E/I (which she calls “Initiating - Responding”) creating the four "Interaction Styles" in addition to the four "temperaments". This then matched the several other Two-factor models of personality, beginning with the original four temperaments which had been observed in terms of fast or slow response, and short or long delay.



Keirsey generally correlated his temperaments with the ancient temperaments as follows: Artisan=Sanguine; Idealist=Choleric; Rational=Phlegmatic and Guardian=Melancholic. However, among others interpreting the theory, there is no complete agreement as to which correspond to which. For instance, while the Artisan is almost unanimously matched with the Sanguine, the other comparisons are not consistent. The Guardians are often associated with the Melancholic, but then they are also linked to the Phlegmatic, with the Melancholic being the Idealist.[1] Idealists and Rationals are often switched back and forth between representing Cholerics and Phlegmatics in comparisons. These anomalies may have stemmed from the fact that Keirsey based his temperaments on a "Greek god" typology created by pairing together the philosophical Apollonian and Dionysian concept with Carl Spitteler's "Prometheus and Epimetheus" epic (1881), rather than using purely the Galenic descriptions. Believing the "humor" names were "misleading",[2] he originally named his temperaments after the mythological figures.

He also drew upon the likes of Ernst Kretschmer and Eduard Spranger, who had other models which he correlated with Galen's temperaments (though they were not necessarily perfect matches); while others followed Pavlov and Eysenck, who shaped the modern theories of those who held onto the Galenic names. Kretschmer, for example, used different factors ("Cycloid": gay vs. sad, and "Schizoid": sensitive vs. cold), instead of the "extraversion" and "people/task-orientation" scales that define temperament in many other systems. Spranger had six types; the two that were omitted ("Social" and "Political") fit the classic behavioral descriptions of the Sanguine and Choleric (love of people or love of power), while the remaining four, which did not correspond as clearly to the temperaments, would be compared to Keirsey's model. This omission was because "Political" was a category containing both Theoretic and Artistic, and "Social" contained Economical and Religious.[3] Hence, it corresponded more with Keirsey's own "cooperative" and "pragmatic" categories.

The Interaction Styles, however, more closely match the behavior of the familiar understanding of the classic temperaments, with “Directing” and “Informing” being a closer counterpart to people/task-orientation. Berens herself stated, "Directing communications seem to have a task focus and Informing communications have a people focus. MBTI practitioners have long related task focus to a preference for Thinking and people focus to a preference for Feeling", and "Descriptors of 'responsive' seem to go with the Informing style of communication and descriptors of 'less responsive' seem to go with the Directing style of communication."[4] Directives are the more "serious" type, defined by Keirsey as "those who communicate primarily by directing others", and Informatives are defined as "those who communicate primarily by informing others". The expressive/directing style Berens calls In Charge, and it behaves like a Choleric, as the name itself implies. The reserved/directing style is called Chart the Course and corresponds to the Melancholic, who is very analytical and needs order and familiarity. The expressive/informing style is called Get Things Going and fits the description of the Sanguine, who is upbeat, enthusiastic and focused on interaction. The reserved/informing style, Behind the Scenes, is a calm peacemaker who sees value in many contributions and consults outside inputs to make an informed decision, and is linked to the Phlegmatic.

In addition to the E/I and Directing/Informing categories, there is also "Attention: Focus and Interest (Control/Movement)", which pairs the diametrically opposite styles. In-Charge and Behind-the-Scenes have in common "Control": focus on control over the outcome; Chart-the-Course and Get-Things-Going have in common "Movement": focus on movement toward the goal. In 2008, Berens released version 2.0 of Understanding Yourself and Others: An Introduction to Interaction Styles, in which she renamed this dimension Process/Outcome. In-Charge and Behind-the-Scenes focus on the outcome of tasks (which was already implicit in the "control" definition above), while Get-Things-Going and Chart-the-Course focus on the process (hence, the act of movement toward the goal).

Keirsey would eventually divide his eight intelligence types into four corresponding groupings, which he calls "four differing roles that people play in face-to-face interaction with one another", in his book Brains and Careers (2008).

Initiators: ENTJ, ESTJ, ESTP, ENFJ (Extraverted and Directive)
Coworkers: ENTP, ESFJ, ESFP, ENFP (Extraverted and Informative)
Contenders: INTJ, ISTJ, ISTP, INFJ (Introverted and Directive)
Responders: INTP, ISFJ, ISFP, INFP (Introverted and Informative)

The roles were implied in the role-informative/directive factor he had introduced in Portraits of Temperament (1987).



The role of Thinking, Feeling, Judging and Perceiving
Looking at the type division between Directing vs. Informing reveals that this scale can be defined by both T/F and J/P together. Directives lean towards T and J, while Informatives lean towards F and P. (Thinking or “Toughmindedness” as well as Judging or "Scheduling" as Keirsey calls it, are very compatible with "Directing". Likewise, Feeling or “Friendliness” as well as Perceiving or "Probing" (Keirsey) are compatible with Informing). "TJ" in the Jungian-Myersian system represents "extraverted Thinking", meaning the people of these types use critical logic or objective criteria in dealing with the outer world of people, (resulting in their characteristic "directive" behavior); while FP is "Introverted Feeling" (meaning they use values or subjective criteria in dealing with the inner world of thoughts, in which they tend to do what they would want to be done to them). So each of the four types that make up each directing style have a T and/or J, and the informatives have an F and/or P. Since there are only two TJ or FP combinations per E and I, each style also contains one TP and one FJ type. The directives tie TP with S and FJ with N; and the informatives tie TP with N, and FJ with S. So Directives can also be categorized as containing all NJ's (Introverted iNtuition) and ST's (Sensing and Thinking together as the primary and auxiliary functions). Informatives also contain all NP's (extraverted iNtuition) and SF's (Sensing and Feeling together).

The result is that each of the four interaction styles shares one of the 16 personality types with each of the four Keirsey temperaments. Berens describes the Keirseyan SP, SJ, NF and NT temperaments (which she renames “Improviser”, “Stabilizer”, “Catalyst” and “Theorist”) as the “Why” of behavior, while the interaction styles are the “How”.

Temperaments and Interaction Styles (columns: Catalyst/NF, Theorist/NT, Stabilizer/SJ, Improviser/SP)
In Charge (Choleric): Envisioner Mentor (ENFJ), Strategist Mobilizer (ENTJ), Implementor Supervisor (ESTJ), Promoter Executor (ESTP)
Chart the Course (Melancholic): Foreseer Developer (INFJ), Conceptualizer Director (INTJ), Planner Inspector (ISTJ), Analyzer Operator (ISTP)
Get Things Going (Sanguine): Discoverer Advocate (ENFP), Explorer Inventor (ENTP), Facilitator Caretaker (ESFJ), Motivator Presenter (ESFP)
Behind the Scenes (Phlegmatic): Harmonizer Clarifier (INFP), Designer Theorizer (INTP), Protector Supporter (ISFJ), Composer Producer (ISFP)


Marston was also a writer of essays in popular psychology. In 1928, he published Emotions of Normal People, which elaborated the DISC Theory. Marston viewed people behaving along two axes, with their attention being either passive or active; depending on the individual's perception of his or her environment as either favorable or antagonistic. By placing the axes at right angles, four quadrants form with each describing a behavioral pattern:

Dominance produces activity in an antagonistic environment
Inducement produces activity in a favorable environment
Submission produces passivity in a favorable environment
Compliance produces passivity in an antagonistic environment.
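Marston's two axes make the quadrant assignment mechanical. Here is a minimal sketch in Python; the boolean encoding and function name are mine, not Marston's:

```python
# Marston's DISC quadrants keyed by (active?, favorable?).
# Axes as described in the text: activity vs. passivity, and a
# favorable vs. antagonistic perception of the environment.
DISC = {
    (True, False): "Dominance",    # activity in an antagonistic environment
    (True, True): "Inducement",    # activity in a favorable environment
    (False, True): "Submission",   # passivity in a favorable environment
    (False, False): "Compliance",  # passivity in an antagonistic environment
}

def disc_quadrant(active: bool, favorable: bool) -> str:
    """Return the DISC behavioral pattern for the two axis readings."""
    return DISC[(active, favorable)]

print(disc_quadrant(True, False))  # Dominance
```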


klouts influence matrix

Prussian Field Marshal Helmuth Karl Bernhard Graf von Moltke (1800-1891) developed this interesting Value Matrix to categorize his officer corps.

• Smart & Lazy: I make them my Commanders because they make the right thing happen but find the easiest way to accomplish the mission.
• Smart & Energetic: I make them my General Staff Officers because they make intelligent plans that make the right things happen.
• Dumb & Lazy: There are menial tasks that require an officer, which they can accomplish, and they follow orders without causing much harm.
• Dumb & Energetic: These are dangerous and must be eliminated. They cause things to happen, but the wrong things, and so cause trouble.

Field Marshal Erich von Manstein (1887-1973), arguably the Wehrmacht's best World War II military strategist, who was dismissed from service by the Führer in March 1944 due to frequent clashes with him over military strategy, later articulated Field Marshal Moltke's model in the following quote:

“There are only four types of officer. First, there are the lazy, stupid ones. Leave them alone, they do no harm…Second, there are the hard-working, intelligent ones. They make excellent staff officers, ensuring that every detail is properly considered. Third, there are the hard-working, stupid ones. These people are a menace and must be fired at once. They create irrelevant work for everybody. Finally, there are the intelligent, lazy ones. They are suited for the highest office.”

I would suggest that both Field Marshals have correctly identified the four types of officers but would respectfully disagree about who is best suited for “highest office,” especially in today’s highly competitive environment. Smart & Energetic wins every time and only the truly driven or exceptionally lucky succeed.



In the most recent edition of the PCL-R, Hare adds a fourth antisocial behavior factor, consisting of those factor-2 items excluded in the previous model.[2] Again, these models are presumed to be hierarchical with a single, unified psychopathy disorder underlying the distinct but correlated factors.[21]

The Cooke & Michie hierarchical three-factor model has severe statistical problems—i.e., it actually contains ten factors and results in impossible parameters (negative variances)—as well as conceptual problems. Hare and colleagues have published detailed critiques of the Cooke & Michie model.[22] New evidence, across a range of samples and diverse measures, now supports a four-factor model of the psychopathy construct,[23] which represents the interpersonal, affective, lifestyle, and overt antisocial features of the personality disorder.


I was excited to recently find the long-running blogs of Sandeep Gautam. The one I’ve looked at so far (“The Mouse Trap”) is mainly about psychology and neurology, but he seems to have a penchant for fourfolds, and some of the entries in “The Mouse Trap” talk about the same topics as my blog. He has also developed many interesting models of psychological and mental organization.

Gautam is a software developer and psychology enthusiast, according to his short bio on his Psychology Today blog “The Fundamental Four”. He also has a multitude of other blogs (where does he find the time?), gives TED talks, and is just about everywhere on social media. Entries on “The Mouse Trap” are appearing less frequently these days, but it has been going since 2006.

Above is a diagram of the four fundamental evolutionary problems of humans as per psychologist Theodore Millon (each also associated with a duality): survival or existence (pain/pleasure), adaptation (passive/active), replication (self/other), and abstraction (broad/narrow). Gautam’s Fundamental Four are the four basic drives to overcome these problems, a “lens” through which he sees psychology: food/foe (survival), flourishing (adaptation), family/friends (replication), and focus/frame (abstraction). (I hope he doesn’t mind if I added “frame” to “focus”.)

I was not familiar with either Millon’s or Gautam’s work until now. I was thinking about updating my article on Aristotle’s Four Causes, and I chanced upon Gautam’s post on the subject because of images from it. I was soon very intrigued, because I see similarities in many of his formulations and mine! Also please compare these fourfolds to the Relational Models Theory.

Some of the more interesting blog entries of “The Mouse Trap”:


The Millon Clinical Multiaxial Inventory-III (MCMI-III) is a psychological assessment tool intended to provide information on psychopathology, including specific psychiatric disorders outlined in the DSM-IV. It is intended for adults (18 and over) with at least an 8th grade reading level who are currently seeking mental health services. The MCMI was developed and standardized specifically on clinical populations (i.e. patients in psychiatric hospitals or people with existing mental health problems), and the authors are very specific that it should not be used with the general population or adolescents.[1] However, there is an evidence base that shows that it may still retain validity on non-clinical populations, and so psychologists will sometimes administer the test to members of the general population, with caution. The concepts involved in the questions and their presentation make it unsuitable for those with below average intelligence or reading ability.

The MCMI differs from other personality tests in that it is based on theory and is organized according to a multiaxial format. Updates to each version of the MCMI coincide with revisions to the DSM.[2]

It is composed of 175 true-false questions that reportedly takes 25–30 minutes to complete. It was created by Theodore Millon, Carrie Millon, Roger Davis, and Seth Grossman.

The test is organized into four groups of scales:

14 Personality Disorder Scales
10 Clinical Syndrome Scales
5 Correction Scales: 3 Modifying Indices (which determine the patient's response style and can detect random responding); 2 Random Response Indicators
42 Grossman Personality Facet Scales (based on Seth Grossman's theories of personality and psychopathology)



I just posted a reply to David McCandless on his blog ‘Information is Beautiful’ on the subject of the difference between data, information, knowledge and wisdom. David made a graphic based on this hierarchy of information, and also helpfully pointed out that these ideas have been around for a while. Here is his graphic:
I responded with some thoughts and an excerpt from Robert Logan, a Canadian physicist who has just written a book called ‘What is information?’ I have not actually read it yet but intend to soon (it is not yet available). Here is an excerpt from this book that breaks down these four categories:

Data, Information, Knowledge and Wisdom

There is often a lack of understanding of the difference between information and knowledge and the difference between explicit and tacit knowledge, which we herewith define in the following manner:

• Data are the pure and simple facts without any particular structure or organization, the basic atoms of information,

• Information is structured data, which adds meaning to the data and gives it context and significance,

• Knowledge is the ability to use information strategically to achieve one’s objectives,

• Wisdom is the capacity to choose objectives consistent with one’s values within a larger social context.

(Robert Logan, What is Information?, 2010)

Here is a quick graphic I just made (and not nearly as nice as McCandless’ but maybe I will give it a bit more attention later). I flipped the triangle because data is reductive. Wisdom is holistic.

The four levels of knowing (Data, Information, Knowledge and Wisdom) are similar to Gregory Bateson’s influential work ‘The Logical Categories of Learning and Communication’ published first in 1964 and then included in his book Steps to an Ecology of Mind (1972).

The idea that there are levels of learning and understanding has been an important theme in sustainable education. Dr. Stephen Sterling developed a staged theory of learning directly from Bateson’s work in his PhD – http://www.bath.ac.uk/cree/sterling/index.htm Information processing occurs at the different levels. We live in an information rich world, but information does not necessarily lead to understanding. Critical pedagogy practices have developed processes to help learners move from processing information to developing deeper understanding and capacity for action.

I am developing work around this theme in regards to the visual communication of ecological literacy. Learning about context, relationships, patterns and interdependence is part of moving from data to understanding. I have also made a pyramid to represent a movement from concrete information to metaphysics (based on Sterling’s work). See: http://www.eco-labs.org and http://bit.ly/fQ32S8





An interesting fourfold I saw while browsing through “The Power of the 2×2 Matrix” that I mentioned previously was the Means and Ends matrix of Russell Ackoff, known as a pioneer in the fields of systems and management sciences.

It is composed of the relationships between two purposeful agents, where the means and the ends of each are separately considered to be compatible or incompatible.

Conflict: Incompatible means, incompatible ends
Competition: Incompatible means, compatible ends
Coalition: Compatible means, incompatible ends
Cooperation: Compatible means, compatible ends
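Ackoff's means/ends matrix is easy to express as a small classifier. A hedged sketch in Python; the boolean parameters are my encoding of "compatible"/"incompatible", not Ackoff's notation:

```python
def ackoff_relationship(means_compatible: bool, ends_compatible: bool) -> str:
    """Classify the relationship between two purposeful agents
    using Ackoff's means/ends matrix."""
    if means_compatible and ends_compatible:
        return "Cooperation"   # compatible means, compatible ends
    if means_compatible:
        return "Coalition"     # compatible means, incompatible ends
    if ends_compatible:
        return "Competition"   # incompatible means, compatible ends
    return "Conflict"          # incompatible means, incompatible ends

print(ackoff_relationship(False, True))  # Competition
```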
Ackoff is also known for the “Hierarchy of Understanding” of Data, Information, Knowledge, and Wisdom, which probably begs for its own entry.


These four livelihoods: artist, designer, scientist, and engineer, make a nice fourfold. They are called the “four hats of creativity” by Rich Gold. They are also called the “four winds of making” by computer scientist Richard P. Gabriel.

Some say the artist and scientist are “inward” looking, and the designer and engineer are “outward” looking. Some say the artist and the designer “move minds”, and the scientist and engineer “move matter”. One can observe that the artist sorts the important from the boring, the scientist separates the true from the false, the designer discerns the cool from the uncool, and the engineer divides the good from the bad.

artist designer engineer scientist


The Creativity Compass
Jul 27, 2013

I think this framework first came up in a conversation with John Maeda. The original observation was that artists and scientists tend to work well together, and designers and engineers work well together, but that scientists and engineers don’t work as well together, and likewise, neither do artists and designers. Engineers and designers tend to focus on utility and understand the world through observation, gathering the constraints of a problem to come up with a solution. Artists and scientists, on the other hand, are inspired by nature or math, and they create through pure inner creativity and pursue expression that is more connected to things like truth or beauty than something so imperfect as mere utility. Which is to say, there are many more ways to divide the brain than into left and right hemispheres.

However, I think a lot of the most interesting and impactful creative works tend to require the use of all four quadrants. Many of the faculty at the Media Lab work in the dead center of this grid—or as I like to call it, this compass—or perhaps they lean in one direction, but they’re able to channel skills from all four quadrants. Neri Oxman, one of our faculty members who recently created The Silk Pavilion, told me that she is both an artist and a designer but switches between the modes as she works on an idea. And to look at The Silk Pavilion, it’s clear she could easily qualify as either a scientist or engineer, too.

I think that there are a variety of practices and ways of thinking we can use to get to the center of this compass. The key is to pull these quadrants as close together as possible. An interdisciplinary group would have a scientist, an artist, a designer, and an engineer working with each other. But this only reinforces the distinctions between these disciplines. And it’s much less effective than having people who use all four quadrants, as the project or problem requires.

The tyranny of traditional disciplines and functionally segregated organizations fails to produce the type of people who can work with this creativity compass, but I believe that in a world where the rate of change increases exponentially, where disruption has become a norm instead of an anomaly, the challenge will be to think this way if we want to effectively solve the problems we face today, much less tomorrow.


Here’s something I recently read in a book about teaching our children to think through classroom discussion (Philip Cam, 20 Thinking Tools, 2006) that is helpful for thinking about questions that focus on ideas rather than facts. Cam has designed The Question Quadrant to facilitate philosophical inquiry in classrooms.

question quadrant

In the quadrant, questions move from closed to open and from textual to intellectual or thinking questions. In the fourth quadrant, “inquiry” questions encourage students (to use Bloom’s taxonomy) to apply, analyse and evaluate what they are learning. To encourage our kids to think deeply about issues we need to be moving them from first quadrant questions towards fourth quadrant questions. This doesn’t mean we shouldn’t be using questions from the first quadrant, rather that we need to make sure that we get to the questions in the fourth quadrant.

Although not all the questions we ask fit neatly into this quadrant, I think it is helpful for evaluating either the type of questions we are preparing, or the questions that we are given in prewritten kids’ ministry lessons. As you look at the Question Quadrant, think about where most of the questions you ask fit.

In my next post I am going to take a kids’ ministry lesson and use the Question Quadrant to look at the questions that are suggested.












































Sociology chapter

The BCG Matrix is a portfolio model developed by the Boston Consulting Group (BCG) in 1968. It has been popularized over time through inclusion in many strategy and marketing textbooks.

One of its prime appeals is that it communicates an important element of strategy – that is, the role of competitive strengths and market opportunity – through a very straightforward and simple to understand matrix.

It tends to have a high level of recall among students and practitioners because the four matrix boxes have unusual names – namely cash cows, stars, question marks and dogs – making it one of the more memorable marketing and strategy models available.


General purpose of the BCG Matrix
The BCG matrix was originally developed to assist companies that have multiple product portfolios (or strategic business units) to help guide their decisions on reinvestment. In other words, it was a model designed for use in large conglomerate organizations that operated many different divisions.

For example, a large retailer may have chains of supermarkets, department stores, convenience stores, specialty stores and so on. These all represent different business portfolios – sometimes referred to as strategic business units – for the organization. Because it helps analyze and guide business portfolios, it is considered to be a portfolio model.

The BCG matrix was a simple way of plotting the firm’s different business portfolios onto the same matrix and providing some guidance to how the company’s resources should be allocated across the different business units.

The two dimensions of the BCG matrix
The BCG matrix considers two elements to form its four quadrants. These two dimensions are:

Growth rate of the market
Relative market share
The growth rate of the market dimension is used as a proxy measure of the attractiveness of the market, with high-growth markets being seen as more attractive and offering more potential and opportunity.

Relative market share is used as a surrogate of competitive strength. The larger the firm’s market share, relative to its largest competitor, the stronger the firm is in the marketplace.

Therefore, the BCG matrix combines a measure of market attractiveness with a measure of overall competitive strength in order to identify the quadrant of the model in which the firm or business unit is situated.


The four quadrants of the BCG matrix
Businesses in high growth markets, with a high relative market share are considered to be “stars”.
Those businesses operating in low growth markets, but also with a high relative market share are considered to be “cash cows”.
If a business is in a high-growth market but has a low relative market share, then the business is classified as a “question mark”.
The final quadrant is for businesses in low growth markets where they have a relatively low market share – these businesses are classified as “dogs” in the BCG matrix.
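The quadrant assignment described above is mechanical enough to express as a short lookup. A minimal Python sketch, using the common textbook cutoffs (10% market growth, relative share of 1.0) as assumptions rather than values taken from the text:

```python
# Classify a business unit into a BCG quadrant. The cutoff values are
# conventional textbook defaults, assumed for illustration only.

def bcg_quadrant(market_growth, relative_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Return the BCG quadrant for one business unit."""
    high_growth = market_growth >= growth_cutoff
    high_share = relative_share >= share_cutoff
    if high_growth and high_share:
        return "star"
    if not high_growth and high_share:
        return "cash cow"
    if high_growth and not high_share:
        return "question mark"
    return "dog"

print(bcg_quadrant(0.15, 1.5))  # high growth, high relative share
```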
General


There are a number of different types of models used in retail to forecast sales. In general, they can be categorized into one of the following:

Square 1: low probability of usefulness and low total cost
Aggregate Regression Models

How they work: Analyze the trade area as one big chunk using mathematical equations that contain information about the trade area (like population, income, competition, etc.). When you calculate the equation for any given trade area, the result is a sales forecast.

Pros: Relatively easy to build. Can be easy to run.

Cons: Less accurate than other techniques unless combined with a heuristic model (below).

Square 2: high probability of usefulness and high total cost
Disaggregate Regression Model

How they work: These are like aggregate regression models except you analyze the trade area in little pieces and add up the results to get a sales forecast.

Pros: Can be more accurate than aggregate regression models

Cons: Harder to build. May be harder to run than aggregate models

Heuristic Model

How they work: "If this, then that." If there's more than 20 thousand people within 2 miles, then if there are less than 4 competitors within 5 miles... And so on. These models can run the gamut from overly simplistic to extremely sophisticated and are often combined with regression models to say, "If it's like this, use this regression model. If it’s like that, use that regression model" and so on.

Pros: Can be very powerful, especially combined with other model types.

Cons: Can be overly simplistic
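The "if this, then that" routing that a heuristic model performs can be sketched as a few chained conditions. The thresholds and the rule names below are invented for illustration, not taken from any real system:

```python
# A toy heuristic router: pick which forecasting rule applies to a site
# based on simple population and competition tests. All thresholds and
# rule names are hypothetical.

def pick_forecast_rule(population_2mi, competitors_5mi):
    """Route a candidate site to a forecasting rule via simple heuristics."""
    if population_2mi > 20_000 and competitors_5mi < 4:
        return "high-demand regression model"
    if population_2mi > 20_000:
        return "competitive-market regression model"
    return "low-density regression model"

print(pick_forecast_rule(25_000, 2))  # dense trade area, few competitors
```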
Spatial Interaction Model

How they work: This is the only modeling technology that actually tries to model people's shopping behavior directly. The goal is to create mathematical equations that describe the flow of dollars or people from each neighborhood to each store. Once you find the math, you can put a new store into the model and recalculate. The model will then tell you how many dollars or people should flow to your proposed store.

Pros: Can be purchased commercially. Can implicitly provide estimates of sister store and competitive impacts.

Cons: Requires a very skilled person to run them. They need very accurate data about competitor locations and demand, or they will work poorly.
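One classical formulation of a spatial interaction model is the Huff model (an assumption here; the text does not name a specific formulation), in which a store's share of a neighborhood's patronage is proportional to its attractiveness divided by a power of travel distance:

```python
# A minimal Huff-style spatial interaction sketch: the probability that a
# shopper in one neighborhood patronizes store j is the store's
# distance-discounted attractiveness, normalized across all stores.

def huff_probabilities(attractiveness, distances, beta=2.0):
    """Patronage probabilities for one neighborhood across stores."""
    weights = [a / d ** beta for a, d in zip(attractiveness, distances)]
    total = sum(weights)
    return [w / total for w in weights]

# Two equally attractive stores; the second is twice as far away.
probs = huff_probabilities([100.0, 100.0], [1.0, 2.0])
print(probs)  # the nearer store captures 80% of patronage at beta=2
```

To forecast a proposed store, you would add its attractiveness and distance to the lists and recompute, exactly as the text describes.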

Square 3: high probability of usefulness and low total cost
Analog Model
How they work: This is a different kind of model. Instead of providing a sales forecast, it provides a list of your existing stores that have characteristics similar to your site, usually along with a score of how well they match. You can then look at the stores that match and see how well they perform.

Pros: Analog models are typically fairly easy to build and to run. They have the added benefit of being easy to explain to stakeholders.

Cons: They do not provide an actual sales forecast, although some people use an average of the matching stores’ sales to estimate one.
The question that people always ask the model builders is "How accurate is your model?" They usually answer with a number (like within ±10%). The question that should be asked by the model builder in return (but usually isn’t) is "How much does the quality of the store manager impact your sales?" Whatever number that is, the model builder’s answer should be higher. Many retail chains claim that the quality of manager can impact sales ±15% or more. In a grocery store case I heard from a reliable source, a bad store manager cut sales in half!

There are other factors that can drastically affect sales that can't be readily captured by models. For example, site characteristics are too complex to be fully considered by any model. In addition, most models use some measure of competition, usually from a 3rd party source. But the best of the best of these is only 85% accurate, implying 15% error. And almost no one is using this quality of data to build models; it's too expensive. So, unless the competitor data is field verified, the real error rate is worse.

Square 4: low probability of usefulness and high total cost – there is none.



4-Digits (abbreviation: 4-D) is a lottery in Singapore and Malaysia. Individuals play by choosing any number from 0000 to 9999. Then, twenty-three winning numbers are drawn each time. If one of the numbers matches the one that the player has bought, a prize is won. A draw is conducted to select these winning numbers. 4-Digits is a fixed-odds game.

Magnum 4D is the first legalized 4D operator licensed by the Malaysian Government to operate 4D. Soon thereafter, other lottery operators followed suit, as this is a very popular game in Malaysia and Singapore.

Singapore Pools is the sole provider of gambling games in Singapore. 4-D and lottery 6/45 are two of the most popular. A similar 4-D game with its prize structure fully revealed can be found in Taiwan.[1]

4-Digits is somewhat similar to "Pick 4" in the United States and Canada.
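Assuming the twenty-three winning numbers in a draw are all distinct (an assumption; the prize structure is not spelled out above), the chance that a single ticket matches at least one of them works out directly:

```python
# Back-of-envelope odds for the 4-D game: 10,000 possible numbers
# (0000-9999), 23 winning numbers per draw, assumed distinct.
from fractions import Fraction

TOTAL_NUMBERS = 10_000
WINNING_NUMBERS = 23

p_win = Fraction(WINNING_NUMBERS, TOTAL_NUMBERS)
print(p_win)  # 23/10000, i.e. 0.23% per ticket
```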



It is now time to take a holistic approach to your mobility strategy and determine the order in which you implement potential mobile apps, based on your business objectives. The “Business Value to Complexity” matrix provides a basic framework for this exercise. The intent is to rank business processes that make sense to mobile-enable against two criteria: implementation complexity and business value.

High complexity, low value: do not pursue
Low complexity, low value: best effort
High complexity, high value: pursue in time
Low complexity, high value: low-hanging fruit



Steve Mann describes four possible futures for societies with regard to surveillance using the veillance compass – a quadrant matrix.
1. The Surveillance-Sousveillance Distinction
The four possible futures arise from the intersection of two competing approaches to “veillance” technologies. These are the surveillant and sousveillant approaches, respectively. You may be familiar with the distinction already, or have a decent enough grasp of etymology to get the gist of it (“sur” means “from above”; “sous” means “from below”). Nevertheless, Mann offers a couple of competing sets of definitions in his work, and it’s worth talking about them both.

The first set of definitions focuses on the role of authority in the use of veillance technologies. It defines surveillance as any monitoring that is undertaken by a person or entity in some position of authority (i.e. to whom we are inclined/obliged to defer). The authority could be legal or social or personal in nature. This is the kind of monitoring undertaken by governments, intelligence agencies and big corporations like Facebook and Google (since they both have a kind of “social” authority). By way of contrast, sousveillance is any monitoring undertaken by persons and entities that are not in a position of authority. This is the kind of citizen-to-authority or peer-to-peer monitoring that is now becoming more common.

The second set of definitions shifts the focus away from “authorities” and onto activities and their participants. It defines surveillance as the monitoring of an activity by a person that is not a participant in that activity. This, once again, is the kind of monitoring undertaken by governments or businesses: they monitor protests or shopping activities without themselves engaging in those behaviours (though, of course, people employed in governments and businesses could be participants in other activities in which they themselves are surveilled). In contrast, sousveillance is monitoring of an activity that is undertaken by actual participants in the activity.

I’m not sure which of these sets is preferable, though I incline toward the first. The problem with the first one is that it relies on the slightly nebulous and contested concept of an “authority”. Is a small, local shop-owner with a CCTV camera in a position of authority over the rich businessperson who buys cigarettes in his store? Or does the power disparity turn what might seem in the first instance to be a case of surveillance into one of sousveillance? Maybe this is a silly example but it does help to illustrate some of the problems involved with identifying who the authorities are.

The second set of definitions has the advantage of moving away from the concept of authority and focusing on the less controversial concepts of “activities” and their “participants”. Still, I wonder whether that advantage is outweighed by other costs. If we stuck strictly to the participant/non-participant distinction then citizen-to-authority monitoring would seem to count as surveillance not sousveillance. For example protestors who record the behaviour of security forces would be surveilling them, not sousveilling them. You might think that’s fine — they’re just words after all — but I worry that it misses something of the true value of the sousveillance concept.

That’s why I tend to prefer the first set of definitions.

2. Four Types of Veillance Society
And the definitions matter because, as noted above, the surveillance-sousveillance distinction is critical to understanding the possible futures that are open to us. You have to imagine that surveillance and sousveillance represent two different dimensions or matrices along which future societies can vary. A society can have competing attitudes toward both surveillance and sousveillance. That is: they can reject both, embrace both, or embrace one and reject the other. The result is four possible futures, which can be represented by the two-by-two matrix below.

Let’s consider these four possible futures in more detail:

The Equiveillance Society: This is a society which embraces both surveillance and sousveillance. The authorities can watch over us with their machines of loving grace and we can watch over them with our smartphones, smartwatches and other smart devices (though there are questions to be asked about who really controls those technologies).
The Univeillance Society: This is a society which embraces sousveillance but resists surveillance. It’s not quite clear why it is called univeillance (except for the fact that it embraces one kind of veillance only, but then that would imply that a society that embraced surveillance only should have the same name, which it doesn’t). But the basic idea is that we accept all forms of peer-to-peer monitoring, but try to cut out monitoring by authorities.
The McVeillance Society: This is a society that embraces surveillance but resists sousveillance. Interestingly enough, this is happening already. There are a number of businesses that use surveillance technologies but try to prevent their customers or other ordinary citizens from using sousveillance technologies (like smartphone cameras). For example, in Ireland, the Dublin Docklands Development Authority tries to prevent photographs being taken in the streets of the little enclave of the city that it controls (if you are ever there, it seems like the streets are just part of the ordinary public highways, but in reality they are privately owned). The name “McVeillance” comes from Mann’s own experiences with McDonalds (which you can read about here).
The Counterveillance Society: This is a society that resists both types of veillance technology. Again, we see signs of this in the modern world. People try to avoid being caught by police speed cameras (and there are websites set up to assist this), having their important life events recognised by big data, or having their photographs taken on nights out.
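The two-by-two matrix described above can be captured as a simple lookup keyed by a society's stance toward each veillance type; a minimal sketch:

```python
# Mann's veillance compass as a two-by-two lookup. Keys are
# (embraces surveillance, embraces sousveillance).

VEILLANCE_COMPASS = {
    (True, True): "equiveillance",
    (False, True): "univeillance",
    (True, False): "McVeillance",
    (False, False): "counterveillance",
}

def society_type(embraces_surveillance, embraces_sousveillance):
    """Name the quadrant a society occupies on the veillance compass."""
    return VEILLANCE_COMPASS[(embraces_surveillance, embraces_sousveillance)]

print(society_type(True, False))  # surveillance without sousveillance
```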

The modern world is in a state of flux. It is only recently that surveillance and sousveillance technologies have become cheap and readily available. As a result we are lurching between these possibilities. Still, it is worth asking what we want the future to look like.


16 is the number of squares in the quadrant model. I do not think it is a coincidence that 16 is arbitrarily seen as a birthday of significance in American culture. For Jews it is 13, but 13 is also significant because 13 is the first square of the fourth quadrant.
My Super Sweet 16 is an MTV reality series documenting the lives of teenagers, usually in the United States, Canada and UK, who usually have wealthy parents who throw huge coming of age celebrations, which had a very exclusive target audience. Parties include the quinceañera (a sweet 15), the sweet 16, and other birthdays including a My Super Sweet 21 (which was broadcast during MTV's Spring Break party) and My Super Swag 18. The show premiered on January 18, 2005 and ended its run on June 15, 2008. The opening theme is "Sweet Sixteen" sung by Hilary Duff. The series had two spinoffs, Exiled and The Real Deal, which had both ended their runs by 2010.

The show has also covered a number of celebrity coming of age parties. Bow Wow, Sean Kingston, Aly and AJ, Chris Brown and Soulja Boy Tell Em have all had their parties featured on the show.




Performance art is a performance presented to an audience within a fine art context, traditionally interdisciplinary. Performance may be either scripted or unscripted, random or carefully orchestrated; spontaneous or otherwise carefully planned, with or without audience participation. The performance can be live or via media; the performer can be present or absent. It can be any situation that involves four basic elements: time, space, the performer's body or presence in a medium, and a relationship between performer and audience. Performance art can happen anywhere, in any type of venue or setting and for any length of time. The actions of an individual or a group at a particular place and in a particular time constitute the work.



The Road Less Traveled,[6] published in 1978, is Peck's best-known work, and the one that made his reputation. It is, in short, a description of the attributes that make for a fulfilled human being, based largely on his experiences as a psychiatrist and a person.

The book consists of four parts. In the first part Peck examines the notion of discipline, which he considers essential for emotional, spiritual, and psychological health, and which he describes as "the means of spiritual evolution". The elements of discipline that make for such health include the ability to delay gratification, accepting responsibility for oneself and one's actions, a dedication to truth, and "balancing". "Balancing" refers to the problem of reconciling multiple, complex, possibly conflicting factors that impact on an important decision—on one's own behalf or on behalf of another.

In the second part, Peck addresses the nature of love, which he considers the driving force behind spiritual growth. He contrasts his own views on the nature of love against a number of common misconceptions about love, including:

that love is identified with romantic love (he considers it a very destructive myth when it is solely relying on "feeling in love"),
that love is related to dependency,
that true love is linked with the feeling of "falling in love".
Peck argues that "true" love is rather an action that one undertakes consciously in order to extend one's ego boundaries by including others or humanity, and is therefore the spiritual nurturing—which can be directed toward oneself, as well as toward one's beloved.

In the third part Peck deals with religion, and the commonly accepted views and misconceptions concerning religion. He recounts experiences from several patient case histories, and the evolution of the patients' notion of God, religion, atheism—especially of their own "religiosity" or atheism—as their therapy with Peck progressed.

The fourth and final part concerns "grace", the powerful force originating outside human consciousness that nurtures spiritual growth in human beings. In order to focus on the topic, he describes the miracles of health, the unconscious, and serendipity—phenomena which Peck says:

nurture human life and spiritual growth,
are incompletely understood by scientific thinking,
are commonplace among humanity,
originate outside the conscious human will.
He concludes that "the miracles described indicate that our growth as human beings is being assisted by a force other than our conscious will" (Peck, 1978/1992,[6] p281).

Random House, where the then little-known psychiatrist first tried to publish his original manuscript, turned him down, saying the final section was "too Christ-y." Thereafter, Simon & Schuster published the work for $7,500 and printed a modest hardback run of 5,000 copies. The book took off only after Peck hit the lecture circuit and personally sought reviews in key publications. Later reprinted in paperback in 1980, The Road first made best-seller lists in 1984 – six years after its initial publication.[5]


In The Road Less Traveled,[6] Peck talked of the importance of discipline. He described four aspects of discipline:

Delaying gratification: Sacrificing present comfort for future gains.
Acceptance of responsibility: Accepting responsibility for one's own decisions.
Dedication to truth: Honesty, both in word and deed.
Balancing: Handling conflicting requirements. Scott Peck talks of an important skill for prioritizing between different requirements – bracketing.

The four stages of spiritual development[edit]
Peck postulates that there are four stages of human spiritual development:[12][13]

Stage I is chaotic, disordered, and reckless. Very young children are in Stage I. They tend to defy and disobey, and are unwilling to accept a will greater than their own. They are extremely egoistic and lack empathy for others. Many criminals are people who have never grown out of Stage I.
Stage II is the stage at which a person has blind faith in authority figures and sees the world as divided simply into good and evil, right and wrong, us and them. Once children learn to obey their parents and other authority figures, often out of fear or shame, they reach Stage II. Many so-called religious people are essentially Stage II people, in the sense that they have blind faith in God, and do not question His existence. With blind faith comes humility and a willingness to obey and serve. The majority of good, law-abiding citizens never move out of Stage II.
Stage III is the stage of scientific skepticism and questioning. A Stage III person does not accept things on faith but only accepts them if convinced logically. Many people working in scientific and technological research are in Stage III. They often reject the existence of spiritual or supernatural forces since these are difficult to measure or prove scientifically. Those who do retain their spiritual beliefs, move away from the simple, official doctrines of fundamentalism.
Stage IV is the stage where an individual starts enjoying the mystery and beauty of nature and existence. While retaining skepticism, he starts perceiving grand patterns in nature and develops a deeper understanding of good and evil, forgiveness and mercy, compassion and love. His religiousness and spirituality differ significantly from that of a Stage II person, in the sense that he does not accept things through blind faith or out of fear, but does so because of genuine belief, and he does not judge people harshly or seek to inflict punishment on them for their transgressions. This is the stage of loving others as yourself, losing your attachment to your ego, and forgiving your enemies. Stage IV people are labeled as Mystics.


Based on his experience with community building workshops, Peck says that community building typically goes through four stages:

Pseudocommunity: In the first stage, well-intentioned people try to demonstrate their ability to be friendly and sociable, but they do not really delve beneath the surface of each other's ideas or emotions. They use obvious generalities and mutually established stereotypes in speech. Instead of conflict resolution, pseudocommunity involves conflict avoidance, which maintains the appearance or facade of true community. It also serves only to maintain positive emotions, instead of creating a safe space for honesty and love through bad emotions as well. While they still remain in this phase, members will never really obtain evolution or change, as individuals or as a group.
Chaos: The first step towards real positivity is, paradoxically, a period of negativity. Once the mutually sustained facade of bonhomie is shed, negative emotions flood through: Members start to vent their mutual frustrations, annoyances, and differences. It is a chaotic stage, but Peck describes it as a "beautiful chaos" because it is a sign of healthy growth. (This relates closely to Dabrowski's concept of disintegration).
Emptiness: In order to transcend the stage of "Chaos", members are forced to shed that which prevents real communication. Biases and prejudices, need for power and control, self-superiority, and other similar motives which are only mechanisms of self-validation and/or ego-protection, must yield to empathy, openness to vulnerability, attention, and trust. Hence this stage does not mean people should be "empty" of thoughts, desires, ideas or opinions. Rather, it refers to emptiness of all mental and emotional distortions which reduce one's ability to really share, listen to, and build on those thoughts, ideas, etc. It is often the hardest step in the four-level process, as it necessitates the release of patterns which people develop over time in a subconscious attempt to maintain self-worth and positive emotion. While this is therefore a stage of "Fana (Sufism)" in a certain sense, it should be viewed not merely as a "death" but as a rebirth—of one's true self at the individual level, and at the social level of the genuine and true Community.
True community: Having worked through emptiness, the people in the community enter a place of complete empathy with one another. There is a great level of tacit understanding. People are able to relate to each other's feelings. Discussions, even when heated, never get sour, and motives are not questioned. A deeper and more sustainable level of happiness obtains between the members, which does not have to be forced. Even and perhaps especially when conflicts arise, it is understood that they are part of positive change.
The four stages of community formation are somewhat related to a model in organization theory for the five stages that a team goes through during development. These five stages are:

Forming where the team members have some initial discomfort with each other, but nothing comes out in the open. They are insecure about their role and position with respect to the team. This corresponds to the initial stage of pseudocommunity.
Storming where the team members start arguing heatedly, and differences and insecurities come out in the open. This corresponds to the second stage given by Scott Peck, namely chaos.
Norming where the team members lay out rules and guidelines for interaction that help define the roles and responsibilities of each person. This corresponds to emptiness, where the community members think within and empty themselves of their obsessions in order to be able to accept and listen to others.
Performing where the team finally starts working as a cohesive whole, and effectively achieves the tasks set for itself. In this stage individuals are aided by the group as a whole, where necessary, in order to move further collectively than they could achieve as a group of separated individuals.
Transforming This corresponds to the stage of true community. This represents the stage of celebration, and when individuals leave, as they invariably must, there is a genuine feeling of grief, and a desire to meet again. Traditionally, this stage was often called "Mourning".
It is in this third stage that Peck's community-building methods differ in principle from team development. While teams in business organizations need to develop explicit rules, guidelines and protocols during the norming stage, the emptiness stage of community building is characterized, not by laying down the rules explicitly, but by shedding the resistance within the minds of the individuals.

Peck started the Foundation for Community Encouragement (FCE) to promote the formation of communities, which, he argues, are a first step towards uniting humanity and saving us from self-destruction.

The Blue Heron Farm is an intentional community in central North Carolina, whose founders stated that they were inspired by Peck's writings on community. Peck himself had no involvement with this project.



QMRThe Carhart four-factor model is an extension of the Fama–French three-factor model including a momentum factor, also known in the industry as the MOM factor (monthly momentum).[1] Momentum in a stock is described as the tendency for the stock price to continue rising if it is going up and to continue declining if it is going down. The MOM can be calculated by subtracting the equal weighted average of the highest performing firms from the equal weighted average of the lowest performing firms, lagged one month (Carhart, 1997). A stock is showing momentum if its prior 12-month average of returns is positive. Similar to the three factor model, the momentum factor is defined by a self-financing portfolio of (long positive momentum) + (short negative momentum).

The four factor model is commonly used as an active management and mutual fund evaluation model.

Notes on risk adjustment

Three commonly used methods to adjust a mutual fund’s returns for risk are:

1. The market model:

EXR_t = \alpha^J + \beta_{mkt}\mathit{EXMKT}_t + \epsilon_t
The intercept in this model is referred to as the “Jensen’s alpha”

2. The Fama-French three-factor model:

EXR_t = \alpha^{FF} + \beta_{mkt}\mathit{EXMKT}_t + \beta_{HML}\mathit{HML}_t + \beta_{SMB}\mathit{SMB}_t + \epsilon_t
The intercept in this model is referred to as the “three-factor alpha”

3. The Carhart four-factor model:

EXR_t = \alpha^c + \beta_{mkt}\mathit{EXMKT}_t + \beta_{HML}\mathit{HML}_t + \beta_{SMB}\mathit{SMB}_t + \beta_{UMD}\mathit{UMD}_t + \epsilon_t
The intercept in this model is referred to as the “four-factor alpha”

Where EXR_t is the monthly return to the asset of concern in excess of the monthly t-bill rate. We typically use these three models to adjust for risk. In each case, we regress the excess returns of the asset on an intercept (the alpha) and some factors on the right hand side of the equation that attempt to control for market-wide risk factors. The right hand side risk factors are: the monthly return of the CRSP value-weighted index less the risk free rate (EXMKT), the monthly premium of the book-to-market factor (HML), the monthly premium of the size factor (SMB), and the monthly premium on winners minus losers (UMD) from Fama-French (1993) and Carhart (1997).

A fund has excess returns if it has a positive and statistically significant alpha.

SMB is a zero-investment portfolio that is long on small capitalization (cap) stocks and short on big cap stocks. Similarly, HML is a zero-investment portfolio that is long on high book-to-market (B/M) stocks and short on low B/M stocks, and UMD is a zero-cost portfolio that is long previous 12-month return winners and short previous 12-month loser stocks.
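The four-factor regression can be estimated by ordinary least squares. The sketch below uses synthetic factor series (random stand-ins, not real CRSP or Fama-French data) to recover an assumed alpha and betas:

```python
# Estimate the four-factor alpha and betas by OLS on synthetic data.
# The factor series, true betas, and alpha here are invented for the demo.
import numpy as np

rng = np.random.default_rng(0)
T = 240  # months of data

# Synthetic monthly factor returns: columns are EXMKT, HML, SMB, UMD.
factors = rng.normal(0.0, 0.04, size=(T, 4))
true_betas = np.array([1.1, 0.3, 0.2, -0.1])
true_alpha = 0.002  # assumed monthly four-factor alpha

# Excess fund returns generated from the four-factor model plus noise.
exr = true_alpha + factors @ true_betas + rng.normal(0.0, 0.01, size=T)

# Regress excess returns on an intercept and the four factors.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, exr, rcond=None)
alpha_hat, betas_hat = coef[0], coef[1:]
print(alpha_hat, betas_hat)
```

With real data one would also test whether the estimated alpha is statistically significant before claiming excess returns, as the text notes.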





The four functions of AGIL are divided into external and internal problems, and further into instrumental and consummatory problems. External problems include the use of natural resources and making decisions to achieve goals, whereas keeping the community integrated and maintaining the common values and practices over succeeding generations are considered internal problems. Furthermore, goal attainment and the integral function belong to the consummatory aspect of the systems.[2]

It is common to use a table to illustrate the four functions and their differences in spatial and temporal orientation. (The following only addresses the AGIL component examples for the social system—for example, "political office" is not a unit for the categories on the action-system level).

                     Instrumental functions              Consummatory functions
External problems    Adaptation                          Goal-attainment
                     - natural resources                 - political offices
                     - commodity production              - common goals
Internal problems    Latency (or Pattern Maintenance)    Integration
                     - family                            - religious systems
                     - schools                           - media
Each of the four individual functional necessities is further divided into four sub-categories. The four sub-categories are the same four functions as the major four AGIL categories and so on. Hence one subsystem of the societal community is the category of "citizenship," which is a category we today would associate with the concept of civil society. In this way, citizenship (or civil society) represents, according to Parsons, the goal-attainment function within the subsystem of the Societal Community. For example, a community's adaption to the economic environment might consist of the basic "industrial" process of production (adaption), political-strategic goals for production (goal-attainment), the interaction between the economical system and the societal community, which integrates production mechanisms both in regard to economic as well as societal factors (integration), and common cultural values in their "selective" relevance for the societal-economic interchange process (latency (or Pattern Maintenance)). Each of these systemic processes will (within the scope of the cybernetic hierarchy) be regulated by what Talcott Parsons calls generalized symbolic media. Each system level of the general action-paradigm has its own set of generalized symbolic media (so that the set of generalized symbolic media for the social system is not identical with those of the action system or those of the human condition paradigm). In regard to the social system, there are the following four generalized symbolic media:

A: (Economy): Money. G: (Political system): Political power. I: (Societal Community): Influence. L: (Fiduciary system): Value-commitment.[3]

He creates these four functions through a two-by-two quadrant matrix.




Margaret Archer objected to the inseparability of structure and agency in structuration theory.[10] She proposed a notion of dualism rather than "duality of structure". She primarily examined structural frameworks and the action within the limits allowed by those conditions. She combined realist ontology and called her methodology analytical dualism. Archer maintained that structure precedes agency in social structure reproduction and analytical importance, and that they should be analysed separately. She emphasised the importance of temporality in social analysis, dividing it into four stages: structural conditioning, social interaction, its immediate outcome and structural elaboration. Thus her analysis considered embedded "structural conditions, emergent causal powers and properties, social interactions between agents, and subsequent structural changes or reproductions arising from the latter."[2] Archer criticised structuration theory for denying time a place because of the inseparability between structure and agency.



The contemporary discipline of sociology is theoretically multi-paradigmatic[74] as a result of the contentions of classical social theory. In Randall Collins' well-cited survey of sociological theory[75] he retroactively labels various theorists as belonging to four theoretical traditions: Functionalism, Conflict, Symbolic Interactionism, and Utilitarianism.[76] Modern sociological theory descends predominately from functionalist (Durkheim) and conflict-centered (Marx and Weber) accounts of social structure, as well as the symbolic interactionist tradition consisting of micro-scale structural (Simmel) and pragmatist (Mead, Cooley) theories of social interaction. Utilitarianism, also known as Rational Choice or Social Exchange, although often associated with economics, is an established tradition within sociological theory.[77][78] Lastly, as argued by Raewyn Connell, a tradition that is often forgotten is that of Social Darwinism, which brings the logic of Darwinian biological evolution and applies it to people and societies.[79] This tradition often aligns with classical functionalism. It was the dominant theoretical stance in American sociology from around 1881 to 1915[80] and is associated with several founders of sociology, primarily Herbert Spencer, Lester F. Ward and William Graham Sumner. Contemporary sociological theory retains traces of each of these traditions and they are by no means mutually exclusive.


The structuralist movement originated primarily from the work of Durkheim as interpreted by two European scholars. Anthony Giddens' theory of structuration draws on the linguistic theory of Ferdinand de Saussure and the work of the French anthropologist Claude Lévi-Strauss. In this context, 'structure' refers not to 'social structure' but to the semiotic understanding of human culture as a system of signs. One may delineate four central tenets of structuralism: First, a structure determines the position of each element of a whole. Second, structuralists believe that every system has a structure. Third, structuralists are interested in 'structural' laws that deal with coexistence rather than change. Finally, structures are the 'real things' beneath the surface or the appearance of meaning.


Here is a model of cultural differences, with two major axes:

Egalitarian (Decentralized) vs. Hierarchical (Centralized)
Person (Informal) vs. Task (Formal)

Leading to the following types (and orientations):

Incubator (Fulfilment) [Egalitarian/Person]
Family (Power) [Hierarchical/Person]
Guided Missile (Project) [Egalitarian/Task]
Eiffel Tower (Role) [Hierarchical/Task]
Trompenaars’ research later expanded these into seven cultural differences (universalism vs. particularism, individualism vs. communitarianism, neutral vs. emotional, specific vs. diffuse, achievement vs. ascription, sequential vs. synchronic, and internal vs. external control)! I’m not clear on how the four map into the seven.

Another model of cultural dimensions was developed by Geert Hofstede, who first found four dimensions (power distance index, individualism vs. collectivism, uncertainty avoidance index, and masculinity vs. femininity), and later increased these to six (adding long-term vs. short-term, and indulgence vs. restraint). Again, I’m unsure what the differences are between Trompenaars’ and Hofstede’s models.

Trompenaars’ model of four cultures is somewhat similar to another fourfold I found in the article “How to Build Scenarios”. It consists of two axes: individual vs. community and fragmentation vs. coherence.

Ecotopia [Community/Fragmented]
I Will [Individual/Fragmented]
Consumerland [Individual/Coherent]
New Civics [Community/Coherent]
This fourfold is also mentioned in the book “The Power of the 2×2 Matrix”, which looks quite interesting. I think it is generally geared towards business decision applications, but has a compendium of various 2×2 matrices that appear to be broadly useful.

Also, the ChangingMinds.org website looks like it has a wealth of models and introductory information about them (and not only those with four aspects).

Fons Trompenaars is another Dutch researcher of international culture. The four diversity cultures form a 2×2 model that is much simpler than Trompenaars and Hampden-Turner's fuller set of cultural factors.

The four diversity cultures
This model assumes two major dimensions: person vs. task, and centralised (which is also assumed to be hierarchical) vs. decentralised (which is assumed to be more egalitarian). Both of these dimensions are very common measures and can often be easily determined.

Egalitarian/decentralised + Person/Informal style: Incubator (fulfilment-oriented)
Egalitarian/decentralised + Task/Formal style: Guided Missile (project-oriented)
Hierarchical/centralised + Person/Informal style: Family (power-oriented)
Hierarchical/centralised + Task/Formal style: Eiffel Tower (role-oriented)

Factors in each model
Relationship between employees
Family: diffuse relationship to organic whole to which one is bonded

Eiffel Tower: specific role in mechanical system of required interactions

Guided Missile: specific tasks in cybernetic system targeted upon shared objectives

Incubator: diffuse spontaneous relationships growing out of shared creative processes

Attitude to authority
Family: status is ascribed to parent figures who are close and all-powerful

Eiffel Tower: status is ascribed to superior roles who are distant yet powerful

Guided Missile: status is achieved by project group members who contribute to the targeted goal

Incubator: status is achieved by individuals exemplifying creativity & growth

Ways of thinking and learning
Family: intuitive, holistic, lateral and error correcting

Eiffel Tower: logical, analytical, vertical and rationally efficient

Guided Missile: problem centred, professional, practical, cross disciplinary

Incubator: process oriented, creative, ad-hoc, inspirational

Attitudes to people
Family: as family members

Eiffel Tower: human resources

Guided Missile: specialists and experts

Incubators: co-creators

Managing change
Family: “Father” changes course

Eiffel Tower: change rules and procedures

Guided Missile: shift aim as target moves

Incubator: improvise and attune

So what?
When working in other countries and with people from overseas, first research where their organisational culture falls along these dimensions, then check whether the individuals you are dealing with actually fit that pattern. By default, and when working with national groups, take these factors into account.


In grounded theory's coding process there are four stages of analysis:

Codes: identifying anchors that allow the key points of the data to be gathered
Concepts: collections of codes of similar content that allow the data to be grouped
Categories: broad groups of similar concepts that are used to generate a theory
Theory: a collection of categories that details the subject of the research


The fourfold pattern of risk prospects of prospect theory is put in a quadrant with two dyads creating the four squares

To see how Prospect Theory can be applied, consider the decision to buy insurance. Assume the probability of the insured risk is 1%, the potential loss is $1,000 and the premium is $15. If we apply prospect theory, we first need to set a reference point. This could be the current wealth or the worst case (losing $1,000). If we set the frame to the current wealth, the decision would be to either

1. Pay $15 for sure, which yields a prospect-utility of v(−15),

OR

2. Enter a lottery with possible outcomes of $0 (probability 99%) or −$1,000 (probability 1%), which yields a prospect-utility of π(0.01) × v(−1000) + π(0.99) × v(0) = π(0.01) × v(−1000).

According to prospect theory,

π(0.01) > 0.01, because low probabilities are usually overweighted;
v(−15)/v(−1000) > 0.015, by the convexity of the value function in losses.

The comparison between π(0.01) and v(−15)/v(−1000) is not immediately evident. However, for typical value and weighting functions, π(0.01) > v(−15)/v(−1000), and hence π(0.01) × v(−1000) < v(−15). That is, a strong overweighting of small probabilities is likely to undo the effect of the convexity of v in losses, making the insurance attractive.
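This comparison can be made concrete with a small numeric sketch. The functional forms and parameter values below (α = 0.88, λ = 2.25, γ = 0.69) are the commonly cited Tversky–Kahneman estimates, used here as illustrative assumptions rather than values given in the example itself:

```python
# Numeric sketch of the insurance example, assuming the Tversky-Kahneman
# functional forms and their commonly cited parameter estimates.
# These specific values are illustrative assumptions, not from the text.

ALPHA = 0.88   # curvature of the value function
LAM = 2.25     # loss-aversion coefficient
GAMMA = 0.69   # probability-weighting curvature for losses

def value(x):
    """Value function v: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** ALPHA

def weight(p):
    """Decision weight pi(p): overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

u_insure = value(-15)                    # option 1: pay the premium for sure
u_lottery = weight(0.01) * value(-1000)  # option 2: face the 1% chance of loss

print(f"pi(0.01) = {weight(0.01):.4f} (overweighted, since > 0.01)")
print(f"v(-15) = {u_insure:.1f}, pi(0.01)*v(-1000) = {u_lottery:.1f}")
print("buy insurance" if u_insure > u_lottery else "skip insurance")
```

With these parameters the sure premium's prospect-utility (about −24) exceeds the lottery's (about −39), so the insurance is chosen, matching the conclusion above.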

If we set the frame to −$1,000, we have a choice between v(985) and π(0.99) × v(1000). In this case, the concavity of the value function in gains and the underweighting of high probabilities can also lead to a preference for buying the insurance.

The interplay of overweighting of small probabilities and concavity-convexity of the value function leads to the so-called fourfold pattern of risk attitudes: risk-averse behavior when gains have moderate probabilities or losses have small probabilities; risk-seeking behavior when losses have moderate probabilities or gains have small probabilities.

Below is an example of the fourfold pattern of risk attitudes. The first item in each quadrant shows an example prospect (e.g. a 95% chance to win $10,000 is a high-probability gain). The second item shows the focal emotion that the prospect is likely to evoke. The third item indicates how most people would behave given the prospect (either risk averse or risk seeking). The fourth item states the expected attitudes of a potential defendant and plaintiff in discussions of settling a civil suit.[4]

High probability (certainty effect), gains: 95% chance to win $10,000 or 100% chance to obtain $9,499 (95% × $10,000 = $9,500 > $9,499). Fear of disappointment. RISK AVERSE. Accept unfavorable settlement of 100% chance to obtain $9,499.

High probability (certainty effect), losses: 95% chance to lose $10,000 or 100% chance to lose $9,499 (95% × −$10,000 = −$9,500 < −$9,499). Hope to avoid loss. RISK SEEKING. Accept unfavorable settlement of 95% chance to lose $10,000.

Low probability (possibility effect), gains: 5% chance to win $10,000 or 100% chance to obtain $501 (5% × $10,000 = $500 < $501). Hope of large gain. RISK SEEKING. Accept unfavorable settlement of 5% chance to win $10,000.

Low probability (possibility effect), losses: 5% chance to lose $10,000 or 100% chance to lose $501 (5% × −$10,000 = −$500 > −$501). Fear of large loss. RISK AVERSE. Accept unfavorable settlement of 100% chance to lose $501.
Probability distortion means that people do not weight probabilities uniformly between 0 and 1. Low probabilities are over-weighted (a person is over-concerned with that outcome), while medium-to-high probabilities are under-weighted (a person is not concerned enough with that outcome). The exact point at which probability shifts from over-weighted to under-weighted is arbitrary, though around probability = .33 is a reasonable estimate. A person values probability = .01 much more than probability = 0 (so .01 is over-weighted), yet assigns about the same value to probability = .4 and probability = .5. Likewise, the value of probability = .99 is much less than the value of probability = 1, a sure thing (so .99 is under-weighted). More formally, probability distortion implies π(p) + π(1 − p) < 1, where π(p) is the decision weight in prospect theory.[5]
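These claims can be checked numerically with the commonly used Tversky–Kahneman weighting function; the parameter value γ = 0.61 below is an assumed illustration, not a figure from the text:

```python
# Minimal numeric check of probability distortion, using the common
# Tversky-Kahneman weighting function w(p) = p^g / (p^g + (1-p)^g)^(1/g).
# The parameter gamma = 0.61 is an assumed illustrative value.

GAMMA = 0.61

def pi(p):
    """Decision weight pi(p)."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

print(pi(0.01))             # well above 0.01: low probability over-weighted
print(pi(0.99))             # well below 0.99: high probability under-weighted
print(pi(0.5) - pi(0.4))    # smaller than 0.1: mid-range gaps are compressed
print(pi(0.01) + pi(0.99))  # below 1: subcertainty, pi(p) + pi(1-p) < 1
```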



The Quadruplex telegraph is a type of electrical telegraph which allows a total of four separate signals to be transmitted and received on a single wire at the same time (two signals in each direction.) Quadruplex telegraphy thus implements a form of multiplexing.

The technology was invented by American inventor Thomas Edison, who sold the rights to Western Union in 1874 for the sum of $10,000.

The problem of sending two signals simultaneously in opposite directions on the same wire had been solved previously by Julius Wilhelm Gintl and improved to commercial viability by J. B. Stearns; Edison added the ability to double the number in each direction.

To send two signals in a single direction at the same time, the quadruplex telegraph used one signal to vary the absolute strength or voltage of the signal (amplitude modulation) and the other signal to vary the phase (polarity) of the line (phase modulation), i.e., the direction of current flow imposed upon the wire.[1]

Today this concept is known as polar modulation, considering amplitude and phase as radius and angle in polar coordinates.
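As a rough sketch of the idea (not Edison's actual circuit), one bit can ride on the sign of the line level while a second rides on its magnitude; the specific levels of 1 and 3 "units" below are illustrative assumptions:

```python
# Toy sketch of the quadruplex principle: two independent bits share one
# line, one encoded in the polarity (direction of current) and the other
# in the amplitude (strength of current). The levels are illustrative.

def encode(polarity_bit, amplitude_bit):
    """Combine two bits into one signed line level."""
    magnitude = 3 if amplitude_bit else 1             # amplitude channel
    return magnitude if polarity_bit else -magnitude  # polarity channel

def decode(level):
    """Recover both bits from the line level."""
    return level > 0, abs(level) > 2

# Each of the four bit combinations maps to a distinct level and back.
for a in (False, True):
    for b in (False, True):
        level = encode(a, b)
        print(a, b, "->", level, "->", decode(level))
```

Because the four (polarity, amplitude) combinations map to four distinct line levels, a receiver sensitive only to direction and a receiver sensitive only to strength can each recover their own bit independently.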




Four-player chess (also known as Four-handed, Four-man, or Four-way chess) is a family of chess variants typically played with four people. A special board made of standard 8×8 squares with an additional 3 rows of 8 cells extending from each side is common. Four sets of differently colored pieces are needed to play these variants. Four-player chess follows the same basic rules as regular chess. There are many different rule variations; most variants, however, share the same board and similar piece setup.



