Saturday, January 25, 2020

The Positivist Research Paradigm

1. Introduction

A paradigm can be defined as a set of shared assumptions about some aspect of the world. A research paradigm directs our approach towards research by defining the ontology and epistemology of our research. That is, a paradigm denotes its members' shared premises regarding the nature of reality, the purpose of research and the form of knowledge it provides (Oates, 2006:282; Lee, 2004:5).

Lee (2004:5-6) notes that research paradigms can be separated by their various ontologies and epistemologies. A paradigm's ontology encapsulates the researcher's view of what the real world is. An ontology flows into one or more epistemologies. An epistemology is the over-arching process by which a school of thought performs its logical and empirical work. Epistemologies are usually labeled as either quantitative or qualitative. An epistemology is in turn divided into several lower-level methodologies, a methodology being the more specific manner in which research is conducted. The devices defined in each methodology are called methods.

The positivistic research paradigm, or scientific method, is an approach towards research founded on the premise that our world is defined by a set of regular laws or patterns, and that we can investigate these laws objectively (Oates, 2006:283). Lee (2004:8) defines the positivist paradigm as one in which theory is typically provided as a set of related variables, expressed in some form of formal logic and proven empirically to be significant. Positivism is a term used to characterize a specific research position in which scientific theory is grounded on objective empirical observation. Positivism offers predictions based on the knowledge of laws that connect specific outcomes with specific initial conditions (Romm, 1991:1).

2. Comte and Popper on positivism

The positivistic school of thought can be found in the early work of authors such as Bacon, Galileo and Newton (Oates, 2006:283). Auguste Comte and Karl Popper contributed significantly towards systematizing, clarifying and formalizing the arguments posed by these earlier authors (Romm, 1991:1; Lee, 2004:8).

Comte was born in 1798, just after the French Revolution, which characterized a period of social and political revolt against aristocratic rule in Europe. At this time positivistic philosophy had already filtered down to the physical sciences, but it had yet to reach the social sciences. Comte became concerned with finding theoretical and practical solutions to the social anarchy of the period. He argued that social research would only be able to serve as a moral compass if it were to become a science (Romm, 1991:10-11).

Popper, born in 1902, grew up in a socialist Viennese society which was characterized by doctrinaire views as opposed to critical thinking. Popper was intrigued by Einstein's approach to theorizing: Einstein regarded his own theory as plausible only if it survived critical tests. Popper became convinced that the only way to build strong theory was to define critical tests that could refute the theory but never verify it (Romm, 1991:28-29).

Romm (1991:9-97) defines positivism with reference to the original writings of Comte (1975) and Popper (1992). He discusses this philosophy on the grounds of its definition of knowledge, the logic that governs its investigation, the methods used in investigation and the practical utility of knowledge.

2.1. The definition of knowledge

Comte criticized the theological and metaphysical views of the world.
According to the theological view, all abnormalities in the universe are the direct and conscious intervention of a supernatural agent, while the metaphysical view describes all phenomena as the reaction of some abstract force, real entity or personified abstraction being invoked. Comte regarded both views as untrue and incompatible with science. Comte defined phenomena as being governed by a set of natural laws which, if known, can be used to predict outcomes. These natural laws state under which circumstances we can expect to encounter a certain outcome. We can learn these laws by analyzing the circumstances that produce an outcome and drawing inferences about its succession. Only by asking questions about these natural laws can we create knowledge. Asking questions about first-and-final truths is futile, because this is beyond the reach of human comprehension.

Popper agrees with Comte that a natural law is an unvarying regularity that defines the outcome that flows from a certain set of circumstances. Knowledge is added by uncovering these laws operating in the respective fields of enquiry. According to Popper, scientists should not be sidetracked by the essentialist meaning of things (first-and-final truths) but should rather observe occurrences in the world in order to find true theories and descriptions of the world. Popper also adds that even the formulation and falsification of untrue theories advances knowledge: by discovering mistakes we better approximate truths.

2.2. The logic that governs its investigation

Comte argues that observed facts are the only basis for speculation. We should observe and reason about facts to form knowledge, rather than practice sterile empiricism. Sound theorizing should guide our observations; science therefore is a cycle of theorizing, observing and building theories. The process of building knowledge starts by deducing or inducing a hypothesis from general theory or specialized facts respectively. Induction is the logical formation of generalized theory from specialized consequences. For example: every life form we know of depends on liquid water to exist; therefore, all life depends on liquid water to exist. Deduction is the logical formation of specialized consequences from generalized theory. A popular example: all men are mortal; Socrates is a man; therefore, Socrates is mortal.

Comte (1975) describes a hypothesis as a provisional supposition, altogether conjectural in the first instance, with regard to some of the notions which are the object of enquiry. Simply put, a hypothesis states the anticipated result of undertaking a scientific enquiry. A hypothesis is valid if it is able to accurately predict what it is proposing given the initial set of circumstances. Science therefore is the activity of observing and disclosing the new observable consequences that confirm or invalidate our primitive supposition. We constantly incorporate new knowledge by making new observations or more profound meditations that either refute or confirm our hypotheses. Through repeated scientific endeavors knowledge comes to approximate reality.

Popper's definition of a hypothesis is similar to that of Comte, but he differs in his approach to validating a hypothesis. Popper criticizes Comte, arguing that our experience can only falsify our theories, not validate them. He postulates that a statement can only be verified as not being untrue, as opposed to being true. He argues that theories can never be validated, but only corroborated.
A theory is corroborated if we are, based on experience to date, unable to falsify it. The strength of a hypothesis lies in its openness to being tested against observations made. Popper describes science, then, as the formulation of testable theories which cannot be falsified through experience. Popper's work also introduces the additional idea of probability statements, which are hypotheses with some form of probability of outcome attached. Logically these statements can never be proven to be untrue. Popper argues that such hypotheses can be corroborated if they reasonably present all possible outcomes and if they cannot be falsified given reasonable and fair samples.

Unlike Comte, Popper does not follow the notion that theories can be induced from specialized facts. With reference to the work of David Hume (1748), Popper argues that induction cannot be justified rationally. We should not go from fact to theory, but rather deduce from our hypothesis lower-level statements which are individually testable hypotheses, and which, when falsified, prove our original hypothesis wrong. Popper argues that we should ensure purity and objectivity in our research by subjecting our decision (on whether our most basic lower-level statements should be accepted) to peer criticism. Through criticism science becomes unbiased and detached from individuals.

2.3. The methods used in investigation

Comte argues that we should test our hypotheses by observing how they hold in reality. Direct observation is when we look at phenomena before our eyes; for example, in astronomy we have observed that planets are ellipsoidal, flattened at both ends. Observation by experiment is when we observe how phenomena react to artificially modified circumstances; for example, in physics we experiment with gravity by having different particles fall to the ground. Comparison is when we observe a series of analogous cases in which the phenomenon is more and more simplified; for example, imagine comparing the same chemical fluid under different combinations of pressure and temperature.

Popper also distinguishes between experiments and observations as the two main positivistic methods of research. He does, however, disagree that comparison is a method on its own, arguing instead that it is inherent to the other two methods. In an experiment, for example, a researcher compares the artificially induced results with the results under normal conditions. Popper and Comte (both referencing Francis Bacon) argue that empirical methods are superior as they provide objectivity to researchers, which ultimately removes bias from the science. They do, however, mention that observation may take place through all five senses, and though it might be possible to objectively measure an observed distance, it might be less possible to objectively measure a smell. Qualifying these abstract observations should be done in a way that is unambiguous. For example, the distinct rotten-egg smell of H2S is widely cited in modern scientific literature.

2.4. The practical utility of knowledge

Comte theorized that once we know a certain outcome will always occur given the conditions presented, we are able to produce the outcomes we want. Theories formed for truly scientific purposes will result in knowledge being acquired, and eventually lead to practical uses. If science is able to furnish the theoretical basis for practical action, Comte hoped, we will be able to direct social outcomes.
We are able to use knowledge of the laws that govern society to correct the negative externalities in the world. Popper argues that knowledge allows us to predict on the basis of engineering the initial conditions. With the knowledge that science provides, we can plan to make our society a better and more reasonable one. We should use piecemeal tinkering (Popper's term for policy that is aimed at singular results) to mitigate the unavoidable results of change, rather than striving towards an ideal.

3. Discussion of positivism

Romm (1991:55) defines positivism as the belief in logico-deductive theory as the idealized conception of scientific theory. Many researchers accept the principles of the positivistic approach without explicitly noting positivism as their ontology. Positivistic research tries to find cause-and-effect relationships between dependent and independent variables in order to make predictions about our reality. According to the positivistic paradigm, science should seek to find all the regular laws or patterns in our universe. These laws and patterns exist independently of any individual cognition. We can carry out experiments or observe reality to determine cause-and-effect relationships and test hypotheses regarding these relationships. The aim of science is to explain the variation in the dependent variable with reference to the variation in the independent variable (Romm, 1991:57; Lee, 2004:8; Oates, 2006:284).

Our hypotheses can either be refuted by empirical investigation or corroborated. Some hypotheses will seem to be true for all observations made, and after reasonable peer review we can accept them to be true. If something is found to be false just once, it is false. In the positivistic ontology, theories and explanations should be seen as the best knowledge that approximates reality at the current time (Romm, 1991:52; Oates, 2006:285).

Modern positivism is seen as the cycle between induction and deduction (Romm, 1991:61):
o Derive hypotheses from more general statements
o Test these hypotheses through observation
o Generate empirical generalizations
o Induce theoretical principles, which should again be tested

Our observations should be tested empirically. Romm (1991:60) notes that data collected should not be treated as a formless mass; neither should theoretical categories be imposed on the data a priori. Theoretical notions become grounded in empirical observation, and data is offered theoretical treatment. Oates (2006:288) refers to this in terms of internal validity and external validity: the data generated should be designed to provide the necessary insight into the research topic under observation, as well as be applicable in a more general context.

According to Oates (2006:285), the techniques that lie at the center of positivistic research are:
Reductionism: breaking complex things into smaller things that are easier to study.
Repeatability: researchers don't rely on the results of just one experiment; they repeat the experiments many times to be sure that their first set of results was not just a fluke.
Refutation: if other researchers can't repeat an experiment and get the same results as the original researchers, they refute the hypothesis. The more a hypothesis can stand up to tests designed to refute it, the stronger it is.

Oates (2006:33) lists the following possible research strategies: survey, design and creation, experiment, case study, action research, ethnography and interviews.
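To make the repeatability and refutation techniques concrete, here is a small sketch (our own hypothetical example, not drawn from Oates, Romm or Lee). A conjectured universal law is exposed to many repeated observations: a single contrary instance falsifies it, while any number of agreeing instances only corroborates it, exactly in Popper's sense.

    import random

    random.seed(0)  # reproducible illustration

    def observe():
        # Stand-in for one experiment; the distribution is a
        # hypothetical assumption chosen purely for illustration.
        return random.gauss(1.0, 0.5)

    # Conjectured law: "every observation is positive."
    # Repeatability: we do not rely on a single trial.
    falsified = any(observe() <= 0 for _ in range(1000))

    print("falsified" if falsified else "corroborated so far, never proven")

Note the asymmetry in the last line: the loop can end the hypothesis for good, but it can never finish confirming it.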
Romm (1991:67) suggests that the experiment and the survey are the favored methods of observation within the positivistic epistemology. The research paradigm in question is not determined by the research strategy used, but rather by the shared assumptions about how to view the world. Oates (2006:286) names the following characteristics of the positivistic research ontology:
The world exists independently of humans: the physical and social world exists independently of any individual's knowledge of how it works.
Measurement and modeling: the researcher discovers this world by making observations and measurements and producing models of how it works.
Objectivity: the researcher is a neutral, objective and impartial observer.
Hypothesis testing: research is based on the empirical testing of theories and hypotheses, leading to their confirmation or refutation.
Quantitative data analysis: researchers often have a strong preference for mathematical modeling, proofs and statistical analysis.
Universal laws: the researcher looks for generalizations (universal laws, patterns or irrefutable facts) that can be shown to be true regardless of the researcher and the occasion.

Furthermore, Oates (2006:287) characterizes quality positivistic research as being:
Objective: the research needs to be free of bias and individual preferences.
Reliable: the research instruments used need to be neutral, accurate and reliable; repeated use of the same instrument should yield the same results.
Internally valid: the research methods are well chosen and designed to provide the necessary insight into the research topic under observation.
Externally valid: the research should be applicable in a more general context.

Positivism should not be equated with quantitative research: although it tends to apply quantitative research methods, it is distinguished on the grounds of its ontology (Oates, 2006:287). The Oxford dictionary (2010:1198) distinguishes quantitative research as being characterized by assigning values, measures or numbers to variables representing the entity under observation, whereas qualitative research describes entities in terms of adjectives. Both Popper (1992) and Comte (1975) mention that empirical methods provide objectivity to researchers. Qualitative methods are not excluded from positivistic research, though; they can be included if they are applied in an objective, neutral and repeatable fashion.

Furthermore, Oates (2006) distinguishes between four data generation methods: observation, interviews, questionnaires and documents. These methods fall under the ontology of positivism when they meet the characteristics listed above. In most cases interviews are not objective and repeatable, but this research method too can be applied in the framework of positivistic research; for example, psychological ink blot tests/interviews can be conducted in an objective and repeatable fashion, with responses modeled and evaluated empirically. Documents can fall inside or outside the positivistic ontology as well: they can directly provide quantified facts and measures, or they can be analyzed objectively. For example, Google's search engine uses heuristic measures to objectively measure the relevance of internet pages. Observation and questionnaires can also fall within or outside the scope of the positivistic ontology, depending on how the research approaches are designed.
Questionnaires with open-ended questions fall under the interpretive approach, while questionnaires that require respondents to rate options or provide short true-false or yes-no responses are positivistic. Whether observation falls within the scope of positivistic research again depends on whether the research instrument is objective and repeatable.

4. Conclusion

The essence of the positivistic approach is systematic skepticism. The proper approach is to attempt to disprove that which researchers believe is actually true. Empirical testing can never prove a hypothesis beyond doubt. Science is a method by which theories are formulated and tested repeatedly and objectively against appropriate observations. It is therefore the continuous process of deciding how to observe, code and analyze our observations, and in the light of these observations we decide to temporarily accept or reject the postulated hypothesis.

Primary Sources:
OATES, B. 2006. Researching Information Systems and Computing. London: Sage. 341 p.
ROMM, N.R.A. 1991. The Methodologies of Positivism and Marxism: A Sociological Debate. Hong Kong: Macmillan. 208 p.
LEE, A. 2004. Thinking about Social Theory and Philosophy for Information Technology. 26 p. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.137.3685&rep=rep1&type=pdf Date of access: 20 Feb 2010.
OXFORD. 2010. Oxford Dictionary: International Student's Edition. 8th ed. Oxford University Press. 1888 p.

Significant Secondary Sources:
COMTE, A. 1907. Auguste Comte and Positivism, edited by John Stuart Mill. 5th ed. London: Paul, Trench, Trubner.
POPPER, K.R. 1992. The Logic of Scientific Discovery. London: Routledge. 479 p.
HUME, D. 1748. An Enquiry Concerning Human Understanding. London.

Friday, January 17, 2020

Is Decoherence a Solution for the Measurement Problem?

Abstract—Decoherence is considered one of the important topics in the quantum computing research area. Some researchers have stated that decoherence solves the measurement problem, while many others have stated the opposite. In this paper we examine whether decoherence is a solution or not through an exhaustive survey of the different ideas, methodologies and experiments.

Index Terms—Quantum computing, decoherence, measurement problem

INTRODUCTION

Decoherence has been considered an important research area since the 1980s. Quantum decoherence is the loss of coherence, or of the ordering of the phase angles, between the components of a system in a quantum superposition; its consequence is classical, probabilistically additive behavior (Zurek 1991). Wave function collapse, the reduction of the physical possibilities into a single possibility as seen by an observer, can appear in quantum decoherence; decoherence also justifies the framework in which classical physics can be used for prediction as an acceptable approximation (Namiki and Pascazio 1991). Decoherence is a mechanism that emerges out of a quantum starting point and determines the location of the quantum-classical boundary. It appears when a quantum system interacts with its environment in a thermodynamically irreversible way, which prevents the different factors in the quantum superposition of the system's and environment's wave function from interfering with each other (Zurek 1991).

Decoherence can be viewed in different ways. One view is of information flowing from the system to the environment and being lost; this is known as the heat bath view, since each system loses some of its energetic state to its surrounding environment (Kumar, Kiranagi et al. 2012). Another view of decoherence concerns isolation: the combined dynamics of system and environment are non-unitary, so the dynamics of the system alone are irreversible; moreover, the combination of system and environment generates entanglement between them, which leads to sharing quantum information without transferring it to the surroundings (Lidar and Whaley 2003).

The measurement problem is the problem of describing how wave function collapse occurs in quantum mechanics. The inability to observe the process directly leads to different interpretations of quantum mechanics, and it raises many questions that each interpretation must answer. Some researchers offer proofs that decoherence solves the measurement problem, while others prove the opposite; in this paper we compare these two points of view (Kumar, Kiranagi et al. 2012).

PROBLEM IDENTIFICATION

Decoherence is a real challenge that prevents implementing quantum computers, because such machines rely on the undisturbed evolution of quantum coherences (Chen, Ang et al. 2003; Flitney and Abbott 2004). Decoherence provides an explanation for the appearance of wave function collapse, but it does not generate actual wave function collapse: by the nature of quantum systems, the system leaks into the environment, the components of the wave function are decomposed from the coherent system, and phases are applied to the environment (Flitney and Abbott 2004). P. W. Anderson claimed that decoherence has solved the quantum measurement problem, while S. L. Adler argued the opposite (Adler 2002).
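Before surveying the two camps, the mechanism itself can be put in the simplest possible terms. The sketch below is our own illustration of the standard dephasing picture, not taken from any of the cited papers, and the decay constant is arbitrary: the environment damps the off-diagonal coherence terms of a qubit's density matrix while leaving the outcome probabilities untouched.

    import numpy as np

    # A qubit prepared in the superposition (|0> + |1>)/sqrt(2).
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = np.outer(plus, plus).astype(complex)  # pure-state density matrix

    def dephase(rho, t, tau=1.0):
        # Environment-induced dephasing: coherences decay as exp(-t/tau),
        # while the diagonal outcome probabilities are untouched.
        out = rho.copy()
        out[0, 1] *= np.exp(-t / tau)
        out[1, 0] *= np.exp(-t / tau)
        return out

    for t in (0.0, 1.0, 10.0):
        print(f"t = {t}:\n{np.round(dephase(rho, t), 4)}")

As t grows, the state approaches diag(0.5, 0.5), a classical mixture: the right statistics survive, but nothing in the formalism selects a single outcome. Whether that counts as solving the measurement problem is precisely what the two camps below dispute.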
In this paper, we conduct a comprehensive survey of the different views and experiments to come to a conclusion about the relationship between decoherence and the measurement problem.

Decoherence is the Solution

Zurek (1991), Tegmark and Wheeler (2001), and Anderson (2001) stated that decoherence has solved the quantum measurement problem by eliminating the necessity for von Neumann's wave function collapse postulate. Osvaldo Pessoa Jr. wrote an article titled "Can the Decoherence Approach Help to Solve the Measurement Problem?" in which he concluded that decoherence could help to solve the measurement problem in open systems. From that point, he wanted to count on open systems to solve the measurement problem of individual systems. He also mentioned that decoherence helps to get an approximate solution for the statistical version of the measurement problem.

Wallace (2011) mentioned that decoherence explains why the measurement problem is a philosophical rather than a practical problem, and stated that decoherence could solve the measurement problem. He claimed that the quantum state continues to describe the physical state of the world. So decoherence finds its natural role in the measurement problem as the process which explains why quantum mechanics can be fundamentally deterministic and non-classical, but emergently classical. It does not dull the aspect of Everett's proposal which states that all branches are equally part of the underlying quantum reality.

Decoherence is Not the Solution

The decoherence program set out to explain the transition from the quantum to the classical by evaluating the interactions of a system with a measuring apparatus or with the environment; it is not realistic to think of a macroscopic object or system of particles as an isolated system floating in empty space. Dynamical collapse models take a different route: they modify the evolution of the wave function so as to achieve reduction of the state vector in a well-defined way. One way is to say that the wave function, or at least an element of it, consistently gets "hit" in such a way as to cause localization in the position basis. Another way is to add a non-unitary term to the Schrödinger equation.

There are also suggestions that we can invoke the ability of the mind to obtain the collapse of the wave function. Thomas Breuer in 1996 tried to investigate these suggestions and apply them to recent results in quantum mechanics regarding restrictions on measurement from inside. Breuer counts on these restrictions to arrive at a phenomenon of subjective decoherence, and he splits his article into parts. The first part is "measurement from inside", in which he presents why it is impossible for an observer to distinguish between all states of a system in which the observer is contained; this counts as a restriction on measurability from inside. He concludes that a bigger system O needs more parameters to fix its state. However, this leads to situations in which each physically possible state of the big system O can be determined by the state of a subsystem A together with some constraint. The second part is "EPR correlations", in which he focuses on situations where stronger results hold when particular features of quantum mechanics are taken into account: for example, if we have a system A and some environment R, then the union of the two systems, A ∪ R, equals the big system O.
If the systems A and R have Hilbert spaces HA and HR as state spaces, then EPR correlations can be obtained in the vector states of the tensor product HA ⊗ HR. He therefore concludes that A cannot distinguish between states of O which differ only in the EPR correlations between A and R; observers can only measure the EPR correlations between A and R from within A ∪ R (Breuer 1996).

David Wallace wrote his article to achieve two goals. The first is to present an account of how quantum measurements are dealt with in modern physics (in other words, that quantum measurements do not involve a collapse of the wave function) and to present the measurement problem from that account's perspective. The second concentrates on clarifying the role decoherence plays in modern measurement theory and what effect it has on the different strategies that have been proposed to solve the measurement problem. Wallace concluded that it seems impossible to have a complete understanding of the microscopic predictions of quantum mechanics without interpreting the state in a probabilistic way, and that is because of interference: quantum states cannot be thought of as probability distributions over physical states of affairs. Therefore it is permissible to try to resolve the incoherence in two ways. The first is by philosophical methods, thinking hard about how to fully understand quantum states so as to come out with a non-incoherent reading; the second is by modifying the physics, replacing quantum mechanics with some new theory which does not prima facie lead to the conceptual incoherence. Finally, Wallace states that the natural role of decoherence can be found in the measurement problem as the process which explains why quantum mechanics, suitably interpreted, can be fundamentally non-classical and deterministic, but emergently classical (Wallace 2011).

Dan Stahlke, based on an application he made, states that the most important point of decoherence theory is that it provides understanding of the process of wave collapse. Some systems need to be built in a way that keeps them in coherent superposition; however, the tendency of a system to leave superposition can be immediately calculated. He also states that decoherence does not give the ultimate solution to the measurement problem, but it brings some light to the matter (Stahlke 1999).

Maximilian Schlosshauer gives a distinctive discussion of the role of decoherence in the foundations of quantum mechanics, focusing on the effectiveness of decoherence with regard to the measurement problem. He concludes that within a standard interpretation of quantum mechanics, decoherence cannot solve the problem of definite outcomes in quantum measurement. He therefore mentions that the environmental superselection of quasiclassical pointer states, along with the local suppression of interference terms, can be put to great use in physically motivating the assumptions and rules of alternative interpretive approaches that change the strict orthodox eigenvalue-eigenstate link or modify the unitary dynamics to account for the awareness of definite outcomes (Schlosshauer 2005).

Elise M. Crull mentions in her article that it has been claimed that decoherence has solved the measurement problem, while on the other hand some researchers state that it does not. Crull's target, however, is the question "Which measurement problem?"
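Several of the arguments above, Breuer's EPR point in particular, turn on the reduced state an observer obtains when everything outside their reach is traced out. A minimal sketch of that computation (our own illustration, not drawn from the cited papers):

    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2) of a system A and an environment R.
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho_AR = np.outer(phi, phi)  # the global state of A and R together is pure

    # Reduced state of A: trace out R.
    # After reshape the indices are (a, r, a', r'); sum over r = r'.
    rho_A = np.trace(rho_AR.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    print(rho_A)  # 0.5 * identity: maximally mixed, no coherence left

From inside A no interference is visible, yet no collapse has happened anywhere: the missing information sits entirely in the A-R correlations, which, as Breuer argues, an observer contained in the system cannot measure.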
She thus argues three questions, following Max Schlosshauer, who has a neat catalog of the different problems called "the measurement problem" (Schlosshauer 2008; Crull 2011).

Harvey Brown stated that there have been many attempts to prove the insolubility of the measurement problem in non-relativistic quantum mechanics, and that these attempts can be used for quantum mechanics generally. These proofs tend to establish that if the mechanical interaction between an object system A and a measuring instrument B is described by a suitably defined unitary operator on the Hilbert tensor product space, then the final state of A + B together cannot be described by a density operator of a specific kind in that space. This leads to a resolution in terms of weighted projections, which can usefully be interpreted as a mixture of pure A + B states that are eigenstates corresponding to the "pointer position" observable connected with the instrument (Brown 1986).

Ford, Lewis and O'Connell build on the book "Decoherence and the Appearance of a Classical World in Quantum Theory" (Giulini, Joos et al. 1996), which states that "irreversible coupling to the environment seems to have become widely accepted (and even quite popular!) during the last decade, not least through the various contributions by Wojciech Zurek and his collaborators." They conclude that a general and simple formulation of quantum measurement gives a good method for discussing quantum stochastic systems (Ford and Lewis 1986). The authors also state that decoherence appears at high temperature with or without dissipation, and that the decoherence time in both cases is the same; at zero temperature, by contrast, decoherence occurs only in the presence of dissipation (Ford, Lewis et al. 2001).

In the 1980s and 1990s, techniques were established to cool single ions captured in a trap and to control their state using laser light; a single ion can be observed using photons with minimal interaction with the environment. Photons can be observed without being destroyed during interaction with atoms in a suitably designed experiment. This has led to pioneering studies testing the foundations of quantum mechanics and the transition between the microscopic and macroscopic worlds. The most important stage in controlling the quantum state of an ion is cooling it to the lowest energy level of the trap using a common technique called sideband cooling (F. Diedrich, Bergqvist et al. 1989); this technique consists of exciting the ion, increasing its internal energy while decreasing its vibrational energy (SCIENCES 2012).

Bas Hensen starts his discussion by defining the measurement problem: it begins naturally from quantum theory's success in describing the realm of microscopic particles without permitting them to have definite values for quantities like momentum and position. He then splits the problem into several parts. The first two parts are "the problem of outcomes: why does one perceive a single outcome among the many possible ones in the equation?" and "the problem of collapse: what kind of process causes the state of the system to 'collapse' to the outcome one perceived (in the sense that a repeated measurement yields the same answer)?" In these two parts he finds that in quantum theory the world must be divided into a wavelike quantum system and a remainder that stays classical; also, from a practical point of view, the division is made one way or another in each particular application.
The third part is "the problem of interference: why do we not observe quantum interference effects on macroscopic scales?" Here the author states that the best way to illustrate the problem is the double-slit experiment. The physical setup suggests that the probability distribution with both slits open should be the combination of the probability distributions obtained with either one of the slits open. For electrons as particles, the observed distribution differs from this sum, whereas for a similar setup with macroscopic particles it does not (Hensen 2010); a toy computation at the end of this post illustrates the point.

Dieks reviewed several proposals that solve the quantum mechanical measurement problem by taking into account that in measurement interactions there are many unobserved degrees of freedom. He found that such "solutions" are unsatisfactory as they stand, and must be supplemented by a new empirical interpretation of the formal state description of quantum mechanics (Dieks 1989).

Zurek mentioned in "Decoherence, Einselection, and the Quantum Origins of the Classical" that decoherence is caused by the interaction in which the environment in effect monitors certain observables of the system, destroying coherence between the pointer states corresponding to their eigenvalues. He then mentioned that when the measured quantum system is microscopic and isolated, this restriction on the predictive utility of its correlations with the macroscopic apparatus results in the effective "collapse of the wave packet", which implicitly states that decoherence did not solve the measurement problem (Zurek 2003).

Elby scrutinized the claim that the measurement problem is solved by decoherence by examining how modal and relative-state interpretations can use decoherence. He mentioned that although decoherence cannot rescue these interpretations from general metaphysical difficulties, it may help them pick out a preferred basis (Elby 1994).

Janssen subjected the alleged relevance of decoherence to a solution of the "measurement problem" to a detailed philosophical analysis. He reconstructed a non-standard decoherence argument that aimed to uncover some hidden assumptions underlying the approach, and concluded that decoherence cannot address the "preferred-basis problem" without adding new interpretational axioms to the standard formalism (Janssen 2008).

Busch et al. (1996) explained decoherence using the many-worlds interpretation and stated that decoherence cannot solve the measurement problem. Leggett (2005) concentrated on the paradox of Schrödinger's cat, the quantum measurement paradox, to argue that decoherence is not a practical solution. Other researchers and scientists, including Gambini and Pullin (2007), Zurek (2002), Joos and Zeh (1985), Bell (1990), Albert (1992), Bub (1997), Barrett (1999), Joos (1999), and Adler (2002), stated that decoherence did not solve the measurement problem.

Conclusion

There is a serious and unresolved quantum measurement problem. Some, like Ghirardi, Rimini, and Weber (1986), try to solve it by modifying quantum mechanics. If successful, such attempts would result in a theory, distinct from but closely related to quantum mechanics, that is no longer subject to a measurement problem. That problem may be unsolvable (Healey 1998).
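As promised in the discussion of Hensen's interference problem, here is a toy two-slit computation (our own illustration, in arbitrary units) showing why the both-slits distribution is not the sum of the single-slit distributions:

    import numpy as np

    # Idealized far-field two-slit model: each slit contributes a unit
    # amplitude with a path-dependent phase (toy numbers, arbitrary units).
    theta = np.linspace(-0.5, 0.5, 5)      # screen angles
    phase = 2 * np.pi * np.sin(theta)      # path difference in wavelengths
    psi1 = np.ones_like(theta, dtype=complex)
    psi2 = np.exp(1j * phase)

    quantum = np.abs(psi1 + psi2) ** 2                 # oscillates between 0 and 4
    classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # flat 2.0 everywhere

    print(np.round(quantum, 3))
    print(np.round(classical, 3))

Decohering the which-slit information kills the cross term, turning |psi1 + psi2|^2 into |psi1|^2 + |psi2|^2; recovering exactly this additivity is what decoherence explains, while the question of why a single outcome is observed remains open.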

Thursday, January 9, 2020

Final Exam Study Guide

History of Art 3521: Introduction to Italian Renaissance Art, Final Study Guide

I decided to prepare this sheet as a short description of some of the important themes that we've taken up in lecture in the second half of the quarter. In preparation for our final, I would recommend reviewing your notes and textbook and putting together answers to each of these. Indeed, if you have good answers in your head (making use of examples shown in class) for each question, you should do well.

1. How was Michelangelo different from Leonardo, especially in respect to naturalism, and why did Leonardo look down on Michelangelo? How did Leonardo think he could improve the naturalism of painting (think in part about sfumato here)? Why did …

6. How did both Bronzino and Cellini make art about art while still serving Duke Cosimo's political ambitions in their work?
→ Cellini's Perseus (Perseus brought into the world by slaying Medusa and releasing Pegasus): this is art about art by referencing the importance of the arts and bringing them back to Florence.
→ Bronzino's Eleonora of Toledo: a display of artistic virtuosity while still showing that Cosimo has heirs, so his family's dominance and rule will live on.

7. What was the Counter-Reformation and what did it mean for art? Did Michelangelo endeavor to participate in the reform of the church through his art? What two aspects of art came into conflict during the Counter-Reformation (for example, in the work of Veronese)?

8. What is special about 16th-century Venetian art? Did Venetian artists keep on doing what they had done, or did they take their art in new directions? What is the difference between disegno and colorito? Which cities and artists were associated with these terms and why? How did Titian change religious art, and could these changes be connected to the Counter-Reformation? How does the art of Tintoretto differ from that of Veronese?
→ Venetian artists focused on an eyewitness style.
→ Tintoretto (Annunciation; St. Mark Rescues the Slave): cramped view, puts you into the action; Counter-Reformation supporters would love Tintoretto.
→ Veronese (Feast in the House of Levi).

9. What are the two

Wednesday, January 1, 2020

Shirin Ebadi: The Fight for Human Rights in the Middle East

The fight for human rights has been a lengthy struggle around the world. Many people in the Islamic state of Iran, particularly women and children, have suffered through a lifelong battle of the government limiting their natural rights, such as freedom and equality, due to religious traditions colliding with the state. Shirin Ebadi, an Iranian lawyer and activist who was awarded the Nobel Peace Prize in 2003, is a courageous, kind-hearted woman who was determined to help the people of her country gain their freedoms. Although Shirin Ebadi is widely known for her fight for the justice of women and children, a few critics have considered Ebadi's efforts as small or limited in shaping reform; however, Ebadi fought her hardest for the …

Ebadi says in a 2004 interview: "You see violations of women's rights in Iran. A Muslim man can have up to four wives. He can divorce his wife without offering any reason, while it is quite difficult for a woman to get a divorce. The testimony of two women is equal to that of one man. Any woman who wishes to travel needs the written permission of her husband. And the number of unemployed women is four times that of men … the dominant culture is going to insist on an interpretation of religion that happens to favor" (Shirin Ebadi, interview with Amitabh Pal, The Progressive). Ebadi was furious with the state that women were put in, because it was clear that these men in positions of high power used their own interpretations to justify what they wanted. As a female and human rights activist in Iran, Shirin Ebadi knew she had to help women and other groups of oppressed people, including children, students and journalists.

Although Ebadi lost her job as a judge, she did not give up, and eventually obtained her lawyer's license. Ebadi worked as a pro bono lawyer for many families, women and dissidents in Iran (Encyclopedia of World Biography). As Ebadi says in her famous quote, "I'd rather be a free Iranian than an enslaved attorney" (Sector, A Dissenting Voice). Ebadi worked as a lawyer to help the people of her country become free, as well as herself. Iran did not have a freedom of speech law, therefore Ebadi defended journalists