Thursday, October 31, 2019

Impact of social networking - Research Paper Example

As a result, only a select few military officials had access to it. Due to the security level required in the military, it was initially kept a secret. At this time, computers were not accessible to many people, which hindered its fast expansion. In addition, very few people understood the machine language. To many people, sitting in front of a machine seemed isolating and was not welcomed.

Preliminary stages

Social networking has its roots in the use of computer software referred to as the Bulletin Board System, which gave people a chance to log in to the system. This program used a terminal program. By using the code, people were able to share files and information. The communication was done via telephone lines and managed by a few technologically conversant personnel who had a passion for it. At that time, communication through telephone lines was free only for people living within the same town. As time passed, people started gathering while accessing the services, which slowly transformed people's perception of the system as antisocial. From the 1970s to the early 1990s, the system access points were fixed at specific locations. Given the state of technology at the time, users were allowed to share only limited types of information. The speed of the system was not fast, but it was good enough to meet the needs of the time.

Another program that was crucial to the early development of the social communication network was CompuServe. This program was mainly used for business transactions. In the 1980s, it expanded to be used by persons other than businesspeople. The service allowed people to share files and information irrespective of their location. In addition, people were able to access discussions held by other persons and participate. This was a breakthrough in technology, as very few people had any idea of it becoming a reality. This brought forth the advent of email. America Online (AOL) also played a significant role in the continuous improvement of social networking. It provided a platform where one could search for other people. This feature was very outstanding and attracted more people to social networking.

Internet Boom

The internet boomed in the mid-1990s. This enabled people to create their profiles as well as search online for acquaintances such as schoolmates through social networks like Classmates.com. Another social network operating at the boom of the internet was SixDegrees.com. This network allowed members to create profiles as well as share different materials. To increase membership numbers, the members were asked to bring in more people. As time passed, the security of the network was compromised, reducing members' loyalty, which eventually led to its collapse at the start of the twenty-first century. Numerous other sites such as Asianavenue.com, BlackPlanet.com, and MiGente.com were developed in the 1990s and have thrived to date. In the twenty-first century, technological advancement helped networking sites such as LinkedIn, MySpace, Friendster, and Facebook emerge and develop new strategies that have assisted them in attracting very many members. These networks vary in purpose, with some integrating aspects such as dating and ordinary chatting into their framework. Their graphical user interfaces and security levels have constantly been improving to offer safety for their customers.

Tuesday, October 29, 2019

What role has air-power played in warfare after 1945 Essay

However, in the event that the enemy proves stronger than anticipated, it uses airpower to carry out air offensives against the enemy. Airpower in this case refers to a military strategy that involves carrying out aerial bombardments, in most cases using fighter jets. Some of the wars in which America has employed airpower include World War II, the Korean War, the Vietnam War, the Serbian War and the Afghan War, to name but a few. However, Corum (2007) claims that the use of airpower has been marred with controversy since 1945. In this regard, there are those who believe that airpower has helped the U.S. emerge victorious, while there are critics who believe that airpower has been a failure in U.S. warfare. The aim of this paper is to analyze the successes and failures of airpower based on case studies.

Opinion is divided down the middle regarding the effectiveness of airpower in warfare. On one hand is a section of society that believes airpower has played a huge role in enhancing the success rates of U.S. forces in the wars they have fought in the past. On the other hand are critics who feel that airpower has not achieved any meaningful success in some of the wars that the U.S. has fought. In the wars the U.S. has fought, the U.S. military has used airpower as a tactic for maintaining superiority by suppressing the enemy's ability to fight. This was witnessed during World War II, in which the U.S. and its allies used airpower to gain superiority over their opponents (the German forces especially) through aerial bombardments. As a result, it became very hard for the German forces to mount strong resistance against the U.S. and its allies (Keegan 1990, p. 31). This enabled the allied forces to win the war more easily than earlier anticipated. The events of WWII are one of the success stories of airpower in warfare.

Apart from the success of airpower in WWII, there are other wars in which airpower has been successful. For instance, the use of airpower played a huge role in winning the Gulf War during Operation Desert Storm. Desert Storm was a war against Iraq for its invasion of Kuwait. The war began on August 2, 1990 and ended on February 28, 1991 with the U.S. and its allies emerging victorious. The success in winning this war has been linked mainly to the use of airpower. Momyer (2003, p. 5) reveals that the U.S. and its allies used airpower during the first days of intervention in the war to destroy Iraq's air defense system, thereby allowing the allied forces to gain control of Iraq's airspace before introducing the ground forces. The U.S. forces and their allies carried out massive bombardments from the air to gain control over Iraq's airspace. In so doing, the allied forces also managed to capture all the Iraqi government's infrastructure before deploying the military to take vantage positions. Momyer (2003) notes that the main aim of the air attacks was to pave the way for the U.N.-backed ground forces. In fact, reports show that after taking control of the air through aerial bombardments, the ground forces found it easy to fight the Iraqi forces, as their major efforts were now devoted mainly to attacks against the communication lines that the Iraqi forces used. This was followed by assaults on the defense lines of the Iraqi forces. In the end, the U.S.-led forces emerged victorious in the war.
Supporters of airpower have argued that without its use, the war would have taken the U.S. and allied forces more time to win than it actually took. This is because the Iraqi forces would have taken advantage of the air to attack the allied forces. Even though airpower proved effective in winning the Gulf War...

Sunday, October 27, 2019

Calculating Blood Components of Cholesterol Research Design

Good health is absolutely important to a human being, and to remain healthy people need to check their blood parameters. Cholesterol is a very important constituent among the over 100 constituents in human blood. It is important to develop an instrument with which blood parameters can be calculated that is non-invasive, user friendly, portable and reliable. The thesis explains the design and construction of an instrumentation setup to calculate the blood constituents. It comprises the study of samples made in the laboratory according to the various constituents present in whole blood, in the RF range of 10 MHz to 4000 MHz. The data is later fed to a regression analysis matrix, which can be programmed into VLSI chips such as an Altera FPGA in order to calculate the constituent concentrations. This thesis is proposed to contain six chapters, as given below.

Chapter I (Introduction)

This chapter includes the introduction to the thesis: health and diseases, an overview of cholesterol, types of cholesterol, the role of cholesterol in humans, the various diseases due to high cholesterol, the worldwide scenario, and the testing of cholesterol with the blood test ranges of different constituents. Health is a condition of total mental, physical and social wellness, and not merely the absence of infirmities or diseases.[1][2][3] Good health is often marred by diseases and illnesses which are sometimes incurable.[4][5] The most dreaded diseases include Cardiovascular Diseases (CVD) and strokes due to high cholesterol. About 7,000,000 persons die of heart disorders annually in the world, of which 2,400,000 are Indians. Strokes are the next principal source of death at 6,200,000, of which 1,600,000 are Indians. Cholesterol, a waxy, fat-like material, is important for normal body functioning. It is used in the making of hormones and for cellular functions. The Total Cholesterol (TC) in the blood consists of High Density Lipoprotein (HDL), Low Density Lipoprotein (LDL) and triglycerides. Cholesterol obstructs the arteries when it amasses in the body, resulting in the limitation of blood flow. It can be tested invasively by visiting a doctor who extracts blood using a needle and syringe. Since this procedure is painful, it develops a fear among patients and can also be infection prone. Non-invasive methods are easier to use in one's home and an instant report can be attained; therefore non-invasive techniques are gaining a lot of importance, as the electronics industry now offers many smart sensors. Blood has many constituents, and its composition depends on aspects such as age, diet, state of health and other particulars.[6][7][8] The chief blood components are cholesterol, NaCl, glucose, urea, lactate and alanine. There are quite a number of ways to compute blood cholesterol in humans, both invasive and non-invasive. They can be categorised into chemical tests and physical tests. The significant ones are based on Photo Acoustic Spectroscopy (PAS), Stimulated Emission Spectroscopy, Thermal Emission Spectroscopy (TES), optical absorption spectroscopy, the liquid chromatography method, the chemical method, ultracentrifugation, electrophoresis and impedance measurement. The important techniques, together with their working principles and their merits and demerits, are discussed below.

Near Infrared Spectroscopy (NIRS): The principle of NIRS is that constituents absorb infrared light at their characteristic wavelengths.
The absorption level is proportional to the constituents present; hence the contents can be predicted. It uses a physical rather than a chemical technique. It is rather sensitive to calibration errors, and probes for non-invasive measurement are not readily available. However, new spectroscopic methods are now available with IR optical fiber for guiding the light to the tissue.

Chemical Method: To determine plasma cholesterol, the Abell-Kendall chemical procedure is used, which comprises the Liebermann-Burchardt reaction after hydrolysis and extraction of cholesterol. Plasma cholesterol and triglyceride determinations are usually examined by computerized techniques at clinical research facilities. Reference values for plasma TC are achieved using autoanalyzer frameworks to which either the Liebermann-Burchardt test or the ferric chloride–sulfuric acid technique can be applied. A fluorometric investigation is utilized to determine the triglyceride reference values. Basic plasma estimations of triglycerides and TC can be relied on for the analysis of the diverse lipoprotein disorders. It is an invasive method, and there is wastage of chemicals in testing.

Chromatography: Chromatography techniques can be sorted into two categories, i.e. Gas Chromatography (GC) and Liquid Chromatography (LC). GC is a typical kind of chromatography utilized scientifically for separating and analyzing constituents that can be vaporized without decomposition. GC is used to test the purity of a specific substance, or to segregate the distinctive parts of a mixture. In High Performance Liquid Chromatography (HPLC), the mobile phase comprises either polar or non-polar solvents. The specimen is forced by a fluid at high pressure through a column filled with a stationary phase, for the most part made of irregularly or spherically formed particles chosen or derivatized to achieve specific sorts of separations. Chromatography has low uncertainty, high precision, high accuracy and good linearity, but it is expensive and not portable.

Impedance Measurement: An impedance plethysmograph framework is made up of a V-I converter and a sine generator. Current is passed into a body segment with the assistance of two current electrodes. The voltage signal produced along the current path is sensed with the assistance of a separate pair of voltage electrodes.[9][10] The impedance is correlated to the amplitude of the signal. Impedance values measured at a series of frequencies, or at a few distinct frequencies, may help clarify differences in body composition more accurately than impedance estimation at a single frequency.[11][12]
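The measurement principle above reduces to Ohm's law applied per excitation frequency: the complex impedance is the sensed voltage phasor divided by the injected current phasor. The following is a minimal illustrative sketch, not taken from the thesis, of estimating magnitude and phase from sampled tetrapolar signals; the signal names, sampling rate and demo values are assumptions for demonstration.

```python
import numpy as np

def impedance_at_frequency(v_sense, i_drive, fs, f0):
    """Estimate complex impedance Z = V/I at excitation frequency f0.

    v_sense : sampled voltage from the sense-electrode pair (V)
    i_drive : sampled current injected via the drive electrodes (A)
    fs      : sampling rate (Hz); f0 : excitation frequency (Hz)
    """
    t = np.arange(len(v_sense)) / fs
    ref = np.exp(-2j * np.pi * f0 * t)   # complex demodulation reference
    v_ph = 2 * np.mean(v_sense * ref)    # voltage phasor at f0
    i_ph = 2 * np.mean(i_drive * ref)    # current phasor at f0
    return v_ph / i_ph                   # complex impedance (ohms)

# Demo with synthetic signals: a 500-ohm load with -10 degree phase at 50 kHz.
fs, f0, n = 1_000_000, 50_000, 20_000   # integer number of cycles captured
t = np.arange(n) / fs
i = 1e-3 * np.cos(2 * np.pi * f0 * t)                   # 1 mA drive current
v = 0.5 * np.cos(2 * np.pi * f0 * t - np.deg2rad(10))   # sensed voltage
z = impedance_at_frequency(v, i, fs, f0)
print(f"|Z| = {abs(z):.1f} ohm, phase = {np.degrees(np.angle(z)):.1f} deg")
```

Repeating the same computation at several excitation frequencies yields the multi-frequency impedance profile the cited studies rely on.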
Chapter II (Objectives and Literature Review)

Mas S. Mohktar et al. recommended a method to estimate the cholesterol level in blood non-invasively, utilizing neural network and bioimpedance techniques. The Bioelectrical Impedance Analysis (BIA) estimation was executed utilizing the Biodynamic Model 450 bioimpedance analyzer, with a current signal applied.[13]

E. Aristovich et al. recommended a non-invasive impedance technique for the estimation of blood cholesterol by 3D finite-field modelling. This process exploits the variation of impedance over a conducting medium as the concentration of particles is altered. To calculate impedance, the current created by the electric-field distribution between two electrodes throughout the conducting medium is computed. It is obtained by computational modelling of the 3D electric fields for known voltages connected between the electrodes, utilizing the Finite Element Method (FEM). The intricacy of the FE models is attributable to the particle distribution and the material and geometrical parameters, whose size and shape can be several orders of magnitude smaller than the general problem domain under investigation. The paper overcomes this setback by implementing a useful particle-aggregation technique in FE modelling, exclusively influencing the accurateness of the field calculation.[14]

J. Nyström et al. proposed to study a set of 34 men with various degrees of diabetes, using Multi Frequency (MF) BIA and measurement of skin changes by NIR. A fiber-optic probe measuring skin reflectance spectra was used on 4 sites. A joint multivariate analysis was carried out on the spectral range of 400-2500 nm, using a lead sulphide detector (1100-2500 nm) and a silicon detector (400-1100 nm). The NIR method can recognize skin conditions associated with diabetes. The two procedures combined can offer a higher possibility for discrimination and classification of skin condition, with correct classification rising from 63% to 85%.[15]

K. Cheng et al. proposed the design of a current source which includes a voltage-controlled current source (VCCS), a microcontroller (uC) and a waveform generator (WG). The uC is used to program the WG to produce a sine voltage signal from 100 Hz to 100 kHz. The VCCS, based on a Howland current pump, converts the signal to a current. The total harmonic distortions of the output current are 0.25% at 1 kHz and 0.40% at 100 kHz for a load resistance of 1 kΩ. The output current's phase difference varies from 0° to 19.6° over the above-mentioned frequency range. The proposed multi-frequency BI measuring system provides an inexpensive solution for BI applications. During system testing, the output current signal remained constant.[16]

Hiroshi Shiigi, Hiroaki Matsumoto et al. proposed a simple non-invasive technique to measure cholesterol by using a solvent to extract the skin component. A self-assembled monolayer (SAM) sensor and HPLC were utilized to analyze the extracted solution. The SAM electrode has excellent responsiveness and sensitivity, attributed to its strong attraction towards hydrophobic cholesterol. Higher skin cholesterol was shown by persons with high blood cholesterol. The coefficient of correlation between the non-invasive and invasive methods was 0.9408; hence this method could be used practically.[17]

M.V. Malahov et al. recommended identifying hematological and biochemical blood parameters that can be precisely estimated by means of the BI technique. Samples of blood from 46 people were poured into four test tubes: blood (2.5 ml) was put in test tubes with ethylenediaminetetraacetic acid for hematological investigation, blood (3 ml) was collected in tubes containing heparin for BIA, blood (2 ml) was collected in tubes containing sodium citrate for fibrinogen estimation, and finally blood (4 ml) was collected into empty tubes for biochemical serum examination. The BIA analyzer ABC-01 Medass was utilized to perform BI spectroscopy of blood (1.5 ml) from 5–500 kHz. Results show that the concentrations of the principal extracellular plasma ions, Na+ and Cl-, are not related to the extracellular fluid resistance of the blood.[18]

Objectives

The objective of the research is to design and develop an easy method to measure the level of cholesterol.
The work envisages the development of an instrument using advanced microelectronic circuits, which is programmable and has an interpretation mechanism to enable a common man to know his cholesterol level. It is proposed to use a multivariate system approach to enhance the cholesterol signature in the DSP domain.

Chapter III (Methodology and Instrumentation)

This chapter gives elaborate details on the preparation of samples, the design of the cell, the experimental setup and the instruments used. Human blood consists of many constituents; the major ones are cholesterol (225 mg/dL), glucose (70-110 mg/dL), urea (10-20 mg/dL), lactate (10-15 mg/dL) and alanine (10-20 mg/dL). Experiments are conducted with the above constituents. Samples are prepared using 14 mL distilled water, 1 mL alcohol and the above constituents in varied concentrations. The average concentration is denoted as '1', half the average is denoted as '0.5', and approximately '0.75' to '1.25' is the actual range of blood components. Experiments are also conducted with concentrations over the standard range for extreme cases, denoted as 1.5, 1.75, 2, 2.25 and 3. A rectangular cell was designed with dimensions 12.5 cm x 1 cm x 2 cm. The cell was used to measure the RF response of the various blood constituents. The cell was lined with a thin Cu foil, and a copper wire was connected to two connectors placed on the extreme ends of the cell. External radiation was reduced by placing the cell in an earthed iron box. This forms the dielectric loss cell. The cell was then connected via RF cables to the tracking generator and signal analyzer. The entire setup was secured firmly to avoid mechanical movements. Experiments were carried out using both the slow sweep and the fast sweep. The experiment was repeated after an hour and after 24 hours to verify the accuracy of the results; in comparison to the initial results, these were consistent. The tracking generator used is a Signal Hound USB-TG44A, which ranges from 10 Hz to 4400 MHz, and the signal analyzer used is a Signal Hound USB-SA44B, which ranges from 1 Hz to 4400 MHz. A separate power supply is not essential, as each is fed from the USB cable. The tracking generator and signal analyzer are approximately 8 inches long, light in weight and can be used practically anywhere.
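As a worked illustration of the sampling plan above, the short sketch below (not part of the thesis) builds the table of sample concentrations by scaling each constituent's average value by the stated multipliers. The averages are assumed to be the midpoints of the ranges quoted above, and including 0.75 and 1.25 as explicit test points is likewise an assumption.

```python
# Illustrative sketch: generate the sample-preparation grid implied above.
# Average concentrations (mg/dL); midpoints assumed for the quoted ranges.
averages = {
    "Cholesterol": 225.0,
    "Glucose": (70 + 110) / 2,
    "Urea": (10 + 20) / 2,
    "Lactate": (10 + 15) / 2,
    "Alanine": (10 + 20) / 2,
}

# Multipliers: 0.5 and 1 from the base protocol, 0.75/1.25 spanning the
# physiological range, and 1.5-3 for the extreme cases.
multipliers = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 3.0]

header = ["Multiplier"] + list(averages)
print("  ".join(f"{h:>11}" for h in header))
for m in multipliers:
    row = [f"{m:>11}"] + [f"{m * avg:>11.1f}" for avg in averages.values()]
    print("  ".join(row))
```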
Chapter IV (FPGA for Non-Invasive Cholesterol Measurement)

Software and hardware components operating together to perform a definite application constitute an embedded system. The hardware platform comprises an input device, an output display, a microcontroller (uC) or microprocessor (uP), application software and onboard memory. Designing embedded systems is becoming more complicated nowadays due to the stiff constraints on power consumption, performance, and size and area usage. Hence, the software/hardware co-design procedure is utilized to plan embedded systems, decreasing the time spent on debugging and development. uPs whose behaviour and architecture are completely described using a subset of a Hardware Description Language (HDL) are called soft-core processors. They can be synthesized for any Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC) technology; hence they give designers much flexibility. An FPGA device provides a platform for combining multiple design functions into a package or a group of packages. Incorporation of functionality results in reduced power and higher performance. Design combination can be accomplished by integrating soft or hard processor cores in an FPGA to execute processing functionality and the required control. The capability to incorporate design functionality and system-level components can reduce schedule, cost and risk.

Nios II and Altera: Altera is a top seller of FPGAs and Programmable Logic Devices (PLDs). They offer the Cyclone, Stratix and Stratix II families of FPGAs, which are extensively utilized in DSP applications and the design of embedded systems. The Nios II processor, being a Reduced Instruction Set Computer (RISC) processor, uses a Harvard memory architecture. The features of this processor include single-instruction 32x32 divide and multiply operations, instructions for computing 64-bit and 128-bit products of multiplication, a 32-bit Instruction Set Architecture (ISA) and 32 general-purpose registers.

Chapter V (Multivariate Data Analysis)

This chapter describes multivariate data analysis; Partial Least Squares Regression (PLSR); the different algorithms, i.e. Non-linear Iterative PArtial Least Squares (NIPALS) and SIMple Partial Least Squares (SIMPLS), with the advantages and disadvantages of each; and the ParLes software, a package developed for research applications and used here for calculating unknown constituents. Nowadays several factors contribute to numerous problems which are multivariate. Multivariate analysis is a tool to obtain relationships and patterns amongst several variables concurrently. It can predict how an alteration in one variable affects other variables. It is very graphical, which allows an analyst to observe the inner or unknown structure of big data sets and to visually recognize the factors which influence the outcome. PLSR is a bilinear technique in which the information in the x data is projected onto a small number of latent variables known as PLSR components. The y data are used in estimating the latent variables, to guarantee that the first components are those most relevant for predicting the y variables. The relationship between the x and y data is thereby simplified, as it is focused on the minimum possible number of components.
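The thesis performs this calibrate-then-predict step in ParLes; as an illustrative stand-in (an assumption, not the author's toolchain), the sketch below shows the same workflow with scikit-learn's NIPALS-based PLSR, using synthetic spectra in place of the measured RF responses.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for measured RF spectra: responses at the five signature
# frequencies (575, 995, 1145, 1285, 2185 MHz) for 25 calibration samples
# whose cholesterol concentration is known. 25 > 20 samples, per the text's
# observation that larger calibration sets reduce error.
n_samples, n_freqs = 25, 5
signature = rng.normal(size=n_freqs)               # assumed spectral signature
conc_cal = rng.uniform(0.5, 3.0, n_samples)        # multiples of the average
X_cal = np.outer(conc_cal, signature) + 0.01 * rng.normal(size=(n_samples, n_freqs))
y_cal = conc_cal

# Calibrate: fit the PLSR model (scikit-learn's PLSRegression uses NIPALS).
pls = PLSRegression(n_components=2)
pls.fit(X_cal, y_cal)

# Predict the "unknown" cholesterol level of a freshly measured spectrum.
x_unknown = np.outer([1.25], signature) + 0.01 * rng.normal(size=(1, n_freqs))
print(f"predicted concentration: {pls.predict(x_unknown)[0, 0]:.2f} (actual 1.25)")
```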
Chapter VI (Results and Conclusions)

This chapter includes the results, the conclusions and the future direction of research. The multi-frequency BI spectrum was modelled through curve fitting and multivariate statistical applications to derive parameters for predicting blood constituents like cholesterol, glucose, salt, urea, alanine and lactate. The various components were mixed in different ways; some mixtures were used in the calibration file, and the rest were treated as unknowns. The spectra of cholesterol at concentrations of 0.5, 1, 2 and 3 over the RF range of 10 MHz to 4 GHz are shown in Fig. 1. Cholesterol shows a good variation only in certain regions at specific frequencies (575 MHz, 995 MHz, 1145 MHz, 1285 MHz and 2185 MHz), and one of them, 575 MHz, is shown in expanded form in Fig. 2. The data obtained from the graph is then used in a calibration set to determine the unknown constituents present in the blood. When the calibration set has more than 20 samples, it shows less error. Since the spectrum of every blood constituent is unique, the spectral data is fed to the ParLes software to work out the unknown values of blood constituents. Table I gives the actual concentrations of blood constituents in the experiment. The unknown concentration of cholesterol and the known concentrations of the others were fed to the multivariate system. Table II shows the predicted values of cholesterol, which are 43.75 mg and 48.75 mg, whereas the actual values are 42.5 mg and 51 mg respectively. The results attained are within +/- 5% of the actual content in the sample and are within the limits of the percentage error defined by WHO.
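As a quick arithmetic check of that claim, the relative errors of the two predictions can be computed directly; this is a minimal sketch using only the figures quoted above, with the 5% tolerance taken from the text.

```python
# Verify each predicted cholesterol value is within +/- 5% of the actual value.
pairs = [(43.75, 42.5), (48.75, 51.0)]  # (predicted, actual) in mg

for predicted, actual in pairs:
    error_pct = 100 * abs(predicted - actual) / actual
    print(f"predicted {predicted} vs actual {actual}: "
          f"{error_pct:.2f}% error, within 5%: {error_pct <= 5.0}")
```

The errors work out to roughly 2.9% and 4.4%, consistent with the stated +/- 5% bound.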

Friday, October 25, 2019

The Problem with Presidential Primaries Essay

Ever since the election season of 1972, presidential primaries have become "the dominant means of selecting the two major party candidates."[i] The primary system is one in which the eligible voters of each state do one of the following: 1) vote for a presidential candidate to run for their party in the general election; 2) vote for a delegate pledged to vote for a certain candidate at the party's national convention. As intended, this process would bring the candidate selection process out into the open and "let the people vote for the candidate of their choice."[ii] On the surface, this may look very democratic (and admittedly, in some instances it was/is), but upon closer examination, it becomes overwhelmingly clear that the candidates are chosen long before the people cast their vote. The culprit: the structure of the presidential primary system.

The most influential structural element of the new primary system is the newfound practice of the political parties choosing a favored candidate before the primary season. The parties then throw all their support and financial backing behind this candidate and instantly make him/her a front-runner. While this element is standard among the two parties, the remaining structure of the primary system differs between the two main political parties. While both the Democratic and Republican parties hold open and closed primaries, the two parties hold many of their state primaries on separate dates. Additionally, the two parties have different rules that determine how each state's delegates are allotted. The Democrats practice the proportional representation method of delegate allocation. The Republicans, on the other hand, pract...

... Online. Internet. 18 Mar. 2000. Available: http://www.thegreenpapers.com/Definitions.html#Prop.

[iv] "New Hampshire Republican Delegation 2000." The Green Papers: Election 2000. 1 Mar. 2000. Online. Internet. 18 Mar. 2000. Available: http://www.thegreenpapers.com/PCC/NH-R.html.

[v] "Delaware Republican Delegation 2000." The Green Papers: Election 2000. 9 Feb. 2000. Online. Internet. 18 Mar. 2000. Available: http://www.thegreenpapers.com/PCC/DE-R.html.

[vi] "South Carolina Republican Delegation 2000." The Green Papers: Election 2000. 4 Mar. 2000. Online. Internet. 18 Mar. 2000. Available: http://www.thegreenpapers.com/PCC/SC-R.html.

[vii] "The Green Papers: Election 2000 Presidential Primary Season." The Green Papers: Election 2000. 18 Mar. 2000. Online. Internet. 18 Mar. 2000. Available: http://www.thegreenpapers.com/.

Thursday, October 24, 2019

Animal Farm Analytics

Leaders use many tactics to withhold power and maintain control over ignorant people. Joseph Stalin, the leader of the USSR from 1922 to 1952, used many clever and sometimes gory techniques to keep his power over the Soviet people. These strategies are shown in George Orwell's allegory of the Russian revolution, Animal Farm. Napoleon, the self-proclaimed leader of Animal Farm and allegorical representation of Joseph Stalin, has quite a few crafty and cunning ways to retain his authority over the animals. For example, by only educating the piglets and dogs, Napoleon keeps the majority of the animals uneducated and ignorant and therefore easier to manipulate. By blaming mistakes and wrongdoings on Snowball, an exiled pig who is an allegorical representation of the exiled Russian leader Leon Trotsky, Napoleon is able to create a common enemy. This takes the blame off of himself and instills fear in the animals, making it easier for Napoleon to control the public. Finally, he trains puppies to become attack dogs and uses them as a police force, forcing the animals to obey his every word through fear of bodily harm. By keeping the masses ignorant and afraid, Napoleon is able to retain his power over Animal Farm.

Since he restricts formal education to the piglets and dogs, Napoleon is able to keep the remaining animals uneducated and docile, using their ignorance to his advantage. For example, after Napoleon murders many of the animals who are supposedly in league with Snowball, the animals are a bit uneasy because they recall a Commandment that states, "No animal shall kill any other animal" (Orwell 58). Muriel, a literate goat, reads the Commandment after the massacre, and it says, "No animal shall kill any other animal without cause" (151). She thinks that "somehow or other the last two words had slipped out of [her] memory. But [she] saw now that the Commandment had not been violated; for clearly there was good reason for killing the traitors" (165-166). Because Napoleon only educates the privileged few, the rest of the animals are oblivious to what is going on and believe everything that they are told. When Napoleon changes the Commandments, the animals blame their own faulty memories and proceed to believe whatever is written in the Commandment, because "Napoleon is always right" (111). Napoleon exploits the animals' gullibility when he modifies the Commandments to justify his atrocities and garner even more power. Since the animals only believe what they are told by Napoleon and the media, he is able to maintain his control over the farm. By limiting education to only a select few, Napoleon is able to manipulate the masses and get away with changing things to his benefit.

Napoleon creates a scapegoat and common enemy for the animals by blaming everything that goes awry on Snowball. This, in turn, brings about a sense of fear that helps Napoleon strengthen his rule. One instance where Napoleon executes this strategy is early in the spring, when the animals receive news from Napoleon that Snowball is secretly frequenting the farm at night and disturbing the animals in their sleep. After hearing this news, "the animals [are] thoroughly frightened. It seemed to them as though Snowball were some sort of invisible influence, pervading the air about them and menacing them with all kinds of dangers" (147). Blaming Snowball for everything that goes askew is a good way for Napoleon to create a common enemy and inculcate fear in the animals.
By putting Snowball in a bad light, Napoleon makes it seem as though he is the good guy and Snowball is the bad one. His actions make his reign seem "perfect," since everything is blamed on Snowball. This way, he receives no opposition. Additionally, by depicting Snowball as the reason for all their troubles, the rest of the animals look up to Napoleon to make the right decisions and lead them through this time of crisis. Animals that are afraid are always easy to control. Creating a scapegoat allows Napoleon to deflect the blame from himself and create nationalism within the animals, making it easier for Napoleon to rule.

Napoleon uses the dogs as a police force to control the animals through fear of bodily harm. After Jessie and Bluebell have nine puppies, Napoleon takes them and trains them in seclusion. They soon grow into nine vicious killer dogs. One day, Napoleon assembles the animals in the yard to confess their crimes. Any animal who opposed or rebelled against Napoleon steps up, confesses his crimes, and is slain on the spot by the attack dogs, right in front of the other animals. The dogs are ruthless and tear the animals' throats out. After witnessing this bloody massacre, none of the animals know why "they had come to a time when no one dared speak his mind, when fierce, growling dogs roamed everywhere, and when you had to watch your comrades torn to pieces after confessing to shocking crimes. There was no thought of rebellion or disobedience" (161). Napoleon, by publicly executing anyone who happens to displease or disobey him, sets a precedent of what will happen if any of the animals rebel. The attack dogs are able to crush any signs of rebellion. This puts Napoleon in supreme power, because every animal will do whatever he tells them to do for fear of being killed. By using the dogs as a means of control, Napoleon is able to crush any signs of rebellion and maintain his control of the farm through fear of physical harm.

In Animal Farm, the public is uneducated and afraid, which makes them much more easily manipulated by Napoleon and the pigs. By only educating the privileged, Napoleon makes sure that the general public is uninformed and therefore easier to control. Blaming Snowball for everything that goes astray creates a scapegoat and common enemy, which instills fear in the animals. This also makes Napoleon seem more perfect, so the animals are more likely to listen to him and give him more power. Napoleon also creates a "police force" from the attack dogs, who help control the general public through fear of physical harm. In Animal Farm, Napoleon goes to extreme lengths to remain in power, much like other dictators around the world. In the 1900s, Stalin did many violent things to keep control of the Soviet Union, including the mass murder of innocent people who spoke up against the government. Leaders go to extremes and use oppressive tactics to remain in control of their land. In future circumstances, the public should be careful not to trust their leaders too much or give them too much power; otherwise the leader will become the dictator of a totalitarian regime.

Wednesday, October 23, 2019

Exam Notes

Chapter 1 - PRE-MIDTERM

Study Questions:
1) What are the challenges of working in the new economy?
2) What are organizations like in the new workplace?
3) Who are the managers and what do they do?
4) What is the management process?
5) How do you learn the essential managerial skills and competencies?

Overview of the 21st century workplace:
- Organizations must adapt to a rapidly changing society
- The economy is global and driven by innovation and technology
- High-performing companies gain extraordinary results from the people working for them
- Interdependent, knowledge-based

STUDY QUESTION 1

Intellectual capital - People are the ultimate foundation of organizational performance. Intellectual capital is the collective brainpower or shared knowledge of a workforce that can be used to create value. A knowledge worker adds to the intellectual capital of an organization.

Globalization - National boundaries of world business have largely disappeared. Globalization is the worldwide interdependence of resource flows, product markets, and business competition that characterizes the new economy.

Technology - There is an increasing demand for knowledge workers with the skills to fully utilize technology such as the internet, computers and information technology.

Diversity - Workforce diversity reflects differences with respect to gender, age, race, ethnicity, religion, sexual orientation, and able-bodiedness. It creates a diverse and multicultural workforce that both challenges and offers opportunities to employers.

Ethics - A code of moral principles; society requires business to operate according to high moral standards. The emphasis today is on restoring the strength of corporate governance.

STUDY QUESTION 2

Some critical skills for success in the workplace are: mastery, contacts, entrepreneurship, love of technology, marketing, passion for renewal.

Organization - A collection of people working together to achieve a common purpose. Organizations provide useful goods and/or services that return value to society and satisfy customer needs.

Organizations are open systems - composed of interrelated parts that function together to achieve a common purpose and that interact with their environments. They transform resource inputs into product outputs (goods and services). Environmental feedback tells the organization how well it is meeting the needs of customers and society.

Organizational performance - Value is created when an organization's operations add value to the original cost of resource inputs. Value creation occurs when businesses earn a profit or nonprofit organizations add wealth to society.
- Productivity: an overall measure of the quantity and quality of work performance, with resource utilization taken into account
- Performance effectiveness: an output measure of task or goal accomplishment
- Performance efficiency: an input measure of the resource costs associated with goal accomplishment

Workplace changes that provide a context for studying management: belief in human capital, demise of "command and control", emphasis on teamwork, preeminence of technology, embrace of networking, new workforce expectations, concern for work-life balance, focus on speed.
STUDY QUESTION 3

Importance of human resources and managers: toxic workplaces treat employees as costs; high-performing organizations treat people as valuable strategic assets, and managers must ensure that people are treated this way.

Manager - a person in an organization who supports and is responsible for the work of others; managers help those whose tasks represent the real work of the organization.

Levels of management:
a) Top managers - responsible for the performance of an organization as a whole or for one of its larger parts
b) Middle managers - in charge of relatively large departments or divisions
c) Project managers - coordinate complex projects with task deadlines
d) Team leaders or supervisors - in charge of a small work group of non-managerial workers

Responsibilities of team leaders: plan meetings and work schedules; clarify goals and tasks and gather ideas for improvement; appraise performance and counsel team members; recommend pay raises and new assignments; recruit, develop and train team members; encourage high performance and teamwork; inform team members about organizational goals and expectations; inform higher levels of work-unit needs and accomplishments; coordinate with other teams and support the rest of the organization.

Types of managers:
a) Line managers - responsible for work activities that directly affect the organization's outputs
b) Staff managers - use technical expertise to advise and support the efforts of line workers
c) Functional managers - responsible for a single area of activity
d) General managers - responsible for more complex units that include many functional areas
e) Administrators - work in public and nonprofit organizations

Managerial performance and accountability - Accountability is the requirement of one person to answer to a higher authority for relevant performance results. Effective managers fulfill performance accountability by helping others to achieve high performance outcomes and experience satisfaction in their work.

Quality of work life (QWL) - an indicator of the overall quality of human experiences in the workplace. Some indicators are: fair pay, safe working conditions, opportunities to learn and use new skills, room to grow and progress in a career, protection of individual rights, and pride in the work itself and in the organization.

High-performing managers: build working relationships with others, help others develop their skills and performance competencies, foster teamwork, and create a work environment that is performance-driven and provides satisfaction for workers.

The organization as an upside-down pyramid: each individual is a value-added worker. A manager's job is to support workers' efforts. The best managers are known for helping and supporting.

STUDY QUESTION 4

Management is the process of planning, organizing, leading and controlling the use of resources to accomplish performance goals. All managers are responsible for the four functions, which are carried out continually.

Functions of management:
a) Planning - the process of setting objectives and determining what actions should be taken to accomplish them
b) Organizing - the process of assigning tasks, allocating resources and arranging the coordinated activities of individuals and groups to implement plans
c) Leading - the process of arousing people's enthusiasm to work hard and directing their efforts to fulfill plans and accomplish objectives
d) Controlling - the process of measuring work performance, comparing results to objectives and taking corrective action as needed

Managerial activities and roles:
a) Interpersonal roles - involve interactions with persons inside and outside the work unit
b) Informational roles - involve giving, receiving, and analyzing information
c) Decisional roles - involve using information to make decisions in order to solve problems or address opportunities

Characteristics of managerial work: managers work long hours, at an intense pace, at fragmented and varied tasks, with many communication media, and largely through interpersonal relationships.

Agenda setting - development of action priorities for one's job; includes goals and plans that span the long and short term.
Networking - the process of building and maintaining positive relationships with people whose help may be needed to implement one's work agendas.

STUDY QUESTION 5

Essential managerial skills:
Skill - the ability to translate knowledge into action that results in desired performance
Technical skill - the ability to apply a special proficiency or expertise to perform particular tasks (lower-level managers have more of this)
Human skill - the ability to work well in cooperation with others
Conceptual skill - the ability to think critically and analytically to solve complex problems (top-level managers have more of this)
Managerial competency - a skill-based capability that contributes to high performance in a management job. Managerial competencies are implicit in: planning, organizing, leading and controlling; informational, interpersonal, and decisional roles; agenda setting and networking.

Chapter 7 - PRE-MIDTERM

Study Questions:
1) How is information technology changing the workplace?
2) What is the role of information in the management process?
3) How do managers use information to make decisions?
4) What are the steps in the decision-making process?
5) What are the current issues in managerial decision making?

STUDY QUESTION 1

Knowledge and knowledge workers provide a decisive competitive factor in today's economy. Intellectual capital - the shared knowledge of a workforce that can be used to create wealth - is an irreplaceable organizational resource.

Electronic commerce - the process of buying and selling goods and services electronically through use of the internet.

Implications of IT within organizations: facilitation of communication and information sharing, operating with fewer middle managers, flattening of organizational structures, faster decision making, and increased coordination and control.

How IT is changing the office: progressive organizations actively use IT to help achieve high performance in uncertain environments. Key developments in networked offices are instant messaging and peer-to-peer (P2P) sharing.

STUDY QUESTION 2

Data - raw facts and observations.
Information - data made useful for decision making; it drives the management functions. Characteristics of useful information: timely, high quality, complete, relevant, understandable.
Information system - use of the latest IT to collect, organize and distribute data for use in decision making.
Management Information System (MIS) - specifically designed to meet the information needs of managers in daily decision making.
Decision Support System (DSS) - an interactive information system that allows users to organize and analyze data for solving complex and sometimes unstructured problems.
Group Decision Support System (GDSS) - facilitates group efforts to solve complex and unstructured problems (uses groupware).
Artificial Intelligence (AI) - computer systems with the capacity to reason the way people do.
Expert Systems (ES) - software systems that use AI to mimic the thinking of human experts.

Managerial advantages of IT utilization:
1) Planning advantages - better and more timely access to useful information, involving more people in planning
2) Organizing advantages - more ongoing and informed communication among all parts of the organization, improved coordination and integration
3) Leading advantages - improved communication with staff and stakeholders, keeping objectives clear
4) Controlling advantages - more immediate measures of performance results, allowing real-time solutions to performance problems

STUDY QUESTION 3

Performance deficiency - actual performance being less than desired performance.
Performance opportunity - actual performance being better than desired performance.
Problem solving - the process of identifying a discrepancy between actual and desired performance and taking action to resolve it.
Decision - a choice among possible alternative courses of action.
Programmed decisions - apply solutions readily available from past experience to solve structured problems, which are problems that happen often and are familiar.
Nonprogrammed decisions - develop novel solutions to meet the demands of a unique situation that presents unstructured problems; commonly faced by higher-level management.
Crisis decision making - a crisis involves an unexpected problem that can lead to disaster if not resolved quickly and appropriately.
Certain environment - offers complete information about possible action alternatives and their outcomes.
Risk environment - lacks complete information about action alternatives and their consequences, but offers some estimates of the probabilities of outcomes for possible action alternatives.
Uncertain environment - information is so poor that probabilities cannot be assigned to the likely outcomes of known action alternatives.
Systematic vs. intuitive thinking - systematic thinking approaches problems in a rational, step-by-step and analytical fashion; intuitive thinking approaches problems in a flexible and spontaneous fashion. Multidimensional thinking applies both intuitive and systematic thinking; effective multidimensional thinking requires skill at strategic opportunism.

STUDY QUESTION 4

The decision-making process:
Step 1 - Identify and define the problem: focuses on information gathering, information processing and deliberation. Decision objectives should be established.
Step 2 - Generate and evaluate possible solutions: potential solutions are formulated and more information is gathered; data are analyzed; the advantages and disadvantages of alternative solutions are identified.
Step 3 - Decide on a preferred course of action. In the classical decision model, managers act rationally in a certain world: they face clearly defined problems and have complete knowledge of all possible alternatives and their consequences, which results in an optimizing decision. In the behavioral decision model, managers act in terms of what they perceive about a given situation; this model recognizes limits to human information-processing capabilities, and managers choose the first satisfactory alternative.
Step 4 - Implement the decision solution: involves taking action to make sure the solution decided upon becomes a reality; managers need the willingness and ability to implement action plans.
Step 5 - Evaluate results: involves comparing actual and desired results; the positive and negative consequences of the chosen course of action should be examined.

STUDY QUESTION 5

Availability heuristic - people use information "readily available" from memory as a basis for assessing a current event or situation.
Representativeness heuristic - people assess the likelihood of something happening based upon its similarity to a stereotyped set of occurrences.
Anchoring and adjustment heuristic - people make decisions based on adjustments to a previously existing value or starting point.
Ethics double-check - any decision should pass this ethics test; ask yourself "How would I feel if my family found out about this decision?" and "How would I feel if this was published in the newspaper?" Ethical decisions satisfy the following criteria: utility, rights, justice, caring.

Chapter 2 - POST-MIDTERM

Study Questions:
1) What can be learned from classical management thinking?
2) What ideas were introduced by the human resource approaches?
3) What is the role of quantitative analysis in management?
4) What is unique about the systems view and contingency thinking?
5) What are the continuing management themes of the 21st century?

STUDY QUESTION 1

Classical approaches to management:

1) Scientific Management (Frederick Taylor) - Develop rules of motion, standardized work implements and proper working conditions for every job; carefully select workers with the right abilities for the job; carefully train workers and provide proper incentives; support workers by carefully planning their work and removing obstacles. (The Gilbreths) Motion study - the science of reducing a job or task to its basic physical motions; eliminating wasted motions improves performance.

2) Administrative Principles (Henri Fayol)
Rules of management:
a) Foresight - to complete a plan of action for the future
b) Organization - to provide and mobilize resources to implement the plan
c) Coordination - to fit diverse efforts together and ensure information is shared and problems are solved
d) Control - to make sure things happen according to plan and to take necessary corrective action
Principles of management:
a) Scalar chain - there should be a clear and unbroken line of communication from the top to the bottom of the organization
b) Unity of command - each person should receive orders from only one boss
c) Unity of direction - one person should be in charge of all activities with the same performance objective

Mary Parker Follett
Groups and human cooperation: groups are mechanisms through which individuals can combine their talents for a greater good; organizations are cooperating communities of managers and workers; the manager's job is to help people in the organization cooperate and achieve an integration of interests.
Forward-looking management insights: making every employee an owner creates a sense of collective responsibility (a precursor of employee ownership, profit sharing, and gain sharing); business problems involve a variety of inter-related factors; private profits should be weighed relative to the public good (a precursor of managerial ethics and social responsibility).

3) Bureaucratic Organization (Max Weber) - Bureaucracy is an ideal, intentionally rational and very efficient form of organization, based on principles of logic, order, and legitimate authority. Characteristics of bureaucratic organizations: clear division of labor, clear hierarchy of authority, formal rules and procedures, impersonality, careers based on merit.
STUDY QUESTION 2

Human resource approaches include:

1) Hawthorne Studies - The initial study examined how economic incentives and physical conditions affected worker output. No consistent relationship was found; "psychological factors" influenced the results. The relay assembly test-room studies manipulated physical work conditions to assess the impact on output and were designed to minimize the "psychological factors" of the previous experiment. Factors that accounted for increased productivity: group atmosphere and participative supervision. On employee attitudes, interpersonal relations and group processes: some things satisfied some workers but not others, and people restricted output to adhere to group norms. Lessons from the Hawthorne Studies: social and human concerns are keys to productivity; the Hawthorne effect - people who are singled out for special attention perform as expected.

2) Maslow's theory of human needs - A need is a physiological or psychological deficiency a person feels compelled to satisfy. Need levels: physiological, safety, social, esteem, self-actualization.
Deficit principle - a satisfied need is not a motivator of behavior.
Progression principle - a need becomes a motivator once the preceding lower-level need is satisfied.
(Both principles cease to operate at the self-actualization level.)

3) McGregor's Theory X assumes that workers: dislike work, lack ambition, are irresponsible, resist change, and prefer to be led. McGregor's Theory Y assumes that workers are: willing to work, capable of self-control, willing to accept responsibility, imaginative and creative, and capable of self-direction. Implications of Theory X and Theory Y: managers create self-fulfilling prophecies; Theory X managers create situations where workers become dependent and reluctant; Theory Y managers create situations where workers respond with initiative and high performance (central to notions of empowerment and self-management).

4) Argyris's theory of adult personality - Classical management principles and practices inhibit worker maturation and are inconsistent with the mature adult personality. Management practices should accommodate the mature personality by: increasing task responsibility, increasing task variety, and using participative decision making.

STUDY QUESTION 3

Management science (operations research) foundations - the scientific application of mathematical techniques to management problems. Techniques and applications include: mathematical forecasting, inventory modeling, linear programming, queuing theory, network models, and simulations. Quantitative analysis today: staff specialists help managers apply the techniques, and software and hardware developments have expanded the potential quantitative applications to managerial problems. Good judgment and an appreciation for human factors must accompany the use of quantitative analysis.

STUDY QUESTION 4

System - a collection of interrelated parts that function together to achieve a common purpose.
Subsystem - a smaller component of a larger system.
Open systems - organizations that interact with their environments in the continual process of transforming resource inputs into outputs.
Contingency thinking - tries to match managerial responses with the problems and opportunities unique to different situations, especially individual or environmental differences. There is no "one best way" to manage; the appropriate way to manage depends on the situation.

STUDY QUESTION 5

Quality and performance excellence - managers and workers in progressive organizations are quality conscious.
Quality and competitive analysis are linked. Total Quality Management (TQM) is a comprehensive approach to continuous quality improvement for a total organization; it creates the context for the value chain.

Global awareness - pressure for quality and performance excellence is created by a highly competitive global economy. This has promoted increasing interest in new management concepts - process engineering, virtual organizations, agile factories, network firms - and the adoption of Theory Z management practices.

Core factors of a learning organization:
- mental models
- personal mastery
- systems thinking
- shared vision
- team learning

In the 21st century, managers must be global strategists, masters of technology, inspiring leaders and models of ethical behaviour.