UNIVERSITY OF EDINBURGH, DEPARTMENT OF HISTORY
Which Contributed More to the Outbreak of the Spanish Civil War, The Reforms or the Weaknesses of Republican Governments?
KIERAN LEE
TUTOR'S NAME: JILL STEVENSON
20th of November 1990
The republican government took power in Spain on April 14th, 1931.
The Prime Minister was Alcala Zamora, who led a coalition government of republican radicals.
The republic inherited the traditional problems of the Spanish state: rural unemployment, industrial underdevelopment, and demands for local rights.
In trying to overcome these problems the first republican government of 1931-33 set the stage for things to come: the political alienation of both the right and left wings of the political spectrum in Spain.
Unable to find common ground, the moderate reforms produced fear on the right and disillusionment with republican politics on the left.
Increasingly, the middle ground was lost to the growing political polarisation of right and left.
It was the republic's failure to find a lasting democratic consensus that eventually pushed republican politics into violence and war.
There are three periods to republican reform prior to the Civil War.
These consist of the period between the abdication of the king in 1931 and November 1933, the period between November 1933 and the election of the Popular Front in 1936, and the period from February 1936 to the outbreak of the Civil War in July 1936.
Within this period republican politics became polarised between extreme right and left as reform and counter-reform disillusioned people with the republican Parliamentary system and placed politics in the streets.
Prior to the republican government reforms political and social power in Spain lay in the hands of a powerful minority of the church, army, and landowners.
Their massive landownership under the latifundio system, and the army's role in politics, enabled them to keep their power over the majority (70% of the Spanish population).
The republic failed to build on the potential support of this majority.
This failure was largely due to the doctrinaire approach to reform, which, as well as striving to establish a democratic consensus, sought to pay off old scores against the conservative right and to build a sectarian leftist regime, isolating moderate support.
In 1931-33 the main aims of the republican reforms were a) reform and reduction of the army, b) separation of church and state and a restriction on Catholic rights and privileges, c) reform of the unitary structure of the Spanish state to permit Catalan regional autonomy, and d) broad social and economic reforms.
These reforms, although essentially moderate in their nature, challenged the traditional power of the right-wing oligarchies with underlying social and political implications.
The reform of the army was carried out by the new defence minister, Manuel Azana, who was determined to break the political autonomy of the army.
The officer corps was reduced by 50%, with many officers retiring on full pay.
The overall size of the army was also cut.
This aroused the permanent hostility of many officers and actually strengthened the power of right-wing officers in the army, as republican officers, stifled by the hierarchical atmosphere, were glad to go.
The hostility of the army to the republic increased with the Catalan autonomy statute which represented to them the breaking up of the Spanish nation and a threat to national pride.
The anticlerical religious settlement was perhaps the most emotive and damaging reform of the republican government.
By cutting the state salaries of the clergy and ending religious domination of education, the republican government pushed into opposition the nearly 25% of the Spanish population who were Catholics.
This was an important loss of potential moderate middle class support, who moved to the right wing in opposition to the republic.
This split made it impossible to unite a broad democratic majority and left the republic divided almost from the start of the regime.
The fourth goal of the republic was social reform, primarily agrarian reform.
Legislation was introduced to expropriate land from absentee landlords and redistribute it to peasants.
Compensation was given for the land taken, but the programme was limited, as only 1% of the republican budget was allocated to it.
The main weakness of these republican reforms was that they threatened fundamental change but didn't fully implement it.
While the right's fears about a red revolution developed, the left became frustrated and disillusioned by the moderate nature of these reforms.
The republican group regarded the extreme left as just as great a threat as the right and was brutal in its suppression of C.N.T. and U.G.T. strikes.
By regarding the strikes as a definite act of political attack on the regime, the republican government strengthened left wing opposition and contributed to the socialist move left-ward.
Increasingly, the socialists declared that they saw the republic as a Bourgeois stepping stone in a transition to a socialist regime.
In the 1933 elections the socialists no longer associated themselves with the divided factions of the middle-class republican government, hoping to rally a broad following independently.
The Socialists wanted to dissociate themselves with the republican government, which was losing its strength and unity.
The Radicals also went into opposition and the Radical Socialists were split three ways; left, right, and centre.
As the socialists moved increasingly to the left of the republic, splitting the coalition government, the right gained support, reorganising itself into a united opposition to the republic under the umbrella of the C.E.D.A. The right wing capitalised on the Catholic reaction against republican anticlericalism in order to gain power.
Their goal of restoring the power of the Church hid their aim of overthrowing the republic.
The 1933 elections demonstrated the growing right wing reaction against left wing republican parties.
The republican government was unable to maintain a broad middle class moderate consensus as the C.E.D.A. became the most powerful party in the Cortes.
This signalled a new wave of reform and counter-reform as the right-wing Radical government strove to halt the reforms of the republic and introduce conservative policies.
In doing this, the government effectively started to destroy the republic from within.
As the republican system began to be associated with right-wing policies, the left was pushed to greater political extremism as it was very difficult for them to effect political change within the Cortes.
Strikes grew more frequent and violent, and their harsh suppression by the government increased hostility towards the republican regime.
This became a signal for revolutionary insurrection in Catalonia and Asturias by the C.N.T.-F.A.I.
The insurrection was eventually put down by the government.
Having seen its power threatened in this way, the government became increasingly reactionary, forging closer links with the army and the semi-fascist Falange movement.
The republican state was undermined by the right wing counter reforms which eased the road to violence.
Left wing hostility to the republic unified its various factions into a new "Popular Front" in 1936.
The reactionary politics of the government made sure that two mistakes of 1933, left wing division and the aloofness of the C.N.T. to the parliamentary system, were not repeated.
In February 1936 the Popular Front won the elections by a slender majority.
The Popular Front government found it more and more difficult to maintain its legitimacy.
Politics was increasingly moving outside the legal boundaries of the Cortes into street violence and C.N.T. revolutionary committees.
These committees were spreading throughout Spain on a local and provincial level, weakening republican power.
The socialist government feared losing power to the C.N.T. unless it held out a revolutionary future to the masses.
Though the right wing had lost much of its political leverage in the Cortes, their opposition found voice in growing street violence and eventually a military uprising which was to start the civil war.
The outcome of the civil war was a victory for the right wing forces and the establishment of a military regime under Franco.
The state of the political parties was reflected in the civil war.
The right wing put forward a united front with the clear aim of overthrowing the republic.
The left was divided and split about tactics and what the war should achieve.
They fought within themselves, weakening their fight for the republic.
The right wing was also helped by international support from Italy and Germany while the republican side received only limited aid from the U.S.S.R.
Both the reforms and the weaknesses of the republican governments contributed to the outbreak and outcome of the Spanish Civil War: that is, the reassertion of conservative military rule in Spain.
The army had traditionally been the guiding force in Spanish politics.
The republic failed to curb its powers.
The reforms of the army consolidated the right wing within it and increased its hostility to the republic rather than limiting its powers.
The weaknesses of the republic represented the weaknesses of Spanish society, principally the massive divide between the powerful conservative minority and the majority of landless labourers.
This was reflected in a polarised and divided political system in which it was almost impossible to build a common democratic consensus, especially as many of the moderate middle classes were pushed into right-wing opposition to the republic by the first government's anticlericalism.
The reforms made the splits already present within the republican governments yet more defined and were used as tools to benefit the ruling regime, rather than build a broad democratic consensus.
Thus politics became deeply split.
Unable to resolve their conflicts within the legal political arena, the parties were pushed to violence and war, with the united right-wing army regaining its traditional power over Spanish politics.
UNIVERSITY OF EDINBURGH, DEPARTMENT OF POLITICS
Does Money Corrupt the Electoral Process in the United States?
KIERAN LEE
TUTOR'S NAME: PIPPA NORRIS
25th of November 1990
Money is an important part of the electoral process.
It enables candidates to use the media and campaign techniques which are now an essential part of the electoral process in America.
The expenditure involved with election campaigns has been rapidly increasing since 1976.
In 1976, direct political expenditures in the congressional elections totalled $104,346,477; by 1986 this had risen by 359% to $476,804,432.
This has given rise to fears that the electoral process in the US has become corrupted, as money increasingly "buys" political influence through elaborate election campaigns that concentrate power in the hands of a "small monied minority" [Alexander].
Campaign finance is related to broader aspects of the American political system.
These are constitutional limits on the regulation of politics, the separate elections to separate branches of government (many for fixed terms), the role of the media, and candidate centred campaigns in a climate of weakened political parties.
These have all contributed to a growth in campaign spending.
Ironically, the legislation introduced to reform campaign funding helped to facilitate increased spending in politics.
Reform began with the Revenue Act of 1971 and the Federal Election Campaign Act (FECA), together with its subsequent amendments.
These set limits on the level of contributions, with the aim of preventing candidates from becoming obligated to special interest groups.
H. E. Alexander describes the four basic aspects of this legislation as: public disclosure of the monetary influences on elected officers; expenditure limits, to meet the problem of rising costs; contribution restrictions, to meet the problem of candidates obligating themselves to certain interest groups; and public funding, which aimed to provide an alternative source of funding to replace the contributions prohibited and limited under FECA.
These reforms affected the manner in which money was involved in campaign funding with wide-ranging implications on the US electoral system.
The primary sources of campaign funding are Political Action Committees (PACs).
A PAC is a group representing some interest, such as labour or, more specifically, abortion or gun control.
There is great diversity and variety among PACs as they represent different values and beliefs.
The Federal Election Campaign Act, its amendments of 1976, and the Buckley v. Valeo ruling of the Supreme Court, also in 1976, made it easier to establish PACs.
No limit was imposed upon the spending of a PAC on behalf of a candidate as long as its activities were not coordinated with the candidate.
The Buckley v. Valeo ruling supported the rise in campaign spending, as it declared it unconstitutional to limit a candidate's personal contributions to their own campaign.
As PACs have become the major source of campaign funds, inability to win PAC support may mean a candidate cannot mount an adequate campaign.
The rise of PAC power has three main effects: the nationalisation of resources, the purchase of access and influence, and the distortion of partisan balance and electoral competition.
Political funding through PACs is creating a centralised system that puts the loyalty of candidates to the national source of funding above that to their local constituency.
Politics is moving out of political constituencies into "resource" constituencies.
The fact that PACs tend to support incumbents as safe options for re-election has implications for democracy in America.
Party loyalty has weakened in favour of PAC support in the choice of candidate, and who can run for office is limited by the need for PAC support, which is mainly given to incumbents.
Thus a political elite with PAC backing is created.
David Broder argues that PACs are a threat to the democratic ideal, leading to gutless government which avoids taking controversial stances for fear of losing PAC support.
These changes cannot be seen in isolation.
The growing power of campaign funding and its effects are also related to changes in the American political and social system which help to facilitate these effects.
As Sorauf points out, "political money reflects the configuration of influence on American politics".
Changes in American electoral politics, the role of the mass media and campaign technology, patterns of voting behaviour, and the candidate focus of campaigns are all interrelated and contribute to the way money is used in the political system.
Sorauf argues that fundamental changes in American society and politics, rather than money itself, made candidates and interest groups the chief actors in campaign finance; they then turned to money as a result of their enhanced power.
We can see how money is important in the American political system as funding is vital to fight a campaign.
Money can corrupt the political system through the power of PACs and the use of independent expenditure.
This hinders competition by helping to entrench the power of incumbents, while increasingly wealthy individuals are able to launch their own campaigns from personal funds.
This is especially true in Senate and House elections which receive no public funding.
Dahl points out that a person with a low income faces greater difficulty in fighting a campaign than a wealthy person but that money is not the only factor when it comes to winning an election.
The predisposition of the voters, issues of the moment, advantages of incumbency, and the support of various groups are all related to the final vote and often more important than money.
Money as an "independent base" cannot win elections, although it is vital in launching a successful campaign.
For, as Loomis points out, even though Republicans received more funding than Democrats between 1930 and 1970, the Democrats held power in the Senate and House of Representatives more often than the Republicans.
Thus we can see there are structural limits to the power of money to win elections though money is vital to purchase the resources necessary for a successful campaign, e.g. media.
There are growing doubts about the ability of money to win election campaigns.
Too much spending by a candidate can in fact cause a backlash.
This happened in Iowa, Ohio, and Vermont during the 1986 elections.
The candidate can become too closely related to the PAC money and thereby become alienated from local politics.
Alexander points out there is little scientific evidence about the incremental value of various levels of campaign spending or the effectiveness of different campaign techniques.
Campaign funding is most effective in marginal elections and pre-nomination campaigns where there is evidence that the wealthy have a head-start.
Sorauf points out that public opinion has become increasingly sceptical about the use of money in campaign politics.
The public tends to be wary of public funding, but accepts limits on contributions and expenditure.
The rapid development of PACs started to get an increased level of press coverage in the 1982 elections and there were growing calls for reform.
The main theme of this reform was to reduce the power of PACs, which were using election money to attain legislative influence.
The reformers believed these problems could be solved through reform of campaign finance laws.
Sorauf identified the main aims of the reforms.
These were to ensure accountability, limit the influence of contributors, promote competitiveness, support the two-party system, encourage a broad base of contributions, control the burdens of fund-raising, ensure campaign communication, and protect political freedom.
The major legislative alternatives for reform were included in four bills.
The Synar bill tried to limit PACs directly and replace some of the money lost through campaigns by increasing the contribution limit for individuals.
The Obey bill also aimed to directly limit PACs and to create public finance in order to replace lost money.
It was hoped that this would reduce independent expenditure and thus overall spending.
The McHugh and Conable bill left PAC and candidate spending untouched.
It tried to decrease the independent expenditures of PACs by repealing the existing tax credit, thus making it more attractive to seek small individual contributions and thereby reduce the power of the PACs.
The Frenzel and Laxalt bill proposed to reduce the power of PACs by increasing the role of political parties.
Little legislation was actually passed; proposals were successfully fought off with arguments that reducing PAC power would only increase power in other areas of campaign spending.
However, two changes to FECA were introduced.
Firstly, in the 1986 reform of federal taxation, Congress dropped the tax credit for individual contributions.
The second change was the effect of inflation, which reduced the real value of the statutory limits on contributions by more than half.
A $5,000 individual contribution in 1974 was worth only $2,252 in 1974 dollars twelve years later.
Increasingly it seems that the partial triumphs of the Federal Campaign Act are slipping away as individual contributions are getting larger and more frequently come from outside constituencies.
This essay has tried to show that money is an important part of the American political system and that its importance has been increasing.
Money can cause "corruption".
As evidence has shown, wealthy candidates receive a head-start in pre-nomination campaigns and in marginal congressional seats as they are able to help finance their own campaigns.
Incumbents with PAC support can ensure their power and keep out competition.
The power of money is changing.
This reflects structural changes in the American political system and changes in society.
Interest groups have become increasingly important to the electoral process, changing the role of campaign financing through the use of PACs.
PACs have helped to create a political elite of incumbents, whose power is secure with PAC backing.
However, money itself does not win elections.
Campaign financing is encouraged and yet limited through structural features of the American political system.
UNIVERSITY OF EDINBURGH, DEPARTMENT OF HISTORY
POLITICAL POWER IN THE SHAPING OF SOCIETY IN EUROPE, 1870-1940
ESSAY 7: "Their motives for social reform were far from benevolent, but in practice they created embryonic welfare states."
Discuss this view of social policy in Mussolini's Italy and Hitler's Germany.
KIERAN LEE
TUTOR'S NAME: Jill Stephenson
2nd of February 1991
In Mussolini's Italy and Hitler's Germany, social reform was a keystone of their policies.
Mussolini and Hitler concentrated power to create highly centralised systems with social policies aimed to create a disciplined mass following.
Hitler and Mussolini manipulated social policy to maintain their power, using social reform as a way of controlling society and creating a totalitarian basis of support.
This process was more thorough and effective in Nazi Germany than Fascist Italy.
However, in both regimes there was a difference in what they aimed to do and what they actually achieved.
The aims and motives of Mussolini's and Hitler's social policy were primarily political and economic.
Social reform was part of the policy of creating a totalitarian system through Gleichschaltung or "coordination": that is, the coordination or synchronisation of society and all its institutions to the Fascist or Nazi state.
This meant the subordination of all individuals and self-governing bodies to the government.
The first aim of Mussolini's and Hitler's regimes was the creation of "National Solidarity" (Schoenbaum, page 47) and a "disciplined support of the masses" (De Grazia, page 3) to consolidate their political base.
Hitler and Mussolini aimed to transcend all differences of class and society and unite society in a "supra class" (De Grazia) in which national identity rather than class or occupation determined one's place in society.
Thus we can see how social reform in Mussolini's Italy and Hitler's Germany was part of political, economic, and social control of all individuals and institutions in society, to the Fascist and Nazi cause.
Pro-natalist policies were the keystone of Mussolini's and Hitler's social reform program.
These policies were "expansionist in aim" (Borrie) and highly developed, embracing many measures for discouraging celibacy and encouraging marriage and large families.
Both Mussolini and Hitler saw population policy as not only a means of increasing the population but also as a means of social control, using both repressive and persuasive measures.
In his Ascension Day speech in May 1927, Mussolini asserted that "demographic power conditioned the political and thus the economic and moral power of nations".
In Italy pro-natalist policies were bound up with emigration and "anti-urban" policies.
Both emigration and internal migration became increasingly controlled.
To prevent Italians leaving, emigration was made increasingly difficult.
Within Italy migration was connected to the aim of moving people from the cities, where birth rates were lower, to the newly reclaimed land.
Reclaimed land also made more areas available for agriculture.
It was hoped that this would increase Italy's own food production and its economic independence as part of the policy of autarky.
In Italy repressive measures were prominent in pro-natalist policies.
Severe penalties were introduced for abortions and the sale of contraceptive devices.
Preference in employment was shown towards fathers of large families and a bachelor tax was imposed on single men to induce them to marry.
More constructive measures were also introduced to help families with children financially and medically.
Marriage loans were introduced, grants were made available to cover medical costs and family allowances increased in amount and coverage.
Tax concessions were also introduced for very large families.
However, these did not make much difference as the largest families tended to be the poorest families who paid little in the way of direct tax.
The Opera Nazionale coordinated various measures relating to mother and child welfare.
These policies had little effect on the Italian birth rate in the period 1930-1932, and between 1933 and 1937 not a single department showed a rise in fertility (Glass).
In 1937 the number of births and level of fertility did rise.
This coincided with the intensification of the pro-natalist campaign, but was probably due to the soldiers returning at the end of the Abyssinian war and the resultant rise in the number of marriages.
By 1939, the downward trend in the Italian birth rate was again apparent.
In Germany the pro-natalist measures introduced tended to be more effective than those introduced in Italy.
In his pro-natalist policy it can be seen that Hitler was continuing trends that began around 1918 when Germany had the lowest birth rate in Europe.
In 1917 the First German Association of Large Families was founded and at least two active pro-natalist organisations were established.
The techniques Hitler used were broadly similar to those used in Italy.
Stringent measures were taken to prevent abortion and the sale of contraceptive devices; birth control clinics (which were more prevalent in Germany than Italy) were closed down.
Marriage and the procreation of "Aryan stock" (Borrie) were encouraged with financial incentives, cheap medical care, and housing.
The "Bureau for Explaining Population Policies" was set up to popularise large families, and medals were given to prolific mothers.
From 1933 to 1939 the German birth rate rose by more than a third, from 14.7 to 20.3 per 1,000 (Glass).
Whether this was a result of improved economic conditions on the one hand or the introduction of pro-natalist measures on the other is difficult to assess.
Thus we can see how Germany's pro-natalist policies had a greater measure of success than those in Italy.
Social policy in Mussolini's Italy and Hitler's Germany also had broader aspects.
They related to attempts to control society through mass organisations and to bring industry and the economy within the framework of the Fascist or Nazi state.
Policies to encourage marriage and child-birth were also designed to create a new army of workers and soldiers to serve the state.
Through these policies it was also hoped to pull women out of the workplace, by glorifying the role of the nurturing family woman, and thereby to reduce unemployment.
Through the population policies of Fascist Italy and Nazi Germany we can recognise broader aspects of social policy relating to the economy and the creation of a disciplined mass society, with the state attempting to repress and coerce the population at the same time.
However, what effect did these social policies actually have on the population?
Many historians agree that there is a difference between what the state aimed to achieve and what it actually achieved: between what was set out on paper as theory and what actually happened in practice.
Nazi Germany tended to carry out its policies with greater thoroughness and efficiency than Mussolini's Fascist regime.
In Italy traditional social cleavages survived more openly than in Germany.
Fascist attempts to create a mass, classless, and disciplined society instead maintained, and sometimes exacerbated, traditional social divisions.
The main differences between Italian Fascism and Nazi Germany seem to be in the greater degree of autonomy of industry and the failure of Mussolini to unite the middle classes and the working classes even superficially.
Sarti points out that industry, under Fascist rule, "managed to retain a degree of autonomy… preserve traditional prerogatives and broaden its access to government", going against the totalitarian goal of "being absorbed in the state" (Sarti).
Through mass organisations such as the Dopolavoro, ONB, and GIL, the Fascist regime attempted to influence the social and leisure time of the masses.
These movements, especially the youth movement GIL, tended to keep the middle and working classes apart, reflecting deep social divides in Italy.
Victoria De Grazia points out that the Italian Fascists practised what might be described as "selective totalitarianism", which had little of the "compulsive thoroughness" of Nazi Gleichschaltung or synchronisation.
Italian fascism tended not to regulate groups which posed no obvious threat to the regime.
Nor did it seek to extend state and party control into areas of civil life which served no immediate political and economic end.
Fascism did "intrude" (De Grazia) upon society, using the state apparatus to legitimate its rule through force and consent.
De Grazia points out that there was a fundamental transformation in the way power was exercised and that this did influence the state.
However, Tannenbaum points out that Italian society remained as "hierarchical and divided as before".
Thus we can see how the social policies of Italian Fascism failed to meet their objective of creating a classless society, even though they did change the nature of the state and politics.
Germany put its social ideology and policy into practice in a much more vigorous way than Italy did.
On the surface Germany seemed far more successful than Italy in creating a classless society.
Ley declared, "we are the first country in Europe to overcome the class struggle."
However, Schoenbaum points out the superficial level of social change, pointing to the "schizophrenia" of Nazi society, where it "could be seen everything had changed and nothing had changed."
While he points out that the "approximation of class and status came to an end" and that "traditional class structures broke down", he argues that there was a "dual society" in Nazi Germany: one in which the "addition of a uniform or lapel pin" could invalidate traditional ties.
Nazi life was detached from traditional underlying relationships which still continued.
"Employers remained employers even when addressed as workers".
The real triumph of National Socialism, argues Schoenbaum, was not so much the creation of a new society as the creation of a "new social consciousness".
Dahrendorf emphasises the failure of Nazi Germany to establish a complete, totalitarian basis of support through social reform.
He points out that coordination was a dominant tendency, but it was not total.
The hold of the state and party "did not extend into every family" and some sections of society "maintained a certain autonomy".
Thus we can see how social reform in Fascist Italy and Nazi Germany did not fully achieve its aim of a classless, mass-disciplined society, though in Germany traditional cleavages were less obvious than in Italy.
Did the social reform of Fascist Italy and Nazi Germany create "embryonic welfare states"?
In both Italy and Germany legislation relating to social reform was introduced.
But as Syrup points out, this could only be called "minimal", though it was not retrogressive.
When the Fascists and National Socialists took office they promised fast and far-reaching social reforms.
However, virtually nothing came of them except occasional improvements.
For instance, in Germany between 1935 and 1939 legislation was introduced, guaranteeing work for the building trade, creating a unitary administration of public employment, limited obligatory old-age insurance, and health insurance for some trades.
In Italy, due to the low level of wages, the fascist trade unions secured complementary forms of compensation such as family subsidies, sick pay, year-end bonuses, and paid national holidays.
However, it must be remembered that despite these minimal gains, labour lost all of its institutional rights, the right to organise, freedom of movement, the right of collective bargaining, and freedom of vocational choice.
The limited social policy introduced by Mussolini in Italy and Hitler in Germany should be seen within the context of a totalitarian Fascist/Nazi state.
The legislation can be seen as a by-product of these regimes' attempts to create a mass, totalitarian basis of support.
Structurally speaking, it can be said that the highly centralised structure of these states and the minimal legislation introduced could constitute an "embryonic welfare state".
However, morally speaking, these states cannot be seen as catering for the betterment of the broad masses but for the consolidation of a ruling group's power.
The minimal structural nature of these "welfare states" can be seen to have arisen out of their regime's policies rather than be created by them.
UNIVERSITY OF EDINBURGH, DEPARTMENT OF HISTORY
POLITICAL POWER IN THE SHAPING OF SOCIETY IN EUROPE, 1870-1940
ESSAY 7 - Did Tsarism show any real signs of re-establishing its grip on Russia in the period after 1905?
KIERAN LEE
TUTOR'S NAME: Jill Stephenson
28th of April 1991
The Autocracy in Russia, following 1905, increasingly faced new challenges and a growing opposition that led to its eventual downfall.
In 1905, the massacre on Bloody Sunday galvanised political opposition against Tsarism and was seen by Liberals and Marxists alike "as the first engagement of a larger and longer battle" (Rogger).
The Bolsheviks and Mensheviks proclaimed Bloody Sunday as the "beginning of revolution" (Rogger).
The threat of violence and real fear of revolution prompted the Government to adopt limited constitutional changes.
The Witte concessions in the October Manifesto aimed to close the breach between state and society, the gap between "the inherited forms of Russian social and political life and the emergence of new forms of opposition" (Sir Arthur Nicholson).
This essay will examine the forces that led to Tsar Nicholas losing his "grip" on Russia: how the Tsar tried to re-establish his power, and whether the destruction of the autocracy was a political mishap or part of deeper forces which made the Russian "road" to a constitutional state an impossibility.
The seeds of crisis and the destruction of the Russian autocracy were apparent well before 1905.
The growth of a heavy industrial base, itself not opposed by Tsarism, unleashed enormous tensions in Russian society and politics.
The major force of industrialisation was foreign policy.
Russia's desire to maintain her status as a great power demanded a program of modernisation and rearmament.
Increasingly, to finance this, the Russian economy was tied to foreign loans and an erratic international money market.
Mendel points out that this helped pave the way for economic crises that aggravated social discontent before 1905.
Sergei Witte, the Tsar's minister of finance, realised that creating an industrial base for Russia had risks.
His gamble that industry could produce improved living conditions before the peasantry and proletariat found the burden intolerable did not come off.
Kochan supports this view about the economic burden of industrial development and points out that "industrial development and its implications… were the most potent challenge to the status quo".
Russian society became increasingly more diverse and isolated from the state.
New merchant and professional classes arose and a proletariat developed out of the peasantry.
At the turn of the century, Russian society was changing, and it was the challenge of the autocracy to absorb these changes or push increasingly alienated social groups further into political and social isolation as their grievances went unmet, or even unacknowledged, by the ultimate power in the state, the Tsar.
Kochan demonstrates how the autocracy failed to absorb these grievances and changes in society, especially those of the poor workers and peasants.
It was the mass of poor Russians, the workers and peasants that bore the cost of industrialisation.
Mendel argues that the government's policy of a high rate of indirect taxation and the high price of imported goods meant that industry, far from bringing benefit to the masses, actually contributed to their impoverishment.
The failure of the government to acknowledge the "irredeemable poverty" (Kochan) of the masses forced the peasants and workers to demonstrate their grievances.
The workers and peasants increasingly found a common economic opposition to the Tsar.
Kochan further argues that the grievances of the workers and peasants supported each other.
He points out that the working classes consisted mainly of peasants forced off the land through extreme poverty.
However, the workers' ties to the countryside remained strong due to the seasonal nature of factory work, the closeness of factories to the countryside, and the legal and family ties of the workers to the peasant commune.
The harnessing of the village to the town in this way helped to bring radical ideas to the village and give a political voice to the problems of poverty and land shortage in the countryside.
The hardship and poverty of the workers meant that, increasingly, strikes became part of industrial life in the towns.
Kochan points out that from 1895 to 1905 the strike movement grew, and that increasingly it had political rather than simply economic concerns.
The workers suffered not only from low wages but also from a sense of resentment, humiliation and social isolation at taking factory work.
Their grievances were almost totally ignored by the government; since it suppressed almost any step taken collectively by the workers to improve their economic position, the political aims of the workers became dominant over the economic ones.
As the measures of repression by the government grew, it became necessary to use the army to quell strikes and disturbances.
This demonstrated the disturbed state of Russian industrial relations, as the proletariat remained outside society, deprived of any stake in the existing order.
Thus we can see how the isolation of the Tsar and the grievances of the masses, especially after Russia's defeat in the Russo-Japanese War (which severely aggravated poverty and hardship), forced the peasants and workers to widen their initial economic grievances into demands for political change.
In a state where autocracy sought to monopolise all forms of political life no political party could really exist.
Yet within two years three parties illegally took to the field in opposition to Tsarism, demonstrating the strength of feeling against the autocracy.
In 1901 the Social Revolutionaries were formed and in 1903 the Liberal Party and Social Democrats were established.
The quickening of political life "demonstrated the increasingly fluid state of Russia at the turn of the century" (Kochan).
Rogger points out that Bloody Sunday "galvanised the political opposition", and the following uprising, triggered by the massacre in St. Petersburg, demonstrated the grievances of the masses.
The real fear that this revolution might overthrow the Tsar forced him to make some political concessions to appease the masses.
Thus a limited constitution with a representative body, the Duma, appeared in Russia for the first time.
Riha, in his book "Russian Constitutionalism", shows how the Tsar tried to reassert his power and how the forces of opposition responded.
He points out that on April 24th, 1906 the Emperor Nicholas II addressed the deputies of his first Duma, stating that the Duma represented the "rebirth of her best forces".
However, many remained sceptical, remembering the Tsar's pledge in 1895, in a speech to Zemstvo representatives, to maintain autocracy.
The first Russian parliament was established in a country which had done little to prepare the ground for constitutionalism.
Rogger points out that any moves towards representative democracy were blocked by a negative attitude towards representative institutions, an attitude shared by the Tsar.
Municipal and rural self-government remained "weak plants in autocratic Russia" (Rogger).
Despite the promises of the October manifesto in the spring of 1906, there were still no guarantees of political freedom.
Thus, in 1906, Russian constitutionalism was born into an atmosphere of "promise and denial" (Riha).
The parties' anger and resentment at the limited nature of government action did not augur well for the smooth functioning of the Russian Parliament.
The revolutionary experience of 1905 pushed Russia into a semi-constitutional order in which the emperor ceased to be seen as an autocrat.
However, Tsar Nicholas II still retained formidable powers: an absolute veto on all legislation, full control of foreign affairs, emergency powers under Article 87, and the appointment of half the members of the State Council, which could veto the Duma.
However, Riha points out, Russia now had an instrument by which popular will, even if filtered by restrictive suffrage, could be expressed in legislation.
The individual citizen was beginning to learn he had recourse against the all powerful government.
The results of the five years of legislation were modest, but not insignificant.
However, the hostility of the Duma increased as the country as a whole became more radical.
The Duma, meanwhile, was reduced from a legislative to a consultative body and hedged by all manner of restrictions.
In May 1914 a resolution against the government was passed: "the ministry's activities arouse dissatisfaction among the broad masses who have hitherto been peaceful.
Such a situation threatened Russia with untold dangers".
The war of 1914 and the outburst of patriotism saved the government from an approaching storm.
But the military failures revealed the weaknesses and incompetence of the regime.
Riha points out that the monarch was a vital prerequisite for a constitutional state in Russia.
However, Tsar Nicholas II did everything to discredit himself in the public eye; instead of cooperating with his nation he withdrew further from realities.
Van Laue asserted "there was no chance for a liberal constitution in Russia whatsoever".
Historians are divided into two viewpoints about the Tsar's ability to reassert his power and avoid revolution: the optimists and the pessimists.
The optimists feel that, in the last decade of imperialism, Russia was trying an evolution to Western style parliamentary government and open society.
The pessimists argue that there were deeper forces determining the fate of Russia toward a Bolshevik denouement.
The optimists feel that in the absence of war Russia would have continued on the road to progressive modernisation, through the pressures of the "growing labour movement, the liberal middle class and the socially conscious intelligentsia" (Mendel).
The optimists point out that the most substantial progress in Russian political life was the establishment of the Duma monarchy (1906-1914), the closest Russia came to constitutional government.
During the reign of Nicholas II, speedy industrial growth, the transformation of peasants into small proprietors through the emancipation and Stolypin's reforms, the spread of education, and local government through the Zemstvos and the Duma all contributed to create a constitutional climate in Russia.
The Bourgeois revolutions of 1905 and 1917 seemed to the optimists a natural development on the Russian road to a Western style constitutional state.
However, in 1917, a new revolutionary wave was in ascent.
There was an increase in strikes and an intensification of the strain between the opposition and the state, which increased trends towards united political activity.
The war raised discontent to revolutionary pitch.
And once again the autocracy appeared unable to mobilise effectively the resources it had created, through an industrial economy, to fight a modern war.
Growing discontent was led by a well-organised, unified opposition front, and the Rasputin affair further undermined the legitimacy of Tsarism, which now faced an opposition stronger than in 1905.
The essential question, Mendel points out, is whether a social and economic transformation could take place without altering the political system.
Mendel's answer is no: not only did economic resources have to change through modernisation, but the traditional political regime had to be replaced by the urban classes, with commercial and bourgeois interests the main beneficiaries of change.
The case of the optimists is under persistent attack.
The most vigorous is from Professor Theodore Van Laue, who allows "no chance for a liberal constitutional Russia whatsoever".
Van Laue is willing to acknowledge the progressive developments cited by the optimists: a united opposition, economic expansion, advances in education, and improved peasant status.
But he readily discredits them, stating that "next to nothing was accomplished in regard to the basic necessities confronting the country".
He sees the revolution of 1905 as a "disintegrating halfway stage rather than a hopeful beginning", stressing the "superficial glow" and "gloss of success".
Van Laue puts forward the argument that Russia was fatally trapped in an agonising contradiction that precluded the establishment of a parliamentary government, or indeed of any government other than the Bolsheviks.
The main proof Van Laue offers for this argument is the contradiction between global conflict and a backward society, which made a liberal constitution impossible.
He states that economic development in Russia was impossible without rapid industrialisation, which was incompatible with a liberal constitution.
Why was this incompatible?
Van Laue points out that rapid industrialisation would never have found majority support under a parliamentary regime, "especially as Russian liberalism was largely agrarian in orientation".
He also points out that "the freedoms of the Western model were incompatible with government initiative in the Russian tradition" and "parliamentary government under Russian conditions" simply could not solve Russian problems.
Stavov supports Van Laue, arguing that the only way Russia might advance fast enough to avoid the disaster of inevitable global conflict was through a "totalitarian planned economy".
Liberal society was unprepared to withstand the "shocks" of economic development.
However, Van Laue points out, the "shock" of economic development could be managed by the Bolsheviks.
This view is shared by Kochan, who argues that only a highly disciplined party with techniques of mass organisation could replace the autocratic system that had held a fractured empire together.
Van Laue supports this: "the Soviets proved what seemed to be highly effective solutions to the crises that confronted the Russian state under the Tsars".
Van Laue's assertions about the inevitability of a Bolshevik revolution can in part be supported by the revolutionary theories of the Bolsheviks themselves.
If we examine the revolutionary theory of the Bolsheviks we can see how they shared with Van Laue many common points about the nature of Tsarism in Russia and the crises which it faced.
This is especially so if we examine Trotsky's contribution to the Bolsheviks' theory of "permanent revolution".
Like Van Laue, Trotsky correlates the Russian and the "global".
"For backward countries", writes Trotsky, "the road to democracy passed through the dictatorship of the proletariat.
Thus democracy is not a regime that remains self-sufficient for decades but is only a direct prelude to the socialist revolution."
In other words, "the Russian revolution will create conditions in which power can pass into the hands of the workers… before the politicians of bourgeois liberalism get the chance to display to the full their talent for governing".
We can see how Trotsky stressed the urgency of revolution before a middle class developed out of the Tsar's industrial program.
Thus the crisis facing the autocracy through the alienation of the "masses" was seen by the Bolsheviks as a ripe time to launch a revolution before the bourgeoisie had time to develop.
However, Trotsky also had to affirm that although the Russian proletariat in power could win the support of the peasantry by appropriate measures, it would be unable to maintain itself in power or pass over to a socialist regime in Russia "without the direct state support of the European proletariat".
This, Kochan points out, is where the theory fell down in its optimistically erroneous assessment of the prospects of revolution in the Western World.
Yet, Kochan continues, if it was not a guide to what happened after 1917, it retains its importance as a dynamic diagnosis of the weakness of Tsarism and the force required to overthrow it.
Did Tsarism show any real signs of re-establishing its grip on Russia in the period after the 1905 revolution?
Russian Tsarism faced a challenge of absorbing new elements of society into the Russian social system and dealing with the political consequences of the economic change it introduced.
It offered limited change in the October Manifesto and the creation of a representative body, the Duma.
However, these changes can be seen as largely superficial, a way of retaining the power of the autocracy rather than a genuine effort at creating a constitutional state.
The reduction in status of the Duma from a legislative to a consultative body illustrates this.
As does the negative attitude of the Tsar and his court represented in the council chamber which constantly hedged the power of the Duma.
Thus Tsarism did show signs of re-establishing its power after the 1905 revolution.
However, it failed, perhaps through inability, to tackle the root of the problems and deal with the grievances of the masses.
The apparent blindness of the autocracy to these grievances, which were greatly aggravated by war, increased the frustrations of the masses.
The desire for improvement in economic conditions developed into a demand for political change.
The Bolsheviks recognised the crisis of the autocracy and harnessed their party to the revolutionary mood of the masses.
Kochan points out how effective the revolutionary theory of the Bolsheviks was in recognising certain weaknesses in the Russian state and devising disciplined party methods to overcome them.
The tsar failed to do anything constructive in dealing with the problems of the mass of Russian people.
Sir Arthur Nicholson, a contemporary observer, noted: "should the peasants excited by socialist and anarchist agitators be led on… and should the working classes simultaneously rise in the towns there will be a catastrophe such as history has rarely witnessed".
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Summer term, Weeks 1-3
Lab: Project
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: May 7th, 1991 (Extension to May 8th, 1991)
Submitted: May 8th, 1991
Hemispheric Specialisation with Subjects Presented with Stimuli in Either Visual Field while Shadowing Verbal Material
University undergraduates viewed stimuli using a tachistoscope.
These stimuli comprised verbal (trigram) and spatial (dot location) tasks, with the stimuli directed to either the left or right visual field of the subject while s/he shadowed auditory verbal stimuli.
There is evidence for a right field advantage for verbal tasks, while visuo-spatial tasks have been found to have a left field advantage.
This suggests that verbal ability is dominant in the left hemisphere and spatial ability is dominant in the right hemisphere.
It was expected that due to this left hemispheric specialisation for verbal ability, the right field advantage would be reduced for the trigram task when subjects were shadowing auditory verbal material.
The dot location task, which involves predominantly the right hemisphere, was expected to remain unaffected.
As expected, shadowing did result in a significant reduction in right field advantage for the verbal task.
It was, however, also found to cause a slight change in right field advantage for the visuo-spatial task.
Introduction
The human brain is divided in half and control of the body's basic movements and sensations is evenly divided between the two halves, or cerebral hemispheres.
This occurs in a crossed fashion, with the left hemisphere controlling the right side of the body and the right hemisphere controlling the left side of the body.
There is asymmetry of function, however, between the two hemispheres as can be seen by the fact that very few people are truly ambidextrous.
There is usually dominance in one hemisphere.
Historically, clinical evidence has been the greatest source of research into differences between the hemispheres.
That is, patients who have suffered brain damage to one of their hemispheres have been used in order to examine the effect of this damage on their linguistic and spatial abilities.
More recently, however, interest in the left and right brain is due to work involving split-brain patients.
These patients have undergone medical surgery to cut the cortical pathways, most notably the corpus callosum, that normally connect the hemispheres.
This gives researchers the ability to study the abilities of each hemisphere of a single brain separately.
For instance, Evan Zaidel has developed a device known as the Z lens.
This is a contact lens that permits a split-brain patient to move his/her eyes freely when examining an object, but at the same time ensures that only one hemisphere of the patient's brain receives the visual information.
Thus, this lens allows the patient to view a stimulus for as long as s/he desires while also enabling the investigator to present the stimulus to one hemisphere alone.
Of course, this apparatus would not be valid for use in normal subjects as the corpus callosum would allow the transfer of information from one hemisphere to the other.
On the basis of these split-brain studies, the most general statement that has been made about right-hemisphere specialisation is that it involves non-linguistic functions drawing on complex visuo-spatial processes.
Evidence supporting right-hemisphere superiority includes differences in the abilities of the two hands of a split-brain patient to draw a figure of a cube.
Invariably, the left hand produces a better drawing.
It has also been found that the left hemisphere is specialised for language functions, but these specialisations may be a consequence of the left hemisphere's superior analytic skills, of which language is a manifestation.
Similarly, the right hemisphere's superior visuo-spatial performance is derived from its synthetic, holistic manner of dealing with information.
Presently, research is also being carried out using normal subjects.
Due to the anatomical crossing described above, the optic nerves from the right halves of each eye travel to the visual cortex in the right hemisphere and vice-versa.
As a result, when visual stimuli are flashed briefly in the left visual field they project first to the right hemisphere, whereas stimuli flashed in the right visual field project initially to the left hemisphere.
This is only true if the image is projected for less time than it takes for the eye to move.
The time required for an eye movement is approximately 150 msecs, and so if the stimulus is presented for less than 150 msecs we can be certain that the image is projected only to one hemisphere directly.
Using a tachistoscope and testing normal subjects it has been repeatedly shown that letters and words presented in the right visual field are more easily identified than the same stimuli presented in the left visual field.
This right field superiority has been interpreted as due, at least in part, to the fact that stimuli presented to the right visual field will have readier access to regions of the left hemisphere specialised for the reception of verbal stimuli.
It has also been proposed that stimuli where accurate detection rests on some form of visuo-spatial ability are better perceived when projected to the right hemisphere.
Conn and Lee (1991) confirmed these findings.
In addition, they found differences in lateralisation between men and women.
This experiment hypothesises that when shadowing verbal material, tasks using the left hemisphere are more likely to be affected than those using the right hemisphere.
Design:
The accuracy of response to verbal and spatial stimuli which were projected using a tachistoscope was measured with and without subjects shadowing verbal material, using a between subjects design.
Subjects:
17 undergraduate students at the University of Edinburgh and Queen Margaret College [71 subjects from a previous study (Conn and Lee 1991)].
Materials:
A tachistoscope and two different kinds of stimulus materials:
CVC Trigrams (32 card set): 6x4 cards on which consonant-vowel-consonant trigrams are printed, approximately 1.5 cm to either the left or right of the centre point.
In each set there were 16 trigrams appearing on the left and 16 trigrams appearing on the right.
Dot Location (32 card set): Cards on which a single dot appears either to the left or right of the centre, and which appears in any of 16 positions.
A key card numbering the various positions was mounted on the top of the tachistoscope.
Procedure:
Seventeen subjects were tested with both the verbal task (trigrams) and the visuo-spatial task (dot location), and in addition were asked to shadow an excerpt from "The Real Frank Zappa Story".
For the location task, fixation cards with two half square fields on them were used.
After each trial, the subject was asked to write the position of the dot according to grid numbers mounted on the tachistoscope.
For the trigram task, fixation cards with a central + were used.
After each trial, the subject was asked to write the trigram.
The score for each trial was the number of correct letters of the trigram given in the right position.
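The scoring rule described above (one point per letter reported in its correct position, giving 0-3 per trigram) can be sketched as follows; the function name and example trigrams are illustrative only, not taken from the original materials:

```python
def trigram_score(response: str, target: str) -> int:
    """Count letters reported in their correct serial position.

    Illustrative sketch: the original scoring was done by hand,
    one trigram per trial, giving a score between 0 and 3.
    """
    return sum(1 for r, t in zip(response.upper(), target.upper()) if r == t)

print(trigram_score("KEG", "KEG"))  # 3: all letters in the correct position
print(trigram_score("KGE", "KEG"))  # 1: only the first letter is in position
```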
The cards were presented in a random sequence using examples from the Gellermann series, which avoids long runs of identical R or L stimuli, given below:
When sorting the stimulus cards into order, it was ensured that subjects could not see the cards.
During the run, each subject wrote their answers on an answer sheet provided by the experimenter.
Results:
The difference in the degree of lateralisation for each task when performed without and with shadowing was compared in order to see if there was a reduction in right field advantage for trigrams while dot location remained unaffected.
As one of the two tasks, however, may simply be easier by nature than the other, the result may be affected because, for instance, it is easier to recognise and report the location of dots than trigrams.
The dichotic nature of the tasks when performed with shadowing may also cause slight reductions in both visual fields for both tasks.
The Right Field Advantage, RFA, for each stimulus type was calculated in order to indicate the relative performance in each visual field for each task.
The RFA for each type of stimulus material was calculated using 
The RFAs calculated for each stimulus are shown in Table 1 and Table 2 for the subjects performing the task with and without shadowing respectively.
The results given for the tasks without shadowing are from the work done by Conn and Lee (1991), with 71 subjects performing identical dot location and trigram tasks.   
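The exact formula for the RFA is not reproduced above. As a hypothetical sketch (laterality indices vary between studies), a common form expresses the right-left difference in correct responses as a percentage of the total:

```python
# Hypothetical sketch of a right-field-advantage (RFA) index; the
# specific formula used in the original study may differ.

def rfa(right_score: float, left_score: float) -> float:
    """Laterality index: positive values indicate a right visual field advantage."""
    total = right_score + left_score
    if total == 0:
        return 0.0  # no correct responses in either field
    return 100.0 * (right_score - left_score) / total

# Example: 40 correct responses in the right field vs 24 in the left
print(rfa(40, 24))  # 25.0
```

On this index, equal performance in both fields gives 0, and a negative value indicates a left field advantage, as expected for the dot location task.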
The results in Table 3 show that shadowing of verbal material causes a highly significant reduction in RFA for the trigram task.
It was not expected that shadowing would have any effect on dot location.
There was, however, a small but significant increase in right field advantage.
Discussion
From the results in Conn and Lee (1991) we see that subjects recognised trigrams more accurately when the material was presented to the left hemisphere.
This supports the hypothesis that there is a right field advantage for verbal ability over spatial ability.
This right field advantage reflects left-hemisphere specialisation for language functions.
The significant absence of this advantage in the dot location task reflects right-hemisphere specialisation for the processing of visuo-spatial stimuli.
The results in Table 3 show that shadowing verbal material causes greater reduction in right field advantage with the verbal than the spatial tasks.
This is because both recognition of the trigram and comprehension of the passage being shadowed involve linguistic ability which is specialised in the left hemisphere.
There was also found to be some increase in right field advantage for dot location.
It is possible that this unexpected increase may not have occurred if a greater number of subjects had been used for the shadowing tasks.
Using the same subjects in both the shadowing and non-shadowing conditions might also have altered this result by controlling for individual anomalies.
A study to investigate similar effects for the right hemisphere was originally considered as part of this work, but was abandoned due to lack of availability of subjects.
This study would involve asking subjects to perform the dot location task and another task accessing the right hemisphere simultaneously, looking for a resulting reduction in left field advantage for the dot location task.
This could be done by asking subjects to listen to and hum back music while performing the dot location task.
Response to music is believed to be dominant in the right hemisphere and would, as a result, be expected to effect a similar reduction in left field advantage to that found for right field advantage in the trigram tasks discussed here.
Other variables could also be isolated in order to examine their influence on hemispherical asymmetry.
These include handedness, familial sinistrality, depression, anxiety, schizophrenia, fatigue, age, smoking, and introversion/extraversion.
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Autumn term, Week 4
Lab: Memory
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: November 14th, 1990
Submitted: November 14th, 1990
The Effects of Serial Position and Time Delay on Recall
University undergraduates listened to a list of words and were required to recall as many of these words as possible in any order they wished, either immediately or after a delay.
Undergraduates were also given groups of consonants and asked to recall as many as possible in their correct position after various time delays.
In the first case, the delay was expected to prevent further rehearsal of words late in the word list and as a result reduce the chance of their recall.
It was found that recall of words at the end of the list was indeed significantly greater when recall was immediate than when it was delayed, illustrating the phenomenon known as the recency effect.
A primacy effect was also found, both when recall was immediate and when it was delayed, indicating a distinction between a short-term and a long-term memory store.
In the second case, the number of consonants recalled was expected to decrease as the delay increased.
This was, in fact, found to be so, and the results were interpreted to show that short-term memory decayed after about 20 seconds.
Introduction
The study of recall over time has been used to determine whether there are separate systems underlying short-term and long-term memory or only one system functioning at different levels (Baddeley, 1976).
The present research was carried out to investigate the decay of short-term memory and show that rehearsal of an item increases the likelihood of its recall.
This can be shown in terms of the recency and primacy effect respectively.
This can be described using the conceptualisation of memory shown overleaf.
The primacy effect is thought to reflect the operation of secondary memory, i.e. recall from a long-term memory store, while the recency effect is believed to reflect the operation of primary memory, i.e. recall from a short-term memory store (Bernstein et al., 1988).
The experiments were performed in order to test for the following experimental hypotheses.
For subjects given a list of words, when recall is immediate words with serial positions late in the list are more likely to be recalled than when recall is delayed.
If subjects are given a list of words, words with serial positions early in the list are more likely to be recalled than those further towards the middle of the list.
Also, short-term memory decays over a period of time.
Experiment 1: Serial position effects on recall
Design:
The number of items recalled from a certain position in a list over successive trials was measured, where recall was immediate or delayed, using a between subjects design.
Subjects:
Thirty-four second-year psychology undergraduates at the University of Edinburgh.
Materials:
Word lists (one for testing immediate recall (set A) and one for testing delayed recall (set B)).
Procedure:
Immediate Recall
Subjects were divided into two groups of 17.
One member of each group was assigned a partner from the other group.
Each pair was then placed in an isolated cubicle.
One member of the pair was told by his/her partner: "I am going to read out a list of words to you.
When I have finished reading out the list, I shall tap on the table.
I then want you to write down as many of the words as you can remember and in any order you wish.
You will have 90 seconds to recall the word list.
Then we will start another trial."
Word Set A was provided for these trials.
Fifteen seconds was given between trials, in which the subject folded over the piece of paper on which s/he had recalled the word list and wrote the number of the trial on the paper.
Twelve such trials were carried out.
Delayed Recall
In each pair, experimenter and subject swapped roles.
The same procedure was adopted as for the immediate recall part of the experiment, but words were used from Set B. However, before the subject was allowed to recall the words, an arbitrary three-digit number, provided below the word list for each trial, was given to the subject, who was asked to repeat the number and then count backwards from it in threes as fast as possible for thirty seconds.
Scoring recall:
For both parts of the experiment, for each trial (represented by columns 1 to 12 in each word list), the experimenter simply ticked each word correctly recalled and totalled the number of words recalled for each of the 20 serial positions.
As a result, 20 scores were obtained for each part of the experiment, each between zero and twelve.
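The scoring procedure above can be sketched as follows; the word lists and recall data are invented for illustration.

```python
# Sketch of the recall-scoring procedure: for each trial, tick each
# word correctly recalled, then total the ticks for every serial
# position across trials. The example data below are invented.

def score_recall(word_lists, recalls):
    """Return, for each serial position, the number of trials on
    which the word at that position was recalled (0 up to the
    number of trials)."""
    n_positions = len(word_lists[0])
    scores = [0] * n_positions
    for words, recalled in zip(word_lists, recalls):
        recalled_set = set(recalled)
        for position, word in enumerate(words):
            if word in recalled_set:
                scores[position] += 1
    return scores

# Two toy trials with three-word lists:
lists = [["cat", "dog", "sun"], ["tea", "cup", "pen"]]
recalled = [["sun", "cat"], ["pen"]]
print(score_recall(lists, recalled))  # [1, 0, 2]
```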
Results:
 
Table 1 and Table 2 show the means and standard deviations for recall by serial position when recall was immediate and delayed respectively.
Fig 1 shows the mean probability of recall of a word plotted against its serial position in the word list when recall is immediate and delayed.
From this graph it can be seen that recall values are generally higher towards the beginning and end of the word list when recall is immediate, and that the recency effect at the end of the list is absent when recall is delayed.
Respective values for the later serial positions were compared between the two groups to test for the recency effect produced in the first group. 
Table 3 shows recall to be significantly greater for the last five words in the list when recall is immediate and thus supports the recency effect.
From the information in Table 3 we find that recall is fairly stable until at least the 14th word when recall is immediate.
To estimate the point at which recall is worst, take the middle of the word list to be serial position 10.
Approximating from the curve that recall is reasonably level from serial position 8 onwards, we can calculate a "mean low" value for recall.
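The "mean low" computation is not given explicitly; a plausible reading, assumed here, is the mean recall score over the level middle region of the curve (taken below as serial positions 8-12).

```python
# Hedged sketch of one plausible "mean low" computation: the mean
# recall score over the level middle region of the serial-position
# curve. The range (positions 8-12) and the scores are assumptions.

def mean_low(scores, start=8, stop=12):
    # scores[i] is the recall total for serial position i + 1
    middle = scores[start - 1:stop]
    return sum(middle) / len(middle)

scores = [10, 9, 8, 6, 5, 4, 4, 3, 3, 2, 3, 3, 4, 4, 6, 7, 8, 9, 10, 11]
print(mean_low(scores))  # 2.8
```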
This value was compared with recall values for early serial positions in order to test for the primacy effect. 
Table 4 supports the primacy effect as recall is significantly greater for the first three words in the list.
Experiment 2:
Design:
The number of consonants in a four-consonant sequence recalled after varying periods of time was measured, using a within subjects design.
Subjects:
Twelve undergraduates at the University of Edinburgh and Queen Margaret College.
Procedure:
Twelve subjects were individually given a four-consonant sequence and an arbitrary number to count backwards from in threes as fast as they could before they were given the signal.
Upon the signal, the subjects were asked to recall as many consonants in their correct places as possible.
The subjects were tested using items on the list given in Appendix A. A period of 15 seconds was allowed between trials.
Each subject completed a total of 18 trials; covering six time delays with three trials each.
A period of just a few seconds was allowed for recall and if they could recall only one or two letters the subjects were asked to specify their position in the sequence.
Delay times were rearranged for the consonant groups in the list for each trial so that particular consonant groups were not always associated with the same delay time.
Scoring:
One point was awarded for each letter that was correctly recalled in the correct place.
If a letter was not in its correct place, it was not counted.
All the scores for the trials which involved the same time delay were totalled up.
The mean number of letters recalled across the 12 subjects for each time delay was then calculated.
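This averaging step can be sketched as follows; the delays and scores below are invented for illustration.

```python
# Sketch of the Experiment 2 scoring: pool each trial's score
# (letters recalled in their correct places) by delay, then take
# the mean across trials and subjects. Data invented.

from collections import defaultdict

def mean_recall_by_delay(trials):
    """trials: (delay_in_seconds, score) pairs pooled over subjects."""
    pooled = defaultdict(list)
    for delay, score in trials:
        pooled[delay].append(score)
    return {delay: sum(s) / len(s) for delay, s in sorted(pooled.items())}

toy = [(0, 4), (0, 4), (15, 3), (15, 1), (30, 0), (30, 2)]
print(mean_recall_by_delay(toy))  # {0: 4.0, 15: 2.0, 30: 1.0}
```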
Results:
Table 5 shows the mean and standard deviation values for recall of the letters in the consonants-groups by the subjects after varying delays.
The number of letters in the consonant-group recalled after 0 seconds was compared with recall after longer delays. 
Also, the number of letters recalled after 45 seconds was compared with recall after smaller delays. 
From Table 6 and Table 7, we see that short-term memory has already started to decay after 15 seconds, but has not completely decayed until somewhere between 15 and 25 seconds.
This decay can also be seen in Fig 2, which shows the probability of a single letter being recalled in its correct position after various time delays.
Anomalies indicated by the large standard deviations in Table 5 for the larger time delays could be attributed to some of the subjects attempting to rehearse the letters using some mnemonic technique while performing the distractive task.
Discussion
Words which appear early in the lists were remembered more easily as the subjects had more opportunity to rehearse these after being read than the rest of the words in the list.
Thus they have been re-encoded into long-term memory and this gives rise to the primacy effect.
Words which appear late in the list are remembered easily when recall is immediate as they are resident in short-term memory which has not had a chance to decay.
When recall is delayed by 30 seconds, they are not remembered as easily, since short-term memory, which seems to decay after twenty or so seconds, has had sufficient chance to decay.
This supports the presence of distinct primary and secondary memory stores rather than a unitary system.
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Autumn term, Week 6
Lab: Prismatic Adaptation
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: November 28th, 1990 (Extension: November 30th)
Submitted: November 30th, 1990
Adaptation to Prismatic Distortion Due to Visual or Proprioceptive Change
University undergraduates pointed to targets under a variety of conditions.
These were primarily pointing with normal vision and while wearing prisms, using the preferred hand, before and after an adaptation period.
Before and after this period, subjects also pointed either at an auditory target or using a non-preferred hand in order to identify the mechanism by which this adaptation occurred.
Adaptation was expected to be the result of a change in visual perception or the proprioceptive sense of the arm.
The results were contradictory and thus neither hypothesis was supported.
It was not possible to determine if neither or both hypotheses contributed to adaptation.
Introduction
The purpose of the experiment was to investigate how the wearing of prisms affected perception and motor response under certain circumstances.
In 1963, Harris performed experiments in order to explore the adaptation of displaced vision; to observe whether adaptation was due to visual, motor, or proprioceptive change.
When prisms are worn, the retinal image is displaced.
Objects directly in front of the subject appear to be off to one side.
When asked to reach for these objects, subjects make corresponding errors.
After a few minutes of practice, subjects adapt and are able to compensate for this distortion.
Therefore, the linkage between perception and motor responses is altered.
There are several possible mechanisms which may be responsible for this adaptation.
Firstly, that there is a change in visual perception.
After adaptation, things that initially appeared to be off to one side appear directly ahead again.
Or, secondly, that there is a change in the felt position of the arm.
The proprioceptive position of the arm is altered so that it "feels" as if in a different spatial location.
We can hypothesise that if there is a change in visual perception then pointing with a non-preferred hand should show the same adaptive shift as for the normal hand, while pointing to an auditory target should show no adaptive shift.
Alternatively, we can also hypothesise that if there is a change in proprioception, pointing with the non-preferred hand should show no adaptive shift, while pointing with the preferred hand at the non-visual target should show the same adaptive shift as that for the visual target.
Experiment 1
Design:
The distance of a finger from a target was measured where the subject was either under normal conditions, wearing prisms, or had his/her eyes closed while listening to an auditory signal using a within subjects design.
Subjects:
Thirty-four second-year psychology undergraduates at the University of Edinburgh.
Materials:
Prisms attached to spectacle frames; a sloping card marked with four targets, A, B, C, and D, evenly spaced with the outer lines at least six inches in from the edges of the card; and a metre rule.
Procedure:
Subjects were divided into two groups of 17.
One member of each group was assigned a partner from the other group.
Each pair was then placed in an isolated cubicle.
One of the pair was designated the subject and the other the experimenter.
The subject was then asked to point at the targets under three conditions:
A: Without prisms.
B: Wearing prisms.
C: Auditory target with eyes closed.
Condition A was used in order to measure normal accuracy and condition B in order to measure the effect of the prisms.
In condition C, the experimenter asked the subject to close his/her eyes with the prisms left on so the effects would not be lost when the subjects opened their eyes.
Auditory targets were produced by the experimenter tapping on the table by the targets using a pen.
After measurements had been carried out under each of these conditions, the metre rule was moved back towards the experimenter so that the subject could see his/her finger when pointing.
They then pointed at the targets for ten minutes in order to improve their accuracy.
At the end of this adaptation phase, the subjects withdrew their hand, the metre rule was replaced, and the subjects were retested under the experimental conditions in the order B, C, A. Under each condition, both before and after the adaptation phase, the subject was required to point at each of the four targets five times. The order in which they pointed to the targets was randomised so that the subjects did not become practised through repeated measurements in the same position.
Experiment 2
Design:
The distance of a finger from a target was measured where the subject was either under normal conditions, wearing prisms, or using a non-preferred hand while wearing prisms using a within subjects design.
Subjects:
Thirty-four second-year psychology undergraduates at the University of Edinburgh.
Materials:
Prisms attached to spectacle frames; a sloping card marked with four targets, A, B, C, and D, evenly spaced with the outer lines at least six inches in from the edges of the card; and a metre rule.
Procedure:
In each pair, the experimenter and subject swapped roles.
The procedure was exactly as for Experiment 1 except condition C was to point at a target using the non-preferred hand while wearing prisms, rather than pointing at an auditory target.
The adaptation phase, however, again was performed using the preferred hand only.
Scoring pointing accuracy: The distance between the subject's finger and the centre of the target was measured in millimetres.
The direction was included with this measurement.
The experimenter's right was taken as positive and their left as negative.
The results were then calculated as the average error across the 20 readings (four targets, five points each) for each of the three conditions, both before and after the adaptation phase.
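The error-averaging step can be sketched as follows; the readings are invented for illustration.

```python
# Sketch of the pointing-error scoring: errors are signed (the
# experimenter's right positive, left negative) and averaged over
# the 20 readings (four targets, five points each) for one
# condition. The readings below are invented.

def mean_signed_error(readings_mm):
    """Average signed error in millimetres for one condition."""
    return sum(readings_mm) / len(readings_mm)

# e.g. a subject pointing consistently ~30 mm to the right while
# wearing prisms:
prism_readings = [28, 35, 31, 27] * 5  # 20 readings
print(mean_signed_error(prism_readings))  # 30.25
```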
Results:
Table 1 and Table 2 show the mean and standard deviation in accuracy of pointing before and after the adaptive phase for experiments 1 and 2 respectively. 
Table 3 shows the results of a one-way ANOVA (related) calculated between the 6 conditions, A, B, and C in both the post and pre-adaptation phases, for each experiment.
The F-ratio was calculated in order to determine whether any of the conditions showed a significant difference from the others. 
The F-ratio was found to be significant (p < 0.001) in both experiments.
Several t-tests were carried out between conditions in order to find which hypothesis was most strongly supported. 
Table 4 shows the result of a t-test (related, 1-tailed) comparing the error in pointing of normal vision with that when wearing the prisms.
This result, as expected, shows a very significant increase in error due to the wearing of prisms. 
Table 5 shows the result of a t-test (related, 1-tailed) comparing the error in pointing for condition B before and after the adaptation phase.
This result showed a highly significant improvement in accuracy due to the adaptation period. 
Table 6 shows the result of a t-test (related, 2-tailed) which compares the difference in error when pointing after the adaptation before and after the prisms are removed.
The results show that this adaptation is not due to some sort of learned mechanism of correction in pointing, as the error increased when the prisms were removed. 
Table 7 shows the result of a t-test (related, 1-tailed) which shows the difference in error between pointing to an auditory target before and after the adaptation phase in Experiment 1.
This shows that there was significant improvement in pointing after adaptation.
From the hypothesis that adaptation is proprioceptive, due to the learning of arm movements, this is what should be expected. 
Table 8 shows the result of a t-test (related, 1-tailed) which shows the difference in error when pointing with the non-preferred hand before and after adaptation.
This result shows a significant improvement in pointing after adaptation.
This supports the hypothesis that adaptation is due to visual change.
Discussion
As expected, the results show a significant decrease in accuracy when initially pointing to a target while wearing prisms than for normal vision.
The results also show a significant difference in accuracy before and after adaptation when wearing prisms.
This suggests that adaptation does occur.
Adaptation was not due to some sort of learned mechanism involving conscious correction of pointing, because there was a significant difference after adaptation between aim while wearing prisms and that for normal vision following the removal of the prisms.
This was also expected, as a learned mechanism would not accord with either hypothesis.
The results from Table 7 and Table 8 give weight both to the hypothesis of change in visual perception and to that of proprioceptive change.
There was a significant increase in accuracy after adaptation in both cases; this causes a contradiction.
The change in auditory-target pointing shown in Table 7 supports the hypothesis that adaptation is due to change in the proprioceptive sense of the arm, while the adaptation in the non-preferred hand shown in Table 8 supports the other hypothesis: that visual perception is the key to adaptation.
Harris found that adaptation was, in fact, due to a change in the proprioceptive sense of the arm.
Our experiment partially supports this finding.
It, however, seems necessary to carry out further research into adaptation to prismatic distortion in order to isolate the exact cause.
The experiment could have been improved if measurements had been made more accurately.
It was often difficult, for the experimenter, to judge the exact position of the target in relation to the metre rule and precisely how far the subject's finger was from the target.
The results from the experiment showed that, when pointing, adaptation to prismatic distortion of a target did occur and that it was not due to any learned mechanism.
Aspects of both the suggested hypotheses were supported as each showed a result in its favour and thus the exact mechanism for adaptation was not determined.
The mechanism of adaptation could, in fact, be a combination of both the suggested hypotheses with, judging from the results, the second (proprioceptive change) dominant.
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Autumn term, Week 8
Lab: Hemispheric Specialisation
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: January 9th, 1991 (Extension to January 11th, 1991)
Submitted: January 11th, 1991
Hemispheric Specialisation Between the Sexes with Subjects Presented with Stimuli in either Visual Field
University undergraduates of both sexes viewed several stimuli using a tachistoscope.
These stimuli comprised verbal and spatial tasks, with the stimuli directed either to the left or right visual field of the subject.
For verbal ability, a right field advantage was expected, whereas visuo-spatial ability was expected to show a lesser right field advantage, indicating a left field advantage.
This would support the hypothesis that verbal ability is dominant in the left hemisphere and spatial ability is dominant in the right hemisphere.
Also, it was expected that ability would be more lateralised in men than in women.
That is, in men there would be a greater right field advantage for verbal ability and a lesser right field advantage for visuo-spatial ability than in women.
There was, indeed, found to be a greater right field advantage in verbal ability and thus the first hypothesis was supported.
In the second case, however, a significantly greater right field advantage was found for males than for females in verbal ability, but no significant difference was found between males and females for the spatial tasks.
This did not show men to be more lateralised visuo-spatially than women as would have been expected.
Introduction
The human brain is divided in half and control of the body's basic movements and sensations is evenly divided between the two halves, or cerebral hemispheres.
This occurs in a crossed fashion, with the left hemisphere controlling the right side of the body and the right hemisphere controlling the left side of the body.
There is asymmetry of function, however, between the two hemispheres, as can be seen by the fact that very few people are truly ambidextrous.
There is usually dominance in one hemisphere.
Historically, clinical evidence has been the greatest source of research into differences between the hemispheres.
That is, patients who have suffered brain damage to one of their hemispheres have been used in order to examine the effect of this damage on their linguistic and spatial abilities.
More recently, however, interest in the left and right brain is due to work involving split-brain patients.
These patients have undergone surgery to cut the cortical pathways, most notably the corpus callosum, that normally connect the hemispheres.
This gives researchers the ability to study the abilities of each hemisphere of a single brain separately.
Presently, research is also being carried out using normal patients.
Due to the anatomical crossing described above, nerve fibres from the right half of each retina travel to the visual cortex in the right hemisphere, and vice-versa.
As a result, when a visual stimulus is flashed briefly in the left visual field it projects first to the right hemisphere, whereas stimuli flashed in the right visual field project initially to the left hemisphere.
This is only true if the image is projected for less time than it takes for the eye to move.
The time required for an eye movement is approximately 150 msecs, and so if the stimulus is presented for less than 150 msecs we can be certain that the image is projected only to one hemisphere directly.
Using a tachistoscope and testing normal subjects it has been repeatedly shown that letters and words presented in the right visual field are more easily identified than the same stimuli presented in the left visual field.
This right field superiority has been interpreted as due, at least in part, to the fact that stimuli presented to the right visual field will have readier access to regions of the left hemisphere specialised for the reception of verbal stimuli.
It has also been proposed that stimuli where accurate detection rests on some form of visuo-spatial ability are better perceived when projected to the right hemisphere.
This experiment tests the hypotheses that this is indeed the case, and also that males are more lateralised than females; that is, that in males verbal and visuo-spatial abilities are more strongly confined to the left and right hemispheres respectively.
Design:
The accuracy of response to verbal and spatial stimuli which were projected using a tachistoscope was measured for each sex, using a within and between subjects design.
Subjects:
142 second-year undergraduate students at the University of Edinburgh.
Materials:
A tachistoscope and four different kinds of stimulus materials:
Words (32 card set): 6x4 cards on which common words of 3-5 letters are printed, offset approximately 1.5 cm either to the left or to the right of the centre point.
In each set there were 16 words appearing on the left and 16 words appearing on the right.
CVC Trigrams (32 card set): The stimuli are presented in the same position on the cards, but are consonant-vowel-consonant trigrams, again 16 on the left and 16 on the right.
Dot Location (32 card set): Cards on which a single dot appears either to the left or right of the centre, and which appears in any of 16 positions.
A key card numbering the various positions was mounted on the top of the tachistoscope.
Dot Counting (64 card set): Cards on which a group of dots numbering between 3 and 10 appear either left or right of centre.
There were 32 on the left and 32 on the right.
Procedure:
Subjects were divided into two groups of 71.
One member of each group, partner A, was assigned a partner from the other group, partner B. Each pair was then placed in an isolated cubicle.
Each partner was then tested with a verbal task (trigrams or words) and a spatial task (dot location or dot counting) for partners A and B respectively.
For the location task, fixation cards with two half square fields on them were used.
After each trial in the location task, the subject was asked to name the position of the dot according to grid numbers mounted on the tachistoscope.
For the counting task and verbal material, fixation cards with a central + were used.
In the counting task, the subject simply said how many dots were present after each trial.
The cards were presented in a random sequence using examples from the Gelleman Series, which avoids long runs of identical R or L stimuli, given below: 
When sorting the stimulus cards into order, it was ensured that subjects could not see the cards or the experimenter's worksheet, on which s/he recorded the subject's responses.
During the run, correct answers were checked before the stimulus cards were put into the tachistoscope and the appropriate trial ticked if the subject got the answer correct.
Results:
On the results sheet, subjects were asked to fill in their handedness using scores that they obtained in a handedness inventory (example in Appendix A).
To avoid getting into decimals, the LQ (Laterality Quotient) was first multiplied by 100.
The score obtained from each part of the inventory was recorded separately.
The sex of the subject was also recorded: 1 represented men and 2 represented women. 
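The laterality quotient mentioned above is not defined in the text; the sketch below assumes the usual handedness-inventory form LQ = (R - L)/(R + L), scaled by 100 to avoid decimals.

```python
# Hedged sketch of a laterality quotient, assuming the usual
# handedness-inventory form LQ = (R - L) / (R + L), multiplied by
# 100 to avoid decimals as described above. Item scores invented.

def laterality_quotient(right_scores, left_scores):
    r, l = sum(right_scores), sum(left_scores)
    return 100 * (r - l) / (r + l)

# e.g. an inventory with mostly right-hand preferences:
print(laterality_quotient([2, 2, 1, 2], [0, 0, 1, 0]))  # 75.0
```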
The degree of lateralisation between different tasks was compared in order to see if there was a greater right field advantage, for example, for words than for trigrams.
Some of the tasks, however, may be simply easier by nature than others and a significant result may just indicate, for instance, that it is easier to recognise and report words than trigrams.
The Right Field Advantage, RFA, for each stimulus type was calculated in order to indicate the relative performance in each visual field for each task.
The RFA for each type of stimulus material was calculated as the number of correct responses in the right visual field divided by the total number of correct responses in both fields, so that a value of 0.5 indicates no field advantage.
The RFAs calculated for each stimulus are shown in Table 2.
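Assuming the RFA is the right-field proportion of correct responses (so that 0.5 means no advantage), it can be sketched as:

```python
# Hedged sketch of the RFA computation, assuming it is the
# proportion of correct responses that came from the right visual
# field; 0.5 then indicates no field advantage.

def right_field_advantage(correct_right, correct_left):
    return correct_right / (correct_right + correct_left)

# e.g. 12 trigrams correct in the right field against 8 in the left:
print(right_field_advantage(12, 8))  # 0.6
```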
Table 3 shows there is a significantly greater RFA for trigram than for dot location and for words than for dot counting.
Also, Table 4 shows there is a significantly greater RFA for trigrams than for dot counting and for words than for dot location.
It does not, however, show whether there is a significant bias toward the right field for the verbal tasks in absolute terms.
Table 5 shows the results of a t-test between a stimulus and the value 0.5.
It would be expected that if there were no right field advantage for a stimulus then the RFA would be approximately 0.5, as the values for the left and right visual fields would be about the same.
If the value is significantly greater than this there is a Right Field Advantage.
Above, we see that there is in fact a significant right field advantage for the verbal tasks (trigram and word recognition), and also a significant lack of right field advantage for the spatial tasks of dot location and counting.
Thus we can see that verbal material is identified more accurately in the right hand field than in the left and that, in fact, dots are located and counted more accurately in the left hand field. 
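The comparison of a mean RFA against 0.5 can be sketched from the definition of a one-sample t statistic; the RFA values below are invented.

```python
# Sketch of the one-sample t-test used above: does the mean RFA
# across subjects differ from 0.5? Implemented from the definition
# t = (mean - 0.5) / (sd / sqrt(n)). RFA values invented.

import math
import statistics

def t_vs_half(rfas):
    n = len(rfas)
    mean = statistics.mean(rfas)
    sd = statistics.stdev(rfas)  # sample SD (n - 1 denominator)
    return (mean - 0.5) / (sd / math.sqrt(n))

print(round(t_vs_half([0.6, 0.55, 0.65, 0.6]), 2))  # 4.9
```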
From Table 6, we see that, with marginal significance, there is a difference in the degree of the RFA between recognition of trigrams and words and between dot location and dot counting.
Thus, it seems, the nature of the verbal material does influence the degree of RFA.
Also, it seems that to a similar extent the nature of the dot task also influences the degree of RFA.  
Table 7 and Table 8 show that the RFA is not as great for females as for the overall population, especially in the case of word recognition, where it is less significant.
Table 9 and Table 10 show that the RFA is significantly greater for males than for the general population and, from Table 8, that it is greater than for females.
Table 11 reaffirms that there is significantly greater RFA in men than in women for verbal tasks, but not for spatial ones.
It has already been shown that there is no significant RFA in spatial tasks anyway.
As a result, this supports the hypothesis that men are more specialised in the left-brain than women.
Discussion
From the results in Table 3, Table 4, and Table 5 we find that, as expected, subjects recognised trigrams and words more accurately when the material was presented to the left hemisphere.
This supports the hypothesis that there is a right field advantage for verbal ability over spatial ability.
This right field advantage reflects left-hemisphere specialisation for language functions.
The significant lack of this advantage in the tasks of dot location and counting reflects right-hemisphere specialisation for the processing of visuo-spatial stimuli.
The brief period of time in which the words were presented to the subject in our experiment, however, limited the kinds of words that could be used as stimuli.
Other researchers have been able to overcome this problem in split-brain patients.
For instance, Evan Zaidel has developed a device known as the Z lens.
This is a contact lens that permits a split-brain patient to move his/her eyes freely when examining an object, but at the same time ensures that only one hemisphere of the patient's brain receives the visual information.
Thus, this lens allows the patient to view a stimulus for as long as s/he desires while also enabling the investigator to present the stimulus to one hemisphere alone.
Of course, this apparatus would not be valid for use in normal subjects, as the corpus callosum would allow the transfer of information from one hemisphere to the other.
On the basis of these split-brain studies, the most general statement that has been made about right-hemisphere specialisation is that its functions are non-linguistic and seem to involve complex visuo-spatial processes.
Evidence supporting right-hemisphere superiority includes differences in the abilities of the two hands of a split-brain patient to draw a figure of a cube.
Invariably, the left hand produces a better drawing.
It has also been found that the left hemisphere is specialised for language functions, but these specialisations are a consequence of the left hemisphere's superior analytic skills, of which language is a manifestation.
Similarly, the right hemisphere's superior visuo-spatial performance is derived from its synthetic, holistic manner of dealing with information.
The results from Table 8, Table 10, and Table 11 also show that, although there remains a right field advantage in women, males are significantly more lateralised than women in the left hemisphere, and thus in verbal abilities.
However, in other studies it has been found that women in general have superior verbal ability to men.
Deborah Waber has suggested that sex differences of this sort are not attributable to sex itself but to the difference between the maturation rates of the sexes.
She states that early maturers have better verbal aptitude than spatial ability, whereas later maturers perform better on spatial tasks than verbal ones.
These predictions have, in fact, been tested with a sample of children in which individuals were classified as either early or late maturers.
In general, the results confirmed her predictions; independent of sex, late maturers scored better on spatial tasks and early maturers scored better on verbal tasks.
Also, differences due to sex were not found to be significant in this study.
It has also been suggested that there is an evolutionary basis for sex differences in lateralisation.
By this proposal, the greater bilateralisation in females may facilitate the needs of females in motherhood, including the necessary communication skills, whereas a strict separation of function is necessary to ensure the high level of visuo-spatial skills required in males, for example, in hunting.
From this viewpoint, if we assume that complex visuo-spatial capability was present before the evolution of language in humans, it is possible that in men only the left hemisphere became involved in language.
This would have left visuo-spatial functions intact in the right hemisphere, whereas in women language could have become established in both hemispheres, crowding out the most specialised visuo-spatial ability.
If this is what has occurred, "more lateralised" would be better for visuo-spatial ability and "less lateralised" would be better for language.
Our results would support this viewpoint, as men were found to be more lateralised than women and to have poorer language ability but superior visual ability.
Other variables could also be isolated in order to examine their influence on hemispherical asymmetry.
These include handedness, familial sinistrality, depression, anxiety, schizophrenia, fatigue, age, smoking, and introversion/extraversion.
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Spring term, Week 1
Lab: Timing Mental Processes
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: January 30th, 1991
Submitted: January 30th, 1991
Increase in Reaction Time With Increase in Amount of Information or Size of Task
University undergraduates sorted a pack of 32 cards into piles of varying number.
The number of piles into which the cards were sorted was 2, 4, 8, or 16, requiring from 1 to 4 bits of information.
Undergraduates were also shown sets of digits of varying sizes and then shown a "probe" digit.
They were asked to give a positive or negative response as quickly as possible, indicating the presence or absence of this "probe" digit in the memory set presented.
In the first case it was expected that there would be a linear increase in the subject's reaction time with increase in the number of bits of information required.
This hypothesis that reaction time increases with information, Hick's Law, was supported.
In the latter case it was expected that there would be a relationship between size of the memory set and the average response time which would tend towards linearity.
It was found that this occurred, although for small sets results varied slightly between positive and negative responses.
On the whole, as the degree of discrimination increased, reaction time was found to increase at a fairly constant rate.
Introduction
In a situation where it is required to perform some task, the reaction time can be seen to include the following: (a) the time taken by the stimulus to activate the sense organ and for impulses to travel from it to the brain, (b) the central processes concerned with the identification of the signal and the response to it, and (c) the time required to energise the muscles and produce the correct response.
Most studies which have attempted to establish laws about reaction time have assumed that stages (a) and (c) are relatively short and consider that effectively all the time is taken up by central processes.
In 1868, Donders attempted to measure the additional time taken by certain mental processes such as discrimination and choice.
He hoped to achieve this by superimposing these mental activities upon the simple reaction time, SRT, and then subtracting the known value of the SRT from the total reaction time to give the duration of this mental activity.
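Donders' subtractive logic amounts to simple arithmetic, which can be sketched as follows; the timing values used here are purely illustrative, not data from this report.

```python
# Donders' subtractive method: estimate the duration of an added mental
# stage (e.g. discrimination and choice) by subtracting the simple
# reaction time (SRT) from the total reaction time of the fuller task.
# All values below are illustrative, not measurements from this report.

simple_rt_ms = 220   # respond to any stimulus: stages (a) and (c) plus minimal (b)
choice_rt_ms = 390   # identify the stimulus, then choose among responses

# Estimated duration of the added discrimination/choice processes:
mental_stage_ms = choice_rt_ms - simple_rt_ms
print(mental_stage_ms)  # 170
```

The same subtraction underlies the later experiments: any increase over the simple reaction time is attributed to the central processes added by the task.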
Where the task involves a number of choices, i.e. there is a choice reaction time, it has long been recognised that reaction time rises progressively with the number of possible choices, but why and to what extent were not understood.
Hick made a breakthrough on this problem when he proposed that in making choice reactions, subjects gain information at a constant rate.
This experiment intends to calculate the rate at which reaction time increases with amount of choice.
It also hypothesises that reaction time increases as the size of a set from which an object must be identified increases.
Experiment 1
Design:
Subjects sorted a pack of 32 cards into varying numbers of piles, using a within-subjects design.
Subjects:
Twenty-nine second-year psychology undergraduates at the University of Edinburgh.
Materials:
A stopwatch, sorting templates, and a pack of playing cards.
The pack includes 32 cards only: 4 each of Aces, Kings, Queens, Jacks, 8s, 7s, 3s, and 2s; a Joker was also used.
Procedure:
Subjects were divided into two groups of 19 with one floating subject.
One member of each group was assigned a partner from the other group.
Each pair was then placed in an isolated cubicle.
The floating subject shared a cubicle with two other partners.
One of the pair was designated the subject and the other the experimenter.
The pack of cards was shuffled and the subject held the cards face up in one hand.
The Joker was used as the top card and the subject discarded it on the "go" signal.
The subject then sorted the cards into various classes, working as quickly as possible without making mistakes.
For testing Hick's Law, the number of possibilities was tested as follows: 
The subject was given a practice run through all four sorting tasks, starting at two possibilities and working through four, eight, and sixteen.
Two experimental trials were then run: first in the order 2, 4, 8, and 16, and second in the order 16, 8, 4, and 2.
The subject's time to complete each task was recorded in milliseconds.
The subject's response time per card was thus calculated.
The experimenter and the subject reversed roles and the experiment was repeated.
Results:
Table 1 and Figure 1 show the relationship between the mean reaction times and number of bits of information.
It can be seen that the graph slopes upwards and, although an exact relationship does not exist, a systematic one does.
Figure 2 shows a best straight line drawn by hand from visual examination of the lie of the points.
This was very difficult to do and there was a tendency to place the y-intercept at the origin.
Regression was used in order to find the best straight line.
This gave the line shown in Figure 3 which has a y-intercept at 24.95680 and a gradient of +377.68396.
From this graph, we see that the increase in reaction time is directly proportional to the amount of information, in accordance with Hick's Law: RT = k log2(N), where k is constant and N is the number of equally probable alternatives.
This predicts that sorting time would be 24.9568 msecs per card if no reference was made to the value of the cards and that the rate of gain of information for the class is 377.68396 msecs per bit of information per card.
Discussion:
As the number of possibilities of selecting the cards and thus the number of bits of information involved increases, the amount of time taken to perform the task increases proportionally.
The amount of time required to sort all the cards without actually considering the value of the card was calculated to be approximately 25 msecs.
This is merely the reaction time required in picking up the card and placing it on a pile without making any choice based on the value of the card.
It was thus found that the majority of the reaction time required was in identifying the values of the card and choosing the pile onto which it should be placed.
It was found that for each additional bit of information required, this time increased by about 377 msecs.
Thus this choice reaction time can be described by RT = 377 * log2N where N is the number of possible values from which we are required to choose.
This shows that there is a constant increase in the time needed to make this choice as the amount of information increases.
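The fitted relationship above can be sketched as a short calculation. The intercept and slope are the values quoted in the Results; the function itself is only an illustration of how the fitted line predicts per-card sorting time.

```python
import math

# Hick's Law as fitted in Experiment 1:
#   RT = a + b * log2(N)
# where a ~ 24.96 ms is the no-choice handling time per card and
# b ~ 377.68 ms per bit is the rate of gain of information.
a_ms = 24.95680
b_ms_per_bit = 377.68396

def predicted_rt(n_alternatives):
    """Predicted per-card sorting time (ms) for N equally likely piles."""
    bits = math.log2(n_alternatives)
    return a_ms + b_ms_per_bit * bits

for n in (2, 4, 8, 16):
    print(n, round(predicted_rt(n), 1))
```

Doubling the number of piles adds one bit of information, so each doubling adds the same fixed amount (about 377.7 ms per card) to the predicted sorting time.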
Experiment 2
Design:
Subjects gave yes and no responses as to whether a specific item appeared in a memory set of varying size displayed on a computer using a within subjects design.
Materials:
BBC microcomputer with a connecting Yes/No response box.
Procedure:
Subjects were divided into groups and placed into cubicles as in Experiment 1.
A memory scan test was loaded into the BBC microcomputer.
The experiment consisted of a practice session (24 trials) and an experimental session (120 trials).
Each trial consisted of a visual warning (two arrowheads indicated the subject should attend to a particular area of the screen), a series of digits (varying in length from 1 to 6 digits) presented at a rate of 1.2 seconds per digit, a gap of 2 seconds, and finally a probe digit.
As soon as the probe digit appeared the subject was required to decide whether it was in the memory set just presented.
The clock recording response time started at the same moment that the probe digit appeared on the screen.
The subjects responded Yes or No as fast as possible with the preferred index finger.
After the practice session, the subject proceeded with the experimental run.
The end of this was indicated by the appearance of a table of numbers; this was a record of the average response time for both Yes and No responses for all 6 memory set sizes.
Results:
Table 3 and Table 4 show the means and standard deviations of Yes and No response times respectively for each condition.
Figure 4 shows a separate graph for both Yes and No responses on the same axes.   
Table 5 shows there is a significant main effect for the type of response (p < 0.01).
There is also a much greater significant effect as the size of the memory set varies (p &lt; 0.001).
There was no significant interaction effect, showing that the effect of memory-set size on reaction time did not depend on the type of response.
Table 6 and Table 7 show comparisons of response time between sizes of memory sets for Yes and No responses respectively.
Values for memory set 1 and memory set 2 refer to the number of items in that set.
Table 6 shows that the increase in the time to respond Yes was significantly greater between memory-set sizes of 2 and 3 and between sizes of 3 and 4.
After this point there was no further significant increase in response time.
Table 7, however, shows that the time required to give a No response was significantly greater once the memory set exceeded 3 items.
Response time beyond this point was shown to continue to increase significantly.
Discussion:
It was found that when a set of digits, the memory set, was presented on the computer screen and the subject was asked to respond whether or not a "probe" digit was in this set, reaction time increased, although not linearly, as the size of the memory set increased.
There was no significant difference in the manner in which Yes and No response times increased over memory-set size.
However, there were some differences in the degree to which rate in reaction time for these responses increased for different sizes of the memory set.
Subjects initially gave No responses more slowly than Yes responses, but No reaction times increased at a greater rate, tending to converge with Yes reaction times for larger memory sets of about four digits onwards.
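One interpretation often applied to memory-scan tasks of this kind (though not claimed in the report itself) is a serial comparison of the probe against each item in the set. A minimal sketch of its predictions follows; the base time and per-item scan time are hypothetical.

```python
# Serial-scanning sketch of the memory-scan task. Under an exhaustive
# scan, both Yes and No responses compare the probe with every item in
# the set, so both grow linearly with set size at the same rate. Under
# a self-terminating scan, a Yes response stops on average halfway
# through the set, so its slope is about half the No slope.
# Both parameter values below are hypothetical.

base_ms = 400   # encoding and response time, independent of set size
scan_ms = 38    # time per item comparison

def rt_exhaustive(set_size):
    """Predicted RT (ms) when every item is scanned (Yes or No)."""
    return base_ms + scan_ms * set_size

def rt_self_terminating_yes(set_size):
    """Predicted Yes RT (ms) when scanning stops at the match."""
    return base_ms + scan_ms * (set_size + 1) / 2

for n in (1, 3, 6):
    print(n, rt_exhaustive(n), rt_self_terminating_yes(n))
```

Comparing the two functions shows why equal Yes and No slopes, as observed here for larger sets, are usually read as evidence for an exhaustive rather than self-terminating scan.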
General Discussion
Reaction time increases as a function of the amount of information required to be processed in the brain.
In the first experiment, we saw that as the number of possibilities in which the cards may be sorted increases, the reaction time required increases proportionally.
Through this, it was also possible to calculate the time taken for the simple reaction involved in the relay of messages between the brain and the hand.
In addition it was found that as the number of choices which could be made, when choosing the cards, increased by a factor of 2 (another bit of information was required), the time required for the brain to make this choice increased at a constant rate.
In the second experiment, again, it was found that as the number of digits in the memory set increased the time which the brain required before making a correct response increased almost linearly.
This shows that the time to respond to a basic stimulus is fairly small.
Simple reaction time is a fraction of the time required to react even when having to choose between only two possibilities.
Also, by separating the simple reaction time we see that reaction time increases steadily as the amount of discrimination required increases.
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Spring term, Week 4
Lab: Timing Mental Processes II
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: February 13th, 1991
Submitted: February 13th, 1991
Variation in Reaction Time and Recognition of Words with Differing Levels of Processing
University undergraduates were presented with words in a two-field tachistoscope and asked ten questions of each type: 1) is there a word present; 2) is it printed in capital letters; 3) does it rhyme with…; and 4) is it a member of the category…, with five of each leading to a "Yes" and five to a "No" response.
It was expected that as the level of processing deepened, the reaction time required to answer and the number of words recognised would increase.
The results substantiated this, and it was also found that at deeper levels of processing, "No" answers were less commonly recognised than "Yes" answers.
This was considered to be due to recognition of semantically similar words to those seen in the course of the experiment.
Introduction
There have been several suggestions for the way in which memory is stored, including the commonly held multi-stage model (Lee and Conn 1990).
An alternative to this, however, was proposed by Craik and Lockhart (1972).
In their conception, they suggest that memory should not be envisaged as a series of separate stages: sensory, short-term, and long-term memory stores.
Instead, it could more satisfactorily be explained as a continuum, with recall directly dependent on the kind of encoding that is carried out.
The basis for the continuum they put forward was the level of processing to which the input is subjected.
A shallow level of analysis would involve the analysis of the input for its physical details, such as lines, angles, and brightness of physical stimuli, whereas a deep level might involve finding a word which is associated with the input word.
According to Craik and Lockhart, the deeper the level of processing, the stronger will be the memory trace, and the less likely that forgetting will occur.
Craik defined this depth of processing in terms of the meaningfulness extracted from the stimulus rather than the number of analyses performed upon it (Eysenck 1984).
He argued that at deeper levels of processing, more semantic encoding is performed.
The experiment to be described here is a replication of part of the series of experiments reported in Craik and Tulving (1975).
Reaction time of subjects will be measured when required to carry out processing at different levels.
It is hypothesised that words processed at deeper levels will be better recognised than those processed to only a shallow level and that reaction times will be longer at deeper levels of processing.
Subjects:
Sixty-one second-year undergraduates at the University of Edinburgh.
Materials:
A two-field tachistoscope, timer-counter, stimulus cards.
Procedure:
122 students were divided into two groups of 61.
One group was designated to be the subjects, the other experimenters.
One member of each group was assigned a partner from the other group.
Each pair was then placed in an isolated cubicle.
The subject was asked ten questions of each of the four types listed below, five leading to a "Yes" and five leading to a "No" response.
The types of question were:
1) Is there a word present?
2) Is it printed in CAPITAL letters?
3) Does it rhyme with…?
4) Is it a member of the category…?
The 40 questions were randomised and are provided on the question sheet in appendix A which shows the stimulus and question to be asked.
Eight practice trials (one for each question-answer combination) were given before the experiment proper, and the stimuli and questions for these are shown at the bottom of the second page of the aforementioned question sheet.
The subject held one response button in each hand; Yes and No buttons differed in size, and the Yes button was held in the preferred hand.
For each trial, the experimenter placed a stimulus card in the tachistoscope and checked that all the switches were reset.
The experimenter asked the appropriate question and checked that the subject was ready.
S/he then pressed the button to present the stimulus card and start the timer.
Each word appeared for 200 milliseconds and the subject was asked to respond as quickly as possible by pressing either the "Yes" or "No" response button, which stopped the clock.
The reaction time was then read off the timer and recorded in milliseconds in the appropriate column of the question sheet, plus a tick in the appropriate box to show whether the response was correct or not.
The next phase involved giving the subject a surprise memory test in which s/he was given a list of 80 words (the original words plus 40 distractors of a similar type) and was asked to tick the boxes corresponding to the words which s/he recognised as those presented in the tachistoscope.
Scoring: At the end of the 50 experimental trials, all reaction times for incorrect answers were crossed out on the response sheet, excluding them from further analysis.
For the remaining answers, the "Results Sheet", also in the appendix, was used to help calculate the mean reaction times for Yes and for No answers.
The information provided on the "Results Sheet" was used to score the recognition sheet.
This simply involved ticking the relevant box if the subject had ticked the correspondingly numbered box on the "Recognition Sheet", allowing the experimenter to identify the number of correctly recognised words in each of the categories.
Results:
Table 1 shows the means and standard deviations of reaction time for Yes and No answers at each level of processing.
Figure 1 shows the graph of mean reaction time against levels of processing for each type of response.
From this graph it can be seen that, generally, as the level of processing increases reaction time increases. 
From Table 2 it can be seen that there is a significant main effect for level of processing, but no significant main effect for type of response.
Table 3 and Table 4 both show that there is a significant increase in reaction time as the levels of processing increase.
Table 2 also shows a significant interaction.
This interaction shows the effect of increase in levels of processing on increase in reaction time as seen on the graph in Figure 1. 
Table 5 and the graph in Figure 2 show the mean and standard deviation in the number of words recognised for Yes and No answers at different levels of processing. 
Level 1 was left out of the calculation of the ANOVA in Table 6, since the question "was there a word present?" cannot have a non-zero entry for the cell corresponding to a No answer.
As can be seen from Table 6, there is a significant main effect for level of processing for the recognition scores.  
Both Table 7 and Table 8 show a significant increase in number of words recognised as the level of processing increases.
Table 6 also shows a significant main effect for type of response.
Again there is a significant interaction, as seen in Table 6 and Figure 2, between the levels of processing variable and type of response variable.
As the level of processing increases, positive recognition of words increases significantly more than negative recognition of words.
Discussion
From the results we see that, as predicted, reaction times are longer for deeper levels of processing.
This is shown in the ANOVA in Table 2 and the graph in Figure 1.
Table 3 and Table 4 also emphasise the significant increase in reaction time as the level of processing increases.
However, it should also be noted that there is a decrease in reaction time between processing levels 1 and 2, which is significant for No answers.
It can be seen that words processed at deeper levels were better recognised than those processed to only a shallow level.
The ANOVA in Table 6, together with Table 7 and Table 8, shows that as the level of processing increases, words are better recognised.
This is due to deeper semantic encoding of the word in memory.
These results support Craik's proposal that both reaction time and recognition increase as the level of processing increases.
It can also be seen from the results that as the level of processing increases, the ability to recognise words that fulfilled the task increasingly exceeds the ability to recognise words that did not.
This is most likely because the word is retrieved from memory upon sight.
Words recently seen are more likely to be remembered at deeper levels of processing.
But again, due to the depth of processing, words that are semantically similar to those in the experiment may also be recognised.
This would reduce the number of "No" responses.
Craik and Lockhart (1972) argued that there is a direct relationship between the semanticity of processing and the depth of processing.
This, however, assumes that because we tend to process more semantic knowledge than non-semantic knowledge, it therefore follows that semantic processing is equivalent to making use of previous knowledge to interpret presented stimuli in a meaningful fashion.
This is not necessarily true, though, as Stein, Morris, and Bransford (1978) point out: "rather than emphasise the superiority of semantic over non-semantic processing, it may be more useful to ask how people use what they know (whether this knowledge is non-semantic, semantic, etc.) to more precisely encode and retain information."
APPENDIX
Kieran Lee November 27, 1988 Tutor: Andrew Duncan
HOW MANY DIFFERENT TYPES OF LEARNING ARE THERE?
ILLUSTRATE WITH EXAMPLES AND/OR EXPERIMENTAL EVIDENCE
There are two general forms of learning: genetically prepared learning, e.g. instincts, and associative learning, e.g. classical conditioning.
For the purposes of this essay I will concentrate on associative learning which covers what is popularly defined as learning, as opposed to genetic learning which is instinctual, hereditary behaviour that has developed through evolution.
It often prepares organisms for associative learning.
Examples of genetic or prepared learning are sensitisation, habituation, imprinting and, in humans, the specialised physiology required for speech.
The most commonly investigated types of associative learning have been divided into six: classical conditioning, instrumental conditioning, latent learning, insight, vicarious conditioning, and observational learning.
Classical conditioning requires the combined occurrence of two events; it is a procedure in which a neutral stimulus is paired with a stimulus that elicits a reflex or other response until the neutral stimulus alone comes to elicit a similar response.
The study of this particular type of learning was begun by Ivan Petrovich Pavlov.
He devised an experiment to ascertain whether salivation could occur in the absence of any obvious physical cause.
He placed a dog in an acoustically insulated room and fixed a tube to the duct of a salivary gland through the cheek of the dog to divert its saliva into a container, in order that the amount secreted could be measured precisely.
He then carried out an experiment in three phases.
In the first phase he confirmed that when meat powder (an unconditioned stimulus) was placed on the dog's tongue a natural reflex occurred and the dog salivated (an unconditioned response), but that it did not salivate solely in response to a buzzer sounded for 30 seconds, a neutral stimulus.
This neutral stimulus elicited an orienting response, which indicated that the dog was paying attention.
In the second phase of the experiment Pavlov sounded a buzzer and immediately placed meat powder in the dog's mouth.
The dog then salivated.
This pairing sequence was repeated several times.
In the final phase of the experiment, the buzzer (a conditioned stimulus) sounded and the dog salivated after a few seconds without the presence of any meat powder (conditioned response).
This process is called acquisition.
Once a conditioned response develops, stimuli that are similar but not identical to the conditioned stimulus are found to elicit a similar response.
This is called stimulus generalisation.
Usually, the greater the similarity between a new stimulus and the conditioned stimulus, the stronger the conditioned response will be; e.g. if the conditioned response is a reaction to hearing the note middle C, changing the note gradually weakens the response.
The effect of stimulus generalisation is balanced by a process known as stimulus discrimination.
Through this, people can differentiate between similar stimuli.
For example, mothers learn to distinguish between the crying of their own babies and the crying of others.
In humans another example of classical conditioning is the fear of the dark as a result of being frightened as a child (perhaps by a sibling).
In instrumental conditioning, the combined occurrence of three events is necessary; these three events are the stimulus, the response or operant, and the reinforcement.
An operant is a piece of spontaneous behaviour which occurs at some baseline frequency.
A reinforcer is something which changes the rate of occurrence of an operant.
There are positive and negative reinforcers which increase and decrease this rate of occurrence respectively.
One example of this was the experiments of Edward L. Thorndike, who put an animal, usually a cat, in a wooden puzzle box where it had to learn a response, e.g. stepping on a lever to unlock the door and get out.
When the cat was successful it was rewarded with food and then placed back inside the box.
After several trials, the cat managed to get out almost immediately to get the reward.
Thorndike took pictures of what the cats were doing at the moment they escaped.
These showed that the cats rolled around on their backs and, on different occasions, hit the mechanism with various parts of their bodies rather than just one.
This was because rolling always eventually caused the cage to open.
Thus the response became more likely to occur, as the cats were able to get the reward by trial and error.
He called this the law of effect.
According to this law, if a response made in the presence of a particular stimulus is followed by a reward, that same response is more likely to be made the next time the stimulus is encountered.
Responses that are not rewarded are less likely to be performed again.
B.F. Skinner emphasised that during instrumental conditioning an organism learns a response by operating on its environment.
He developed an experimental chamber called the Skinner Box, which contains a device that an animal can operate to get a reward, e.g. rats are usually placed in a Skinner Box which has a lever.
When the lever is pressed, a food pellet drops through a thin tube.
The rats eventually managed to press the lever very quickly after being placed in the box, in order to receive their reward.
This is an example of a free operant task.
There are a few other forms of experimental instrumental conditioning, such as discrete trials and shaping.
Learning is not always used as soon as it is acquired: Edward Tolman gave experimental evidence for this latent learning.
Tolman placed three groups of rats in a maze.
One group was reinforced with food after each trial, a second group was never reinforced, and a third was given a reinforcement only after the eleventh day.
The mean number of errors made during each trial was calculated for each group.
He found that although the rats in the third group made many errors until the eleventh day, upon reinforcement on this day they made almost no mistakes.
The rats could not have had their learning affected by the reinforcement on the eleventh day; it simply changed their subsequent performance.
The rats must have learned the maze earlier and were demonstrating latent learning.
Also, because the rats' behaviour changed immediately after the first reinforcement trial, Tolman argued that the results obtained could only occur if the rats had earlier developed a cognitive map, that is, a mental representation of the particular spatial arrangement of the maze.
The rats used their cognitive maps to achieve the goal, showing they had learned the maze.
Wolfgang Kohler argued that the problems Thorndike had set for his cats determined the type of learning they demonstrated.
Thorndike's puzzle box forced animals to use a trial-and-error strategy in which they eventually had to happen on an answer through associations.
Kohler thought that insight, or sudden understanding of what is required to produce a desired effect, was shown by the chimpanzees in his experiments.
He put a chimpanzee in a cage and placed a piece of fruit outside the cage so that it was visible, but out of reach.
Many chimpanzees overcame these obstacles easily; e.g. if the fruit was on the ground outside the cage, the chimpanzee might thrust its arm through the bars.
When this was unsuccessful it would look around the cage, notice a long stick lying there, and suddenly decide to use it to rake in the fruit.
Kohler tried more difficult tasks and again the chimpanzees proved very adept.
He thought that something more than automatic association was at work, as he observed that the chimpanzees seemed to understand the principle involved in solving the problems.
He believed that the only explanation for these results was the chimpanzees suddenly saw new relationships that were never learned in the past.
The relationship between a response and its consequences, or the association between a conditioned stimulus and a conditioned response, learned by watching others is described as vicarious conditioning.
Albert Bandura devised an experiment to show this, which also highlighted observational learning, the ability to learn new behaviours by watching the behaviour of others.
In his experiment, he showed nursery school children a short film featuring an adult and a "Bobo" doll.
The adult in the film punched, kicked, and threw things at the Bobo doll, and hit its head with a hammer while saying things like "Whammo!"
There were different endings to the film.
Some children saw an ending where the aggressive adult was called a "champion", some saw the aggressor being berated, and others saw an ending where there was neither a reward nor a punishment.
After the film, each child was left alone in a room to play with a Bobo doll.
Bandura found that imitation of the adult in the film was greatest amongst the children who had seen the aggressive adult being rewarded.
Thus, the reinforcement that the children saw in the film, but did not directly experience, influenced their behaviour.
Observational learning was seen as many children imitated the behaviour of the adult in the film punch by punch and kick for kick.
Children who had seen the ending of the film where the aggressor was berated were just as aggressive if rewarded with candy for every act they could re-create from the film.
All learning can be seen to affect the behaviour of an organism when the behaviour it elicits is rewarded.
When organisms learn naturally it is usually through a mixture of these and other slightly different types of learning.
Humans have developed some very specialised forms of learning for use in schools, to encourage students to make best use of their memories and analytical capabilities.
Kieran Lee January 28, 1989 Tutor: Andrew Duncan
DID FREUD OVERESTIMATE THE IMPORTANCE OF SEX IN HUMAN PSYCHOLOGY?
To consider whether Freud overestimated the importance of sex in human psychology it is necessary to consider exactly what Freud said about sex in human psychology.
After this it may be possible to evaluate whether he did overestimate its importance.
Freud began to develop his theories in 1886 during the Victorian Age, a time when it was socially unacceptable to discuss sex, and it was considered that women and children should be seen and not heard.
Most of his patients were middle-class women who suffered from hysteria.
Psychiatry was new and had only been in practice since 1879; neuroses and psychoses were thought to be untreatable.
This, combined with the sexism of the time, led other psychiatrists to attribute these patients' symptoms to the natural inferiority and brain degeneracy of women.
Freud, however, did not hold this view and hoped to find the true root of his patients' hysteria.
Freud originally used the technique of hypnosis, which he had adopted from his friend Josef Breuer, with whom he wrote "Studies on Hysteria".
Breuer had himself borrowed this technique from the French neurologist Charcot.
He used hypnosis in order to suggest away the patient's symptoms.
Freud later developed the technique of free association, a triumph which is often neglected in the discussion of his controversial theories.
He tried to break down the father-role position doctors held, so the patients could feel more relaxed and see him almost as an equal.
He tried to get them to say whatever was on their minds and by following certain lines of thought tried to trace the root of the hysteria.
This technique of free association is a contribution to psychology which is used widely by psychotherapists, including behaviourists.
Using this technique, he often traced the symptoms back to the early years of the patient's life.
The patients would often recollect having been sexually abused by an adult; for most of his female patients this adult abuser was the father.
He later discovered that most of these scenes had never actually occurred and had been fantasies of the patient.
As a result he decided "psychic reality" could be just as powerful a source for the development of neurotic symptoms as "physical reality".
He also discovered that, in the majority of cases, any sexual encounters that the patients had really had in childhood were with other children.
As these sexual fantasies and encounters had occurred from childhood, sexuality must have been present.
This led him to believe what is now scientifically substantiated, that sex is an integral part of the being from birth.
He believed that the sexuality of man, unlike that of any other animal, developed in two waves, and that this could explain man's susceptibility to neuroses.
In the first wave, Freud said, a child goes through a series of phases centring on various erogenous or erotogenic zones of the child's body.
These are referred to as the oral, anal, and phallic phases.
If anything unusual occurs at one of these phases a fixation at the relevant phase occurs.
The phase at which a fixation occurs can determine the form in which a neurosis appears in later life.
The oral phase occurs in about the first year of a child's life.
In this phase the centre of sensual pleasure for the child is found in the mouth, thus the child enjoys placing various things in his/her mouth, sucking his/her thumb, etc.
Fixation at this stage might result in dependence on others, talking too much, overeating, alcoholism, cynicism, or use of sarcasm.
The next phase, the anal phase, usually occurs during the second year of the child's life.
The centre of sensual pleasure becomes the anus.
Fixation at this stage may result if toilet training is too harsh and demanding or is begun too early or too late.
This fixation may manifest itself in adulthood in one of two extremes; either stinginess, orderliness, and excessive cleanliness or sloppiness, lack of organisation, and impulsiveness.
Indeed, my aunt for instance was toilet-trained by the age of one and is excessively organised.
In the phallic phase, which occurs between the ages of three and five, the principal erogenous zone becomes the genitals.
During this phase the sexual instincts (or libido) of a boy cause the child to desire his mother.
The boy feels hostile towards his father, whom he believes to be a rival for his mother's affections.
Freud named this the Oedipus Complex after the character in the Greek dramatist Sophocles' tragedy "Oedipus Rex".
The boy fears retaliation from his father, a fear manifesting itself as "castration anxiety".
As a result the boy represses his sexual desires towards his mother and identifies with the sexual role of the male portrayed by his father.
A female child begins with a strong attachment to her mother, but realises that neither she nor her mother possesses a penis.
She blames her mother for this and sees it as a sign of inferiority; experiencing penis envy, she transfers her affections to her father as he has the penis she wants.
However, in order to avoid her mother's disapproval she identifies with her mother, adopting female sex roles and subsequently choosing a male mate in the place of her father.
After this phase there follows what Freud described as a period of latency during which the sexuality of a child lies dormant.
The second wave occurs at puberty when the young person looks for a sexual partner, and if there has been minimum conflict during the phallic phase this is of the opposite sex.
There is scientific evidence for sexuality in children as studies show that two to three year old boys have regularly recurring erections during REM periods of sleep.
This phenomenon is also present in adult males.
Also a friend of mine has a two year old brother who tries to have sex with his teddy bear.
Many of Freud's interpretations of behaviour resulting from these phases are so taken for granted that they are no longer recognised, e.g. "tight-ass" is a slang expression echoing Freud's description of the anal character.
Freud believed the ancient theory that dreams are symbolic was true.
He analysed the dreams of both himself and his patients.
He discovered that dreams covered up the urges of one's instincts.
He identified a system of symbols that appeared in dreams in place of urges too unbearable to be represented directly, which would otherwise disrupt sleep.
Many of these symbols represented the male and female unclothed body, male and female sexual organs, siblings, parents, birth, death, masturbation, and sexual intercourse.
Despite what many people claim, however, Freud did not say that dream interpretation shows that all dreams have a sexual content.
He also said that "it is easy to see that hunger, thirst, or the need to excrete, can produce dreams of satisfaction just as well as any repressed sexual impulse."
Towards the end of his career Freud developed the idea of the parapraxis, generally referred to as the "Freudian Slip".
A parapraxis need not necessarily be vocal; it may be written.
Misplacing something is also an example of a parapraxis.
He believed that many of these slips revealed a subconscious, often sexual, impulse suddenly and "accidentally" brought to consciousness.
Many people today still see Freud as some kind of sexual pervert, taking his theories to be rather far-fetched.
This is mainly based on common hearsay, as many of these people have read very little of his work.
Freud was no sexual pervert.
He worked twelve hours a day, seeing patients and analysing their cases.
He also analysed his own dreams and from these studies developed his theories.
He stated himself that he did not believe that neurotic symptoms all had a sexual cause, and berated a student writing a paper on chess, saying, "you cannot reduce everything to the Oedipus Complex".
Through his work, Freud realised that some taboos of the time were much more commonly breached than was acknowledged by society.
The sexual abuse of children is known, unfortunately, to be wide-spread today and homosexuality, at the time considered to be a psychological disorder, is becoming more and more common-place as it becomes increasingly acceptable in today's society.
Sigmund Freud did not overestimate the importance of sex in psychology; he simply showed that sexuality existed from birth, played a major part in the psychological development of a human being, and, if suppressed, could result in neurotic disorders.
Many of his findings revealed aspects of society which are more openly acknowledged today.
Some critics say that Freud tried to make sex the explanation for everything.
In truth, he merely tried to break down social barriers and taboos that surrounded sex in order to remove what he saw as a cause for psychological problems of the time.
The statement that he overestimated the importance of sex in human psychology is refuted by an examination of his work and opinions.
Kieran Lee February 24, 1989 Tutor: Andrew Duncan
BRIEFLY DESCRIBE THE WAY IN WHICH NERVE CELLS COMMUNICATE DISCUSSING IN GENERAL TERMS THE MECHANISM BY WHICH THIS COMMUNICATION MAY BE INFLUENCED BY PSYCHOACTIVE DRUGS AND POISONS
A neurone or nerve cell, like all other cells, has a semi-permeable membrane which allows some substances to enter its cytoplasm while keeping others out.
Nerve cells, unlike other cells, have the ability to communicate with other nerve cells by the use of long thin fibres known as axons and dendrites which extend outwards from the cell body, allowing the cell to influence from 1,000 to 100,000 other nerve cells.
The axon of the cell is the fibre that carries signals out from the cell body to the dendrites of other nerve cells.
It may have many branches leading to many other nerve cells.
The dendrites of the cell, on the other hand, are the fibres that receive the signals from the axons of other neurones, carrying those signals to the cell body.
Neurones can have literally thousands of dendrites, each having many branches.
They also have an excitable surface membrane which allows a signal to be sent from one end of the neurone to the other.
Synapses, minute gaps between neurones where one neurone receives signals from another, also facilitate nerve cell communication.
Electrical signals occur between nerve cells as a result of special properties in the membrane of their axons and the synapses between neurones.
Because the membrane of the cells is semi-permeable, certain ions cannot usually pass into the cell.
Thus the distribution of positively and negatively charged particles inside and outside the membrane is usually uneven.
As a result, at resting potential, the membrane of the axon is polarised and the inside of the membrane is slightly negative compared with the outside.
Therefore at resting potential there is an electric potential of -70mV across the membrane, due mainly to this uneven distribution of ions: large negatively charged ions remain trapped inside the axon while positive sodium ions are concentrated outside.
The sodium is only able to pass into the membrane through sodium ion channels distributed along the axon.
These channels are normally closed, but if the membrane around a particular sodium channel becomes sufficiently depolarised the channel opens.
The inrush of sodium then drives that part of the axon to a potential of about +50mV inside with respect to the outside.
This depolarisation brings the neighbouring part of the axon to its threshold, causing its ion channels to open in turn.
This sequence of depolarisation continues till it reaches the end of the axon and is known as self-propagation.
This "firing" of the axon is known as the action potential.
The speed of this action potential down a cell is specific to that cell, but in different cells this speed ranges from 0.2 metres per second to 120 metres per second.
This speed is dependent on the diameter of the axon and whether myelin is present; myelin is a fatty substance that wraps around some axons, speeding the action potential.
The rate of firing of a specific neurone varies; it can fire repeatedly because, after the ion channels have closed, polarisation builds up again.
The brief time taken for this re-polarisation is the refractory period.
Due to the brevity of this period a neurone can send action potentials at a rate of up to 1000 per second.
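The self-propagation and refractory period described above can be illustrated with a toy model. This is only a sketch, not a physiological simulation: the segment chain, threshold value, and step sizes are illustrative assumptions.

```python
# Toy model of self-propagation: an axon treated as a chain of segments.
# A segment that reaches threshold fires and depolarises its neighbour;
# a segment still in its refractory period cannot fire, so the wave stops.

def propagate(n_segments, refractory):
    """Return the order in which segments fire after segment 0 is
    stimulated; `refractory` holds indices still re-polarising."""
    fired = []
    potential = [-70.0] * n_segments      # resting potential, mV
    potential[0] = -55.0                  # stimulate the first segment to threshold
    for i in range(n_segments):
        if potential[i] >= -55.0 and i not in refractory:
            fired.append(i)               # action potential at this segment
            if i + 1 < n_segments:
                potential[i + 1] = -55.0  # depolarise the neighbouring segment
        else:
            break                         # propagation stops here
    return fired

# The wave travels the whole axon...
print(propagate(5, refractory=set()))   # [0, 1, 2, 3, 4]
# ...but halts at a segment that has not yet re-polarised.
print(propagate(5, refractory={3}))     # [0, 1, 2]
```

The second call also shows why the refractory period caps the firing rate: a segment can only join a new wave once re-polarisation is complete.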
The signal is transferred across the synapse between the axon of one cell and the dendrite of another by neurotransmitters.
There are about 50 known neurotransmitters, each of which are used by a particular set of neurones.
When an action potential reaches a synapse it releases the neurotransmitters which spread across the synapse to reach the next, or postsynaptic, nerve cell.
There the neurotransmitters cause a change in the membrane potential of the dendrite of the postsynaptic cell, thus creating an electrical signal.
This signal can be either excitatory or inhibitory.
If it is excitatory, it makes the postsynaptic cell more likely to fire whereas if the signal is inhibitory it makes the postsynaptic cell less likely to fire.
If the signal is the former, a wave of depolarisation in the membrane of the postsynaptic nerve cell's dendrite begins to move towards its cell body, or soma.
However unlike the action potential in an axon this wave fades as it goes along so only if the signal is strong enough to begin with will it pass through to the postsynaptic cell body to create a new action potential.
Neurones may have synapses with thousands of other neurones with a combination of inhibitory and excitatory signals.
Whether or not the neurone fires and how rapidly it fires depends on which signal predominates from moment to moment at the junction of the cell body and the axon.
An axon is enlarged at the tip and this presynaptic area contains vesicles made from the same material as the cell membrane, containing the neurotransmitters that are released in the synapse.
These neurotransmitters stimulate the postsynaptic nerve cells at specialised sites called receptors.
A given neurotransmitter fits perfectly only into its specific receptor.
When a neurotransmitter fits into its receptor it triggers the chemical response that changes the membrane potential and passes the signal from one neurone to another.
A neurotransmitter has only a brief stay at its receptor site.
It must be removed from the receptor after they have connected so that the receptor is not stimulated indefinitely.
This is done either by an enzyme which breaks down the neurotransmitter or, more commonly, the neurotransmitter can be transported back into the presynaptic area.
This process is called reuptake.
The five main neurotransmitters are acetylcholine, noradrenaline, serotonin, dopamine, and gamma-aminobutyric acid.
Nerves that communicate with the use of acetylcholine are said to be cholinergic and are found in the peripheral and central nervous systems.
At the junction of nerves and muscles, these neurones control the contraction of muscles.
Alzheimer's disease seems to stem from a complete loss of cholinergic neurones from a nucleus in the basal forebrain that sends fibres to the cerebral cortex.
Nerves that communicate using noradrenaline are adrenergic.
Half of the noradrenaline in the brain is located in the cells of the locus coeruleus, a part of the brain whose neurones affect as many as 100,000 others.
Serotonin (5-hydroxytryptamine) is found in a similar position in the brain to noradrenaline and affects sleep and moods.
Serotonin is also active in the neural circuits that descend from the brain to help block pain sensations.
One of the substances from which it is made, tryptophan, can be used by the brain directly from food.
This is one way in which food may affect mood and drowsiness.
Dopamine is used by a more restricted number of neurones than the other neurotransmitters, and the axons of these neurones do not branch as extensively.
Some neurones that release dopamine are involved in movement and their degeneration is the cause of Parkinson's disease.
Malfunctioning of the dopaminergic neurones may be responsible for schizophrenia.
Gamma-amino butyric acid (GABA) reduces the likelihood that the postsynaptic neurone will fire an action potential.
It is used by neurones in widespread regions of the brain.
GABA is the main inhibitory neurotransmitter and may be partially responsible for epilepsy.
Neurotransmission can be affected in several ways by psychoactive drugs and poisons.
Psychoactive drugs are chemical substances that act on the brain to create psychological effects.
A poison is a chemical substance which creates harmful effects above a certain dosage; a psychoactive drug can be a poison at high dosage.
They can alter the amount of neurotransmitter released by a neurone, mimic the neurotransmitter at the receptor site (agonists), block receptors for a certain neurotransmitter (antagonists), block reuptake of a neurotransmitter from the synapse, affect electrical conduction in the axon, inhibit biosynthesis of a neurotransmitter, or inhibit the breakdown of the neurotransmitter by enzymes.
Amphetamines release noradrenaline and serotonin.
Venoms from snakes and spiders, such as that of the Taiwan banded krait, cause the release of the neurotransmitter acetylcholine from the presynaptic vesicles, causing uncontrolled contraction of the muscles.
Botulism toxin from bacteria prevents the release of acetylcholine and is the most poisonous substance known.
Benzodiazepines such as valium cause the release of GABA.
Agonists are substances which are so similar to a specific neurotransmitter they can occupy that neurotransmitter's receptor perfectly.
The drug ephedrine mimics noradrenaline and therefore has arousing properties.
It is used to treat narcolepsy, a disease in which a person falls asleep abruptly and unpredictably during the day.
Amphetamines can produce a psychotic effect due to an agonistic effect on dopamine receptors in the CNS.
An antagonist is a substance which is similar enough to a neurotransmitter to occupy its receptors, but is not similar enough to fit perfectly and change the cell's membrane potential, thereby blocking the receptor.
The poison strychnine acts as an antagonist at the receptors of the inhibitory neurotransmitter glycine, causing convulsions.
LSD (lysergic acid diethylamide) is a serotonin antagonist.
Many drugs interfere with the reuptake of a neurotransmitter from synapses.
The result is that the concentration of the neurotransmitter in the synapse increases and its normal actions are enhanced.
Many drugs used to treat depression, such as imipramine and amitriptyline, block the reuptake of noradrenaline and serotonin, thus leaving more of these chemicals at the synapse and in turn relieving depression.
Other substances block the removal of acetylcholine from synapses and from the junctions between neurones and muscles.
This results in excessive contraction of the muscles leading to death.
For example, insecticides and nerve gases act by blocking the removal of acetylcholine.
It is possible that lithium has an effect on the conduction in an axon by replacing the sodium ion and it is used in the treatment of depressive illness.
Poison from the puffer fish, tetrodotoxin, prevents conduction in nerves.
The biosynthesis of noradrenaline in the central nervous system is inhibited by large doses of benserazide.
Substances such as iproniazid inhibit the enzyme monoamine oxidase (MAO), which breaks down the neurotransmitter noradrenaline.
Iproniazid is used in the treatment of depression.
It can be seen from this account that the mechanism by which nerve cells operate can be strongly affected by psychoactive drugs and poisons.
Name: Kieran Lee (Graeme Conn)
Week of Experiment: Spring term, Week 7/9
Lab: Transforming Common Sense into Scientific Knowledge
Class: Wednesday
Demonstrator: Dr. P. Caryl
Due: April 17th, 1991
Submitted: April 24th, 1991
Introduction
Social psychologists regard common sense answers for our attraction to some people rather than others as being inadequate.
Duck (1988) tells us that "common sense is not a good assistant" in understanding interpersonal relationships and tries "to indicate the fundamental inadequacy of some common sense assumptions about attraction to strangers".
Hence, social psychologists attempt to transcend the naivety of common sense, to clear up its confusions and "unfounded speculation", and to transform lay into scientific theories.
In our study, we are going to examine the process and products of this transformation with reference to first impressions.
Anderson and Barrios' (1961) experiment is a typical example of how the folk wisdom that you should always "put your best foot forward" is made amenable to experimental investigation, and then transformed into scientific knowledge.
There are several steps in this process.
First the significance of first impressions is translated into technical language: "the primacy effect" or "word order effect".
Second, "information" and "impressions" are operationalised.
"Information" becomes personality traits which are presented in lists of six.
"Impressions" are measured on a rating scale of -4 to +4 according to their favourability.
Third, the information given to subjects is manipulated in order to determine the effects of "first information" on "impressions".
This is done by constructing different types of adjective lists.
HL lists consist of three highly positive followed by three highly negative adjectives; LH lists consist of three highly negative followed by three highly positive qualities; GD lists consist of adjectives gradually descending from highly positive to highly negative traits; and in GA lists the order of adjectives is reversed; R lists are random in terms of the adjective evaluation.
Anderson and Barrios used 12 lists of each type, making 60 in all.
The idea is that if first information is most important, then HL and GD lists will result in a favourable rating, and LH and GA lists will result in an unfavourable rating.
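As a sketch of this design, the list types and the primacy prediction can be expressed numerically. The trait values and the decaying attention weight below are illustrative assumptions, not the actual Anderson (1968) ratings; the random R lists are omitted since they carry no prediction.

```python
# Sketch of four of the five list types, coding traits as numbers:
# +3 = highly positive adjective, -3 = highly negative adjective.

HIGH, LOW = [3, 3, 3], [-3, -3, -3]
GRADED_DOWN = [3, 2, 1, -1, -2, -3]

lists = {
    "HL": HIGH + LOW,                    # three highly positive, then three highly negative
    "LH": LOW + HIGH,
    "GD": GRADED_DOWN,                   # gradually descending
    "GA": list(reversed(GRADED_DOWN)),   # gradually ascending
}

def primacy_prediction(values, weight=0.5):
    """If first information dominates, later items count for less;
    `weight` < 1 is an assumed model of attention decrement."""
    return sum(v * weight**i for i, v in enumerate(values))

for name, vals in lists.items():
    sign = "favourable" if primacy_prediction(vals) > 0 else "unfavourable"
    print(name, sign)
# Under primacy, HL and GD come out favourable; LH and GA unfavourable.
```

A recency effect would correspond to weighting later items more heavily, which would reverse each prediction.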
Fourth, the process of forming impressions is isolated from any other influences that might ordinarily affect our impressions.
Thus, the adjectives are prerecorded by the same speaker, at a rate of one every three seconds, and played back to subjects.
They are simply told to imagine that the adjectives had been used by 6 different people to describe another person, and to use these traits to form an impression of that imaginary person.
In this way, "information" is isolated from the source of that information and any other possible aspects of the target person.
Moreover, subjects form impressions in the supposedly neutral context of the laboratory cubicle, thus isolated from the complex social situations in which we actually form impressions of others.
This of course has the advantage of controlling for context effects by making the context the same for everyone.
We then impose some "order" on our subjects' responses through statistical analysis.
The statistical order is then related to our original hypothesis.
In keeping with their hypothesis, Anderson and Barrios (1961) did find strong primacy effects.
The next step in the transformation of common sense is to explain the "scientific discovery".
One way is, of course, to ask how the subjects did the task since, after all, they had the experience of doing it.
The more usual alternative however is to explain our discovery as being due to a certain causal process, something within us that makes us behave in a certain way.
Thus, Jones and Goethals (1971), reviewing this and similar studies, argue that the primacy effect could be due to one of three possible causal processes: attention decrement, discounting, and assimilation.
attention decrement: subjects may "hold on" to initial words because they get distracted or bored when they are asked to process sequential information, hence they don't pay attention to later words.
Alternatively, subjects might selectively attend to initial information and ignore later, incompatible information.
discounting: it is argued that we attribute our first impressions to the person (e.g. to a person's traits or abilities), and later or more recent information to the situation (e.g. the task, environment, influence of other people).
Hence, when judging a person we pay more attention to initial information and discount the later information as being due to the situation (especially where later information is incompatible with early information).
assimilation: it is argued that initial information sets up expectancies about a person.
That is, we initially categorise them as this or that type of person and interpret or distort subsequent information to confirm that s/he is this type of person.
Through assimilation we can achieve consistency in our impressions of others.
At this point, we need to test our alternative scientific theories to see which of the above is the "correct process".
Hence, subsequent studies manipulated things like the degree of incompatibility between initial and later adjectives and the instructions to subjects (e.g. whether the adjective are or are not equally valid).
Unfortunately, some of these studies found a recency effect in impression formation.
Attention then had to be directed towards identifying the conditions under which recency rather than primacy would be shown.
We might note the irony at this point: the purpose of experimentation was to clear up the confusions, contradictions, and conflicting advice of common sense.
But our experimental evidence has revealed equally conflicting themes: primacy and recency!
Whilst it may be the most objective way of practising social psychology, the experiment, and the scientific theories to which it gives rise, may not be the most useful or appropriate means of understanding the individual in society.
There are two reasons for this.
First, social psychology is alienating because our everyday lives are not made sense of in our terms.
Instead, meaning and order are imposed on our lives as our behaviour, experiences, thoughts, feelings, and so on, are transformed into statistical data and interpreted within a scientific framework to which we have no access unless we've been trained in the technical language of psychology.
The second reason is a nagging doubt that perhaps we are researching fictions of our own making; that is, we are producing theories about experimental subjects in laboratory situations which have nothing to do with the individual in society.
And this would be somewhat ironic given the definition of social psychology with which we started this discussion.
Perhaps everyday life, far from being ordered, consistent and non-contradictory is chaotic, inconsistent, and contradictory.
In which case, the contrary themes of common sense are perfectly adequate ways of dealing with the contradictions of everyday life.
Perhaps the emphasis on cause and effect, whilst appropriate for the natural sciences, is inappropriate for understanding human behaviour and experience.
It could be that social psychologists have much to learn from the practical knowledge contained in common everyday sense and could more usefully pursue questions of how common sense helps us act in the world, instead of criticising its supposed inadequacies and developing new improved uncommon sense based on the laboratory subject.
In this practical, we shall contrast three ways of making sense of first impressions; two based on the laboratory experiment, one on our experience.
First we will repeat Anderson and Barrios' (1961) experiment.
Then, we will contrast each of these with accounts of first meetings with people we know well.
We shall consider (i) what each might tell us about first impressions and (ii) the utility of social psychology based on the transformation of common sense into scientific knowledge compared with a psychology based in common sense and experience.
Subjects:
Second-year undergraduate students at the University of Edinburgh.
Stimuli:
Thirty lists of adjectives were constructed from Anderson's (1968) list of 555 personality-trait words rated for their likableness and meaningfulness.
Only words rated highly on this latter dimension were included in our lists.
Materials:
A questionnaire: Part I contained 30 rating scales from -4 (highly unfavourable) to +4 (highly favourable) for Person 1 to Person 30.
Like Anderson and Barrios, we had 5 types of list: HL, LH, GD, GA, and R. Part 2 consisted of one question, as follows:
"Please describe in your own words how you reached your decisions (i.e. what strategy or strategies did you use?)
Please list them, if you used more than one.
There are no right or wrong answers."
Method:
Subjects were instructed:
"Today you are all going to do an experiment on impression formation.
You will be read a number of sets of adjectives.
Each set consists of adjectives given by six different people who know the target person well.
You should try to form an impression of the person the adjectives describe.
You will then be asked to rate that impression on an eight point scale ranging from 'highly unfavourable impression' through to 'highly favourable impression'.
You will be given a form on which to record the favourability of your impression of each person."
Subjects were then given an example and instructions in how to fill out the rating scales.
Finally, they were told: "There are no right or wrong answers.
I am interested in your immediate impression of the person described.
Do not spend long making your decision.
Please make sure that you only have one clear circled point on the scale for every person."
The adjectives were read out at a rate of one every three seconds.
Each list was preceded by the Person Number.
Once this stage was completed, subjects were told: "You have just participated in an experiment on impression formation.
The experiment is now over.
You are no longer subjects.
I want you now, as people, to reflect upon how you did the task you were asked to do.
Then, in your own words answer the question at the top of the sheet in front of you as honestly as you can.
Note that there are no right or wrong answers and that your answers are entirely anonymous.
Please write your answers clearly and in list form.
Write as much or as little as you can, but be honest!"
When Part 2 was completed, it was handed in and the purpose of the study was explained.
Several weeks before the experimental study of first impressions, students were asked to "think of someone you know well: a close friend or partner.
Bearing in mind what you know about them now, write a brief account of your first meeting.
Can you remember what you first thought of them?"
They were told to spend a maximum of fifteen minutes on this task and to restrict their accounts to a maximum of half a page.
It was explained that these accounts would be used in a later practical class.
Results
It was found that it was most common for two strategies to be used when evaluating the target person.
The use of three strategies also proved to be popular, followed by one, and finally by four or more strategies.
This shows that generally more than one strategy is used, but normally no more than three.
This is shown in Table 1 and Fig. 1.
As each subject may have used more than one of the strategies when evaluating a person, two judges decided individually which strategies they believed each individual had described, in order to calculate inter-rater reliability.
Where differences arose, the two judges discussed and settled on one or other of the differing categories.
This inter-rater reliability was found to be 0.88, which is high enough for results to be considered reliable. 
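A reliability figure of this kind is a proportion-agreement measure. A minimal sketch of how such a figure could be computed follows; the codings below are invented for illustration, not the study's actual data.

```python
# Inter-rater reliability as simple proportion agreement: the fraction
# of responses that the two judges categorised identically.

def proportion_agreement(judge_a, judge_b):
    """Both arguments are equal-length lists of category labels."""
    assert len(judge_a) == len(judge_b)
    agree = sum(a == b for a, b in zip(judge_a, judge_b))
    return agree / len(judge_a)

# Hypothetical codings of eight responses into strategy categories:
a = ["overall", "serial", "compare", "serial", "key", "overall", "realist", "order"]
b = ["overall", "serial", "compare", "key",    "key", "overall", "realist", "order"]
print(proportion_agreement(a, b))  # 0.875, i.e. one disagreement in eight
```

Disagreements like the one at position four are the cases the two judges would then discuss and settle on a single category.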
The first category, compare and contrast positives and negatives, involves weighing the positive and negative adjectives describing the person against one another to form the final opinion; for example, "[I] weighed all good points against all bad points" or "as far as I was able to tell, every person was described by three positive/favourable adjectives and three negative/unfavourable ones... just as an impression was formed it was negated by the 5th or 6th words, giving it a cancelling out effect.
If there was the choice, I would have given almost all the persons a figure of zero, since to my mind (+3) + (-3) = 0!"
The second category, overall impression, involves listening to all the adjectives describing the person and thereby forming an overall impression; "[I] assessed the overall impression left once all the words had been read out" or "I was not conscious of using any strategy in particular at the time, but I guess I merely listened to all the traits and tried to gain an overall impression."
The serial strategy involved thinking of or placing your finger on an imaginary scale and moving backwards and forwards along that scale as adjectives were successively read out until a final impression was reached; &bquo;after the first adjective I rated the person in my head, then moved backwards or forwards along the scale after each attribute&equo; or &bquo;1. Started with my pen in the middle. 2. Moved up or down accordingly to positivity/negativity of the adjective&equo;.
The strategy of key characteristics involved rating certain characteristics more strongly in either direction than others, for instance &bquo;using the strongest words as a guide to an overall score and ignoring the weaker contradictory ones&equo; or &bquo;weighed up which characteristics I considered most important, i.e. whether trustworthiness was more important to me than politeness&equo;.
The realist strategy involved comparing the description of the person with the subject him/herself or someone the subject was familiar with; &bquo;I also related a lot of the adjectives to whether they were like me&equo;, &bquo;&hellip; and rated the person as if I knew them personally.
Could I have them as a friend/enemy?&equo;
The order effects strategy involved the effect of the position of each adjective in the list on the impression formed; &bquo;found that the last few adjectives had more influence especially if they reflected negative points of the personality&equo;, &bquo;when the list ended with several good qualities this tended to affect any answer since they remained more prominent in my mind&equo;, and &bquo;tended to remember first 3 adjectives better than last 3&equo;.
A seventh, final category covered miscellaneous responses which would not fit neatly into any of the six other strategies; these included &bquo;if they sounded incongruous, &lsqb;I&rsqb; tended to choose an unfavourable response&equo; and &bquo;it was difficult because I kept forgetting what had gone before so probably ended up guessing&equo;.
Table 2 and Fig. 2 show the frequency of each strategy used.
It was found that key characteristics was by far the most popular strategy, followed by realist strategy.
The least popular strategies were miscellaneous and order effects. 
The dominance of the key characteristics and realist strategies shows that the subjects were, generally, comparing the descriptions of the target person with a standard with which they could identify.
Also, the subjects found that certain traits which they felt strongly about were important to whether they were likeable or not.
Naturally, if someone has a characteristic you strongly disapprove of, you will rate them less favourably, and vice-versa.
Order effects was the least common of the main content areas as the order in which the adjectives were mentioned seemed unlikely to affect the way in which somebody judged the merits of a person.
In the accounts of first meetings, the subjects generally described the situation, physical appearance and attractiveness (the first meetings described were often with members of the opposite sex), details of the initial conversation, comparisons of the person with oneself, and reaction of the other person to themselves.
In many cases, first impressions were different from the opinion that later developed.
The physical aspects of the person seemed to be the most important feature at initial meetings.
Personality was rarely mentioned explicitly, but comparisons were sometimes made with the subject's own personality, and aspects of conversation were sometimes described.
Discussion
If statistical tests had been performed on the results of the experiment it is likely, from Anderson and Barrios (1961), that a primacy effect would have been obtained.
Normally, this result would be explained in terms of the three causal processes of attention decrement, discounting, and assimilation.
It would then be necessary to perform further experiments to support our chosen explanation for the primacy effect obtained, using the model shown in Fig. 3.
In this way, we would generate our scientific theory of first impressions.
However, another way of explaining this effect is to ask the subjects how they performed the experimental task.
That is, to explain the experimental findings in terms of the subjects' accounts of the experimental task rather than in terms of causal processes.
Analysis of these strategies showed that subjects prefer to use key characteristics and realist strategies when deciding the favourability of a person.
More than one strategy is also often used when deciding this.
Other strategies with similar popularity were: serial strategy, compare and contrast, and overall impression.
All three of these involve some technique by which after hearing each of the adjectives the subject decides whether the target person appears positive or negative in his/her favour and by what degree.
For each of these techniques, there is slight variation as to how the subject uses each of the adjectives to form a full impression.
Some of the strategies used could be made to explain the primacy effect found.
These can be compared with the causal processes mentioned.
Attention decrement can be compared to the order effect in cases where subjects &bquo;hold on&equo; to initial words and don't pay attention to later words &mdash; &bquo;tended to remember first 3 adjectives better than last 3&equo;.
However, the order effect strategy does not always support attentional decrement.
In many cases, the later adjectives were weighted more heavily than the initial ones &mdash; &bquo;first or last words were in many cases influential in my decision making process&equo; and &bquo;found that the last few adjectives had more influence especially if they reflected negative points of the personality&equo;.
The discounting and assimilation processes bear little comparison to descriptions made in any of the content areas.
This is mainly because, although some adjectives precede others, it is within a relatively small space of time, and a comparison between adjectives relating to the situation versus those relating to the person cannot really be made.
Also, although the order effect strategy was used, it was, next to miscellaneous, the least common strategy to be described.
This leaves little explanation for a primacy effect.
The important point is that this experimental method is only one way of studying first impressions, using the method shown in Fig. 1. This method is characterised by a hypothesis-driven approach.
An alternative is a data-driven approach in which we base our investigation of first impressions on people's experience, observations and accounts and then develop informal theories on that basis.
In accounts that were given of first meetings we find that the situation is often important as it is usually mentioned.
This means that the environment is often very important to the forming of an initial impression.
When describing the person in question, a reference to physical appearance is often made showing that physical appearance is very important.
This is also the first thing we can tell about a person when meeting for the first time.
Accounts rarely mentioned the other person's personality, although aspects of conversation were sometimes included, indicating similarity in interests between the two parties.
From this, we find that first impressions are extremely important; physical appearance and the things we can first tell about a person are very important when developing that impression.
In some respects these accounts are similar to those in the experimental situation where key characteristics were found to be a common strategy.
This ties in with the idea that people like others who share similar interests to themselves.
They differ in that when the impressions are described in the accounts of first meetings, the situation is important to the impression.
Also, subjects were asked to describe the first meeting with a close friend or partner; in this case all the persons described in the accounts were found to be favourable, and the accounts were made in retrospect.
Therefore, instead of revealing the strategy used, these accounts really show the value of first impressions. In the experimental situation the subject merely hears a description of the person from varying points of view; the person is not present and so cannot be judged by the subject in a social context.
This leaves the experimental situation at a certain disadvantage.
From this experiment, I would be inclined to believe that psychology should be predominantly an activity in which we use non-experimental methods for understanding people's experiences in their own terms, taking into account the social context of those experiences, rather than an activity in which we transform common sense into scientific knowledge.
In the latter, we eliminate the variations and complexities of human nature, which are necessary to explain human behaviour.
In using the former, however, it is sometimes difficult to explain how things are actually working at the moment they occur rather than understanding what we see in retrospect after some factor such as clouding of memory or change of opinion may have occurred.
Generally, however, I believe this method is more useful for studying the human condition than an experimental approach which looks at behaviour as the result of the manipulation of variables.
APPENDIX

UNIVERSITY OF EDINBURGH, DEPARTMENT OF BUSINESS STUDIES
BEHAVIOUR IN ORGANISATIONS PROJECT
KIERAN LEE
TUTOR'S NAME: PATRICIA FINDLAY
2nd of February 1990
Introduction:
Organisations can only achieve their goals by the coordinated efforts of their members.
It is management's job to integrate activities and get work done through other members of the organisation.
I interviewed two managers from Burger King in order to analyse the role of managers by relating information gathered in the interview to literature on managerial control, power, leadership, motivation, and technology.
Job description and operational functioning:
I interviewed the store manager and personnel manager of a large and busy Burger King fast food establishment.
The parent company has recently changed hands and decided to change the existing Wimpy into a Burger King.
All the staff have remained the same, but the managers I interviewed are new to this particular store.
It appears that previous senior management were not suitable for an establishment with such a high turn-over.
All the staff, including senior management, are new to the Burger King system, so there have been several teething problems.
This system has been in operation for three months now so the staff are gradually becoming more settled.
The old system was not unlike the new.
The organisation of work is highly structured as shown below:
Crew members are rotated between areas on a daily basis so that they become skilled in all forms of work.
They are also rotated within areas to particular jobs depending on their performance and the pace of work.
Although I interviewed the two most senior managers in the store, they both operated as line managers before being promoted, and take over-all charge of the shop floor on particular days.
Both of them also help when the store is busy, which includes lunch hour between twelve and two o'clock.
The store manager has worked his way up from the bottom and is now concerned with the financial side of operations.
The personnel manager studied resource management at college and joined the company as a line manager.
These differences in education came through in the interview.
The personnel manager tended to answer questions from a more theoretical point of view and the store manager tended to be more practical in his answering.
The role of management:
To understand why managers do what they do, it is important to know how they perceive their role in organisations and what they would class as good management practice.
The store manager saw his role as increasing productivity, whereas the personnel manager saw her role as organising and motivating staff towards the organisation's objectives and looking after the staff's well-being.
Obviously these different view-points are related to the manager's particular speciality.
The store manager's job appeared to involve a large amount of administration of the system as opposed to the actual coordination of staff activities.
The operational system appeared, to a certain extent, to be fairly regulated with highly structured and defined job descriptions.
This left the store manager free for other activities.
This may account for his lack of concern for the organisation of people.
It may, however, be related to how he believes his success to be assessed by more senior management in the organisation, presumably by the profitability of the establishment.
Due to these different perspectives these managers may behave differently towards subordinates.
The store manager did indeed say he was a fairly hard-line manager looking for efficiency, whereas the personnel manager took a more compassionate stance.
Both managers said they would assess managerial competence by a person's ability to achieve the goals expected of them by the organisation.
This includes among other things controlling, coordinating, motivating, and planning as specified by Brech.
In this particular organisation planning will not take a major role because the system is already well-defined and should operate in the same way on a day-to-day basis.
The store manager has no say in objectives of the organisation.
It follows, therefore, to a certain extent that initiative and creativity are not very important characteristics for lower level managers in this organisation.
Power:
Neither the store manager nor the personnel manager at Burger King liked the use of the word power.
Even when it was defined as the ability to change or control the behaviour of others they felt it was inappropriate.
The reason for this was not entirely apparent, but they said that the word power was too strong to describe their control over the employees.
This is emphasised by the Burger King policy to call everyone by their first names.
In my mind this encourages communication on a one-to-one basis therefore reducing power.
If a manager was addressed by his surname and subordinates by their first names the power of the manager would increase because subordinates are no longer treated as equals.
The main reason these managers have power is because they possess rewards desired by employees and have the ability to bring about undesirable outcomes for those that do not comply with directives.
These forms of power are commonly known as reward and coercive power respectively.
People also obey orders given by these managers because the leader's position in the organisation entitles him or her to exercise influence.
This is called position power.
One of the most influential types of power is called expert power.
It is important because it is the easiest for the employees to rationalise.
If an employee respects a person for his/her knowledge or ability to do their job, then they will be more willing to take orders from this person.
These managers did not come across as having expert power, because they had had minimal training and appeared to be unorganised and therefore unprofessional in their conduct.
The store manager appeared to possess a certain charisma which enhanced his position as a manager of others.
This is called personal power.
Control:
The aim of management control systems is to bring about organisational conformity and to achieve the goals of the organisation.
The Burger King organisation operates a highly bureaucratic control system.
It is highly structured with strict procedures and rules for every task.
The technology is designed to limit variation in the conduct of tasks.
A reward system is in operation whereby competent staff get stars on their badges, and a discipline procedure reinforces conformity.
The control system appeared to be designed well and was well-regarded by staff.
Control can among other things build security and help management identify with the organisational system.
The reason for this is that the system is so highly structured that there is no question of uncertainty and it would therefore be impossible to impair the efficiency of it.
This came across in the interview because although the managers were under a lot of pressure they did not appear to be panicked or stressed by it.
Leadership:
Leadership can essentially be described as a relationship through which one person influences the behaviour of other people.
It is commonly believed that to be an effective manager, it is necessary to exercise the role of leadership.
Both managers that I interviewed believed quite adamantly that leadership qualities were something somebody either had or did not have and could not be developed through training.
This is more commonly known as the trait approach to leadership.
The store manager believed these qualities were innate, whereas the personnel manager believed they developed as a person developed into an adult, from which point they could not be altered.
The personnel manager's view appears unrealistic.
If a person develops these attributes over time, it must be possible for them to develop with practice in later life or through training.
This approach to leadership means there is more attention given to selection as opposed to the training of leaders.
This necessity for innate leadership qualities may affect these managers' judgement when choosing someone for a supervisory job.
The trait approach to leadership has several limitations.
Byrd reviewed all research on the trait approach conducted up to the 1940s and found that only 5% of the traits identified for leaders were common to all the research.
Apart from this finding, judgement in determining who is a good leader is bound to be subjective to a certain extent.
The list of possible traits is very long and there is not always agreement on the most important trait.
A more realistic approach to leadership may focus on the situation.
People with different personalities from different backgrounds are effective leaders in different situations.
Obviously, the type of leadership qualities required for a fast food establishment are not the same as for an insurance company.
The assumption behind theories of leadership style is that employees will work harder for managers who employ certain styles of leadership.
Both managers said their style of leadership was authoritarian, where power rests with the manager, as opposed to democratic, where power rests with the group as a whole.
They said that the reason they were authoritarian was due to the pressure involved with the job.
Often there was not enough time to share decision making with subordinates.
They did feel that increased participation by subordinates may be a more effective way of motivating people but that time limitations made this impossible.
The personnel manager felt that if she were working in an establishment with a lower turn-over she would be more democratic because there would be more time.
But it could be argued that she has been placed in a high turn-over establishment because she possesses an authoritarian style of leadership.
Although the style of leadership both managers had was authoritarian they both said they encouraged employee initiative and they also tried to appear friendly and approachable.
This was emphasised by Burger King's policy of first name addressing which encourages equal-footed communication.
This may be an indication that the Burger King organisation prefers a participative style of leadership.
Motivation:
The store manager confessed that staff at Burger King are poorly motivated.
By this he means the degree to which an individual wants and chooses to engage in certain specified behaviour is low.
He believed the key to motivation is making employees feel part of a team, and to improve this he would encourage informal social interaction between management and employees.
This fast food establishment has a high turn-over of staff.
This may be due to the lack of motivation and low rate of pay.
The store manager said that core staff motivation was a problem because after a while the job becomes highly repetitive.
The problem of motivation may be due to the lack of self-esteem and self-actualisation the job gives the employee.
According to Maslow's hierarchical needs model companionship is only third on the list with self-esteem and self-actualisation above it.
Self-esteem is the need to feel good about oneself and self-actualisation is the need to reach one's full potential.
Although Maslow's model is not widely accepted it seems to be fairly accurate at explaining lack of effort and commitment by workers at Burger King.
The job is not seen, by society, to hold much esteem.
In fact it is often described as one of the worst jobs possible.
This cannot be good for the employees' self-esteem.
Employees possibly feel they do not reach their full potential in the job because promotion is scarce and the job is not stimulating either physically or mentally.
The managers I interviewed appeared to lack commitment to improving motivation, although they identified it as a problem.
The store manager said he would motivate subordinates in the following ways: offering praise, incentives, providing re-training, and finally a quiet talk if nothing else succeeded; in other words threatening them with dismissal.
The personnel manager said she would offer friendliness and praise to motivate employees.
These methods seem fairly primitive.
It was as though these managers expected motivation to stem from the organisation as opposed to management techniques.
The managers could have improved motivation by printing target sheets or graphs of the store's performance so that employees had something to aim for.
Technology:
The managers I spoke to have both taken up positions in this particular store because previous management were not competent in their jobs.
In addition to this the shift from Wimpy to Burger King has altered the technology used in production of the hamburgers.
The store manager said he found the new technology more labour intensive and less flexible.
For example, the new system relies on more machines for the making of different parts of the same hamburger.
The technology has taken on a more dominant role in controlling the way work is carried out and also the number of staff involved in the system at a given time.
This means that the store manager's objective to alter damage done by previous management is very difficult because the system is now much harder to manipulate.
Conclusion:
The role of management is not as simple as it may appear at first glance.
This project analysed management's role in terms of power, control, leadership, motivation, and technology.
It would be impossible to cover all areas of management practice within the limited confines of this project, but I hope to have covered all the important areas relevant to these manager's particular jobs.
References:
Behaviour in Organisations 1989-90 Lecture Notes
APPENDIX
The interview took 30 minutes for each manager.
I was given a guided tour of the shop floor before the interview.
Questions:
1)
What would you say was the main task of managers?
2)
How would you assess whether a manager or supervisor was good at his job?
3)
Would you say that managers tend to lose sight of labour relations and concentrate on the market?
4)
Is power disguised as stemming from the organisation rather than management at Burger King?
5)
Would you say that there is a good relationship between management and employees here?
6)
Are employees cooperative?
7)
Everybody is called by their first names; why?
8)
Surely, this reduces the power of the managers because they are less respected.
9)
Would you prefer it if managers were treated with more respect?
10)
Do you believe leadership qualities are something a person is born with or can they be learnt?
11)
Have you had any specific leadership training?
12)
Do you believe the style of management leadership affects the willingness of employees to work?
13)
What would you say that your style of leadership is?
14)
Which style of leadership makes people more productive?
15)
What is your own personal philosophy as a leader of others and as a manager?
16)
Why do you have more power?
Is it due to:
Position
Have a valued resource
Respected because you are good at your job
Have charisma
17)
Is power important?
18)
How do you motivate people?
19)
Is it important to motivate people or does it come from another factor of the job?
20)
Does Burger King have a philosophy for managing people?
21)
How has the change from Wimpy to Burger King affected management-employee relations?
22)
Is it an improvement, if so why?
23)
Do you have any problems managing people?
24)
Is there anything you would change?
UNIVERSITY OF EDINBURGH, DEPARTMENT OF COMPUTER SCIENCE
COMPUTER SCIENCE 2
Review and Comparison of CISC and RISC Architecture in Computer Design
KIERAN LEE
TUTOR'S NAME: TIM HOPKINS
31st April 1990
During recent years there has been a growing debate over the merits of the two principal architectures for computer design, CISCs (complex instruction set computers) and RISCs (reduced instruction set computers).
CISCs were first introduced in 1963 by IBM in order to standardise the instruction set for their series of computers.
Before this point, whenever IBM had developed a new computer it used a completely different architecture from previous models and consequently existing software and hardware were discarded.
IBM, therefore, developed a standard instruction set that would allow the development of faster, more complex computers without major changes to the computer's basic architecture.
This allowed a large amount of compatibility between software and peripherals of the existing and newly developed IBM machines.
The RISC, though first developed under the name 801, was also pioneered by IBM in 1975.
This particular model was never sold commercially, but influenced Berkeley University's RISC I and II projects.
RISC I was developed as the university believed it was feasible to build a single-chip computer faster than the Digital Equipment Corporation's VAX-11/780, a CISC design.
As there is a tendency to increase the complexity of CISCs in order to upgrade existing products, resulting in increased design time, more design errors, and inconsistent implementations, the university believed that a simpler RISC could be a cheaper alternative in a modern world where high-level languages predominate and there is little need for complex instructions for writing assembly code.
The RISC hypothesis is that by reducing the instruction set one can design an architecture which reduces the design time, number of design errors, and the time for individual instructions.
The VAX 11/780, hereafter referred to as VAX, is the Virtual Address Extension of the DEC PDP-11 architecture.
The principle behind the design of the VAX was to upgrade the PDP-11 architecture by preserving the existing instruction formats and instruction set and fitting the virtual address extension around them. It was discovered, however, that this principle compromised the design too heavily in the areas of efficiency, functionality, and programming ease.
As a result, the constraint of the PDP-11 instruction format was dropped in the main or &bquo;native&equo; mode of the VAX, but a &bquo;compatibility&equo; mode was introduced in order to run existing PDP-11 programs.
This way, DEC were able to maintain investment in PDP-11 software by those who were not prepared to upgrade to VAX immediately.
Also, in VAX's native mode data types and formats are identical to those on the PDP-11 and, although extended, the instruction set and addressing modes are very close to those on the PDP-11.
Two main criteria were used when designing the VAX instruction format; all instructions should have the natural number of operands and all operands should have the same generality in specification.
This led to a highly variable instruction format whereby an instruction consists of a one- or two-byte opcode followed by a number of operands specific to that opcode.
VAX, like other CISCs, also has a series of addressing modes.
These addressing modes are a superset of the PDP-11 addressing modes with one exception, the PDP-11 addressing mode auto-decrement deferred was omitted from VAX as it was rarely used.
These addressing modes allow the programmer to carry out memory addressing tasks such as auto-incrementing or auto-decrementing an array index after, or before, carrying out an operation.
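To make the idea concrete, here is a sketch in Python rather than VAX assembly (the register file, memory layout, and `load_autoinc` helper are all invented for this illustration): an auto-increment mode folds the index update into the memory access itself, where a machine without such a mode would need a separate add instruction.

```python
# Sketch: simulating an auto-increment addressing mode. A register
# holds an address; reading through it returns memory[address] and
# then bumps the address by the operand size, all as one conceptual
# "instruction". The names (regs, memory, WORD) are invented.

WORD = 4                            # operand size in bytes
memory = {100: 7, 104: 8, 108: 9}   # a small array at address 100
regs = {"r0": 100}                  # r0 points at the array

def load_autoinc(reg):
    """CISC-style: load via reg, then auto-increment reg in one step."""
    value = memory[regs[reg]]
    regs[reg] += WORD
    return value

# Summing the array with auto-increment addressing:
total = sum(load_autoinc("r0") for _ in range(3))
print(total)        # 7 + 8 + 9 = 24
print(regs["r0"])   # r0 has advanced past the array, to 112
```

On a RISC-style machine without such a mode, each iteration would instead be a plain load followed by an explicit add to the index register.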
The VAX has a vast, complex instruction set consisting of 243 instructions.
Although many of these instructions are specialised and as a result rarely used, the rich instruction set reduces the size of machine code software and thereby reduces memory cost.
The VAX system as a whole consists of a central processing unit (CPU), the console subsystem, the memory subsystem, and the I/O subsystem.
The CPU, memory, and I/O subsystems are joined by the Synchronous Backplane Interconnect (SBI).
The 8-byte instruction buffer in the CPU decomposes the highly variable instruction format into its basic components and constantly fetches ahead to reduce delays in obtaining the instruction components.
When originally designed, it was decided that the RISC I should execute one instruction per cycle, all instructions should be the same size, only load and store instructions would access memory, the rest would operate between registers, and that RISC I would be designed specifically with high-level languages in mind.
The RISC I instruction set contains a few simple operations which operate on registers.
Instructions, data, addresses, and registers, are all 32 bits.
Instructions fall into four categories: arithmetic-logical, memory access, branch, and miscellaneous.
The execution time of a RISC cycle is given by the time it takes to read a register, perform an ALU operation, and store the result back into a register.
Register 0, which always contains zero, allows the synthesis of various VAX operations and addressing modes.
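As a hedged illustration of how a hard-wired zero register synthesises other operations, the tiny interpreter below is invented for the example (it is not the actual RISC I implementation): a single three-operand add stands in for both a register move and a clear.

```python
# Sketch: a hard-wired zero register synthesising other operations.
# The three-operand "add" interpreter below is invented for this
# illustration; r0 always reads as zero and discards writes.

regs = [0] * 8   # r0..r7

def read(r):
    return 0 if r == 0 else regs[r]

def add(rd, rs1, rs2):
    if rd != 0:                      # writes to r0 are discarded
        regs[rd] = read(rs1) + read(rs2)

regs[2] = 42
add(3, 2, 0)     # move:  MOV r3, r2  is just  ADD r3, r2, r0
add(4, 0, 0)     # clear: CLR r4      is just  ADD r4, r0, r0
print(regs[3], regs[4])   # 42 0
```

In the same spirit, comparing a register against r0 gives a test against zero, so several VAX operations need no dedicated opcode of their own.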
Load and store instructions move data between registers and memory.
These instructions use two CPU cycles.
There are eight variations of memory access instructions to accommodate sign-extended, zero-extended, and 32-bit data.
Branch instructions include call/return, conditional, and unconditional jumps.
The conditional instructions are the standard set used in the VAX.
There are a total of 26 simple instructions in RISC as compared with the 243 in the VAX complex instruction set and there are no addressing modes.
This allows for much quicker execution of instructions.
In order to allow compilers to generate effective code from high-level languages, it was decided that VAX should have a very regular and consistent treatment of operators, avoid instructions unlikely to be generated by a compiler, include several forms of common operators, and replace common instruction sequences with single instructions.
The effect of this can be seen as the compiled code for VAX is typically smaller than PDP-11 code.
Also, about 75 percent of the instruction set is generated by the VAX Fortran compiler.
RISC I was designed with the intention that it would always be used with high-level languages, as these are the languages in greatest commercial use today.
It does not matter whether a high-level language system is implemented mostly by hardware or mostly by software, provided the system hides any lower levels from the programmer.
Given the limited number of transistors that can be integrated into a single chip computer, the RISC high-level language is implemented mainly using software, with hardware support for only the most time consuming events such as the passing of parameters in procedure calls.
Program data shows that the procedure call/return is the most time-consuming operation in typical high-level language programs.
The statistics on operands emphasise the importance of local variables and constants.
RISC I attempts to make each of these constructs efficient, while implementing the less frequent operations with subroutines.
Potentially, RISC programs may have an even larger number of calls since the complex instructions found in CISCs are subroutines in RISC I. Thus the procedure call must be as short as possible, perhaps only a few jumps.
The use of procedures involves two groups of time-consuming operations: saving or restoring registers on each call or return, and passing parameters and results to and from the procedure.
Measurements on high-level language programs suggest that local scalars are the most frequent operands, so RISC supports the allocation of locals in registers.
To avoid this register saving and restoring, RISC I keeps multiple banks of registers on the chip.
Thus each procedure call results in a new set of registers allocated for use by that new procedure.
The return just alters a pointer which restores the old set.
Some of these registers are not saved or restored on each procedure call.
These registers are called "global" registers.
In order to allow parameters to be passed in registers, the set of window registers is broken into three parts: high, local, and low.
The high registers contain parameters passed from above the current procedure.
The local registers are used for local scalar storage as described, and the low registers are used for local storage and parameters to the procedure "below" the current procedure.
On each call a new set of window registers is allocated.
The low registers of the caller become the high registers of the callee, by overlapping the registers of the calling frame with the high registers of the called frame.
This overlap is an important aspect of the hardware, as it is one of the few ways in which the RISC hardware directly supports the instruction set.
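The mechanics of the overlap can be sketched in Python (a simplified model with invented section sizes; the real RISC I window sizes, physical register count, and the traps taken on window overflow and underflow are not modelled here):

```python
# Sketch: overlapping register windows. Each window has HIGH (incoming
# parameters), LOCAL, and LOW (outgoing parameters) sections, and the
# caller's LOW registers ARE the callee's HIGH registers.

SECTION = 4                 # registers per section (invented size)
physical = [0] * 64         # one large physical register file
cwp = 0                     # current window pointer (base of HIGH section)

def reg(i):
    """Map window-relative register i (0..11) to a physical index:
    HIGH is 0-3, LOCAL is 4-7, LOW is 8-11."""
    return cwp + i

def call():
    global cwp
    cwp += 2 * SECTION      # new HIGH section starts where old LOW began

def ret():
    global cwp
    cwp -= 2 * SECTION      # return just moves the pointer back

# The caller puts a parameter in its LOW section...
physical[reg(8)] = 99       # caller's first low register
call()
# ...and the callee sees it in its HIGH section, with no memory traffic.
print(physical[reg(0)])     # -> 99
ret()
print(physical[reg(8)])     # -> 99 (caller's registers untouched)
```

Because `call` and `ret` only adjust a pointer, a procedure call moves parameters and preserves the caller's registers without a single load or store.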
Performance is increased by prefetching the next instruction during the execution of the current instruction.
This introduces difficulties with branch instructions.
This was solved by redefining jumps so that they do not take effect until after the following instruction; this is a delayed jump.
The machine language code is suitably arranged so that the desired results are obtained.
As this only affects the machine code, the high-level language programmer is unaffected; the burden falls solely on the writers of the compiler, optimiser, and debugger.
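The delayed-jump convention can be modelled with a toy interpreter (purely illustrative; the instruction tuples are invented and are not RISC I machine code):

```python
# Sketch: a delayed jump. The instruction AFTER a jump always executes
# (the "delay slot"), so the compiler moves a useful instruction into
# that slot instead of wasting the prefetched instruction.

def run(program):
    regs = {"a": 0}
    pc, pending = 0, None        # pending holds a delayed jump target
    while pc < len(program):
        op = program[pc]
        if op[0] == "set":
            regs[op[1]] = op[2]
        elif op[0] == "add":
            regs[op[1]] += op[2]
        if pending is not None:  # a jump executed one instruction ago:
            nxt, pending = pending, None   # it takes effect now
        else:
            nxt = pc + 1
        if op[0] == "jump":
            pending = op[1]      # takes effect AFTER the next instruction
        pc = nxt
    return regs

prog = [
    ("jump", 3),      # 0: branch, but it only lands after line 1
    ("set", "a", 1),  # 1: delay slot -- still executes
    ("set", "a", 99), # 2: skipped
    ("add", "a", 10), # 3: jump target
]
print(run(prog))      # -> {'a': 11}
```

Had the jump taken effect immediately, the instruction in the delay slot would have been skipped and the result would be 10; the compiler's job is to fill that slot with work that is wanted either way.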
Results showed that window registers are effective in reducing the cost of using procedures and off-chip memory accesses are reduced.
Tests using C programs show that less than 20 percent of instructions are loads and stores, while more than 80 percent are register-to-register.
This shows RISC successfully changes the allocation of variables from memory into registers.
This also indicates that RISC requires a lower number of the slower off-chip memory accesses and that complex addressing modes are not necessary to obtain an effective machine.
Tests comparing RISC with the VAX using similar C compilers show that on average RISC executes only two-thirds more instructions, and RISC programs were only about 50 percent larger even though size optimisation was virtually ignored.
Dynamic results were tested using Forest Baskett's "puzzle", which features very few procedure calls, a deep call stack (20 nested procedure calls), and a relatively large number of loops.
These results showed that it made no difference to RISC whether register variables (hints that variables will be used frequently and should be kept in a register) were declared or not.
This is because the architecture makes it relatively simple for a compiler to allocate local scalars in registers, so there is no need for a language to give hints telling which should be used.
RISC I was successful in reducing the number of data accesses substantially in all programs.
RISCs are reliable and free from design errors, whereas with increasing complexity design errors are becoming more and more frequent among CISCs.
RISCs also have fewer transistors on a chip than CISCs making RISCs cheaper to produce.
The advantage of CISCs over RISCs, however, is the ability to transfer software from one machine to a more complex, faster model as most manufacturers provide customers with object code and not the source code.
For example, compiled executable programs for the 8-bit Intel 8088 are also executable on the company's 32-bit 80386 machines, whereas the CISC Sun 3 is not compatible with the RISC Sun 4 machine.
Given the increasing chip density necessary for more complex CISC designs, resulting in increasing chip faults and greater cost, RISC will be a major challenger to the CISC in the future.
The main reason that CISCs are still in wider use today than RISCs is that commonly used commercial operating systems, such as MS-DOS for the IBM PC, cannot be emulated well on a RISC computer.
It requires extra software, and the IBM XT emulator for the Acorn Archimedes, whose ARM microprocessor uses a RISC architecture (based on the original RISC I design), runs more slowly under the MS-DOS operating system than the Intel 8088, the slowest of the PC processors.
Also, as RISCs are designed with high-level languages in mind, programming them in assembly is very tedious, unlike assembly programming with CISCs, where there are instructions for complex tasks and so programming is relatively simple and, as a result, less time-consuming.
However, assembly is not widely used at present and as more companies develop RISCs it is likely that this architecture will dominate the CISC design in the future.
"If active citizenship is to be established in the public domain then the institutions of participatory democracy must be strengthened."
Critically assess this view.
Political Theory Extended Essay
KIERAN LEE
Tutor: Dr Paul Smart
April 26th, 1991
The citizens of most countries like to believe they live in a democracy, i.e. the state being governed by all the people.
However, how active are citizens today, and is the system organised in such a way as to foster, or stifle, their participation?
Could it not be said that the overwhelming feature of modern life is apathy among the majority of its members, which should engender concern regarding the way government is designed?
Government seems to ensure citizens are pinned under its powerful thumb.
Thus it could be alleged that we are all vulnerable to this power, individuals having little control over their lives in the shadow of a powerful and looming state which offers them protection and a set of civil and political rights, the main one being to elect representatives, in return for their acquiescence.
As Keane says, "How do we guarantee the survival and future growth of democracy... the troubling paradox is the growing respectability of democracy has turned out to be a disappointing affair - certainly if democracy means a pluralistic system of power wherein decisions of concern to collectives of various others within civil society and the state are made directly or indirectly by their members".
It may be true that democracy, though a virtuous and desirable political form, is always under threat and incomplete in certain respects.
But, how incomplete is our political system today; does it satisfy or repress its citizens?
Liberal, representative government is the accepted form of democracy advocated by all liberals from Locke to Hayek.
Citizenship rights were developed along with liberal government due to modernisation and protest in order to obtain a greater degree of participation.
How far, therefore, does our present system of representative competitive democracy encourage active citizenship?
Schumpeter advocates it and describes it as being "that institutional arrangement for arriving at political decisions in which individuals acquire the power to decide by means of a competitive struggle for the people's vote."
This is, therefore, a "method which is well-designed to produce a strong, authoritative government."
Leaders therefore decide on behalf of the people and the system helps individuals to achieve their own goals.
Participation and active citizenship therefore have a limited role due to the desire to preserve stability and the claims that it is unrealistic to have citizens active in decision-making; "the electoral mass is incapable of action other than stampede".
The people's role in the main is the production of a government to represent their views, enforcing Montesquieu's view that "political liberty consists in security".
If the populace are unhappy with the actions of a government, they can vote representatives out.
The state is seen as a medium which allows individuals to pursue their own interests, promoting freedom and autonomy.
Among political theorists it is widely accepted that, to sustain a democracy, participation and active citizenship should have only a minimal role.
Any increase in participation would destabilise the political system.
Several authors, who see the failings of our present system, do not wish to see an extension of participatory democracy.
Bobbio, like Schumpeter and Dahl, wishes to retain the competitive model of different parties for the people's vote.
Active citizenship may not be possible as today life is geared around the individual and common interests are hard to identify - "people are ill-informed judges of their own interests therefore representatives do a better job."
They will initiate more effective policies than a "badly-informed populace".
Bobbio has doubts, therefore, as to the educative benefits of participatory democracy.
It may not produce a positive result.
He is anxious that it will increase conflicts between groups due to the strengthening of and increase of group identity.
Instead Bobbio wishes to extend democratic control to a number of areas within society with strengthened civil, political and social rights.
He is concerned with the existence of invisible power within the state, creating a barrier between it and the people.
The need to penetrate this curtain is imperative, thereby allowing reform so the state is more in tune with the wishes of society.
Even though democracy has been a disappointment, Bobbio still prefers the rule of law to the rule of men as hailed by participatory theorists.
Walzer calls it the "problem of citizenship" and he expresses deep doubts that the ideal of participatory active citizenship can be realised.
He feels we have to live with what we have helped to create.
Pluralism or membership of associations, he suggests, provides the individual with the participation lacking in a liberal democracy.
Active citizenship, in the form of participatory democracy, is only possible in small communities where there is no organisational complexity.
Perhaps the easier participation becomes, the less effective its efforts would be.
However his views seem split - "but if we are not willing to rule in our turn, other men will rule out of theirs.
They will call us citizens but we will be something less, perhaps I should say they do call us citizens but we are something less."
To many this is no argument; our system needs drastic revision.
Those who argue for participatory democracy believe active citizenship cannot be established within the limitations set by the existing liberal-democratic framework.
They press for much needed democratic reforms to enable the citizens to participate.
Without doubt, the legacy of the competitive model of democracy can be criticised.
Many reject democracy in terms of party competition, majority rule and the rule of law.
Active citizenship involves the participation of individuals in the social, economic and political organisations of the state.
Our present system has merely augured the growth of passive citizenship.
The Conservatives believe that as long as government allows individuals to pursue their own goals, their citizenship is served.
As such, they believe active citizenship is alive, pointing to voluntary and charitable work and community control.
If active citizenship is to be established, changes are essential as our present system does not seek to achieve it.
It could be said that our present system does not invite people to be active; does not foster participation.
Our lives are still controlled by a distant central government, which can impose policies on the electorate and rule in Britain when nearly 60% of the electorate are opposed to the party in government and therefore have no political voice.
Thus, arguments for participatory democracy focus on the inactivity of the large majority of the people in the political arena today.
Held asks why "for so many people the fact something is a recognisably 'political' statement is almost enough to bring it instantly into disrepute".
Finley comments on the indifference and ignorance of a majority of the electorate in Western democracies.
Public apathy and political ignorance is an indisputable fact today.
Political parties today are condemned for having little or no room for meaningful citizenship participation.
Citizens in our present system are limited to the role of occasional voters.
Parties are dominated by specialisation, expertise, organisation, and bureaucracy.
Leadership, as Keane says, "resembles a blind and arrogant monarch when it comes to the real issues".
It is alleged that party government and politics are inimical to genuine democracy.
Leaders decide regardless of the vote.
While elections are crucial to democratic institutionalism, Keane states that "the mandate or representation principle upon which all parties thrive robs citizens of all significant responsibilities for their judgements and actions.
In the act of voting, meaningful citizenship drops with the marked paper into the ballot box - both disappear simultaneously."
Thus it seems hardly sufficient for people to vote every four or five years at a time of a general election, the brevity of the act carrying little weight or influence.
This could be classed as "periodic citizenship", whereby accountability to the public, through the electoral process, is reduced.
There is no way of knowing, or even seeking to find out, what people want, even when they are in agreement with the government.
They are, in a sense, controlled, and the system is "incomplete for fulfilling citizenship in any period".
The views of the electorate can be registered only very occasionally, if at all, and then only in the most general terms, identifying little more than the side of the political spectrum on which they fall.
"The election and direct involvement of a small number of representatives does not give adequate opportunity for participation and expression of public views which can strengthen democracy."
Pateman also says the only democratic element in liberal democracies is the vote; when representatives then make political decisions on their behalf, the electorate renounce power.
Liberal democratic voting, therefore, is a "series of renewals to obey".
Individuals cannot choose what they vote on or when they vote.
The "only means of modifying what they have done is to perform the same action when next invited to do so".
Many feel political participation must be made worthwhile before active citizenship can occur.
In order for individuals to be interested in their community they have to feel they have a say.
The electorate are merely "political actors" - "at election times, individuals put on what Marx called their 'political lion skin' of citizenship but underneath they act as before".
Voting, in itself, is not sufficient to advance interests.
In some countries a large majority of the electorate do not even exercise the right to vote.
This is accompanied by low rates of political participation.
These are largely women and lower socio-economic groups who feel most excluded from the system.
So is this apathy and lack of interest not the result of a political system which dissuades people from taking an active role?
Exclusion in many ways facilitates their apathy.
The Left believes that, generally, given the concentrated forces arrayed against them, people perhaps cannot get into "real politics".
Even membership of political parties is declining.
Duncan says "the brute fact remains that democratic politics cannot prevent the creation of remote, state-entrenched centres of power which tend to promote general apathy, cynicism, and ignorance about politics among the masses of the people".
This is not a healthy situation.
It is argued democracy needs to be redefined in function and meaning.
Finley says "the issue is whether this state of affairs is, under modern conditions, a necessary and desirable one or whether new forms of popular participation in the Athenian spirit need to be invented".
Alienation within the present system has led to activity outwith the political system and to the creation of social movements through the need to publicise grievances and uncertainties about everyday life and to urge the necessity of democratising life in these areas.
This is the most active expression of citizenship in recent years outwith government.
Decisions therefore can be influenced by these active movements, e.g. greens, peace, and feminists, which highlight issues and capture the public's attention and thus bring pressure to bear on the government during inter-electoral periods.
To this extent active citizenship is available to all.
The public can join and campaign, supporting beliefs and causes with the intent of influencing popular opinion and government.
The need for these movements shows the ineffectiveness and unwillingness of the present system to deal with issues central to the voter.
Movements have accused existing parties of not reflecting the true political nature of the country, "stocking the pores of civil society and frustrating and screening out political dissent", thereby limiting activity in the public domain.
Reactions of the government to green/women's issues have been superficial, clearly lacking and without substance.
What is important to a large majority of people takes second place to the market.
Civil disobedience to obtain what the voters want has been used most effectively in poll tax demonstrations and non-payment campaigns.
Although this was against the law, the poll tax was illegitimate - imposed from above.
In Scotland, the electorate had to pay this tax for an extra year.
A truly democratic government would have realised the populace was against the tax and abandoned it immediately.
Now the tax has been scrapped, but was it due to citizenship or merely the need of a new leader to gain popularity?
Despite its demise, we still have to pay this tax for the next two years and can be subject to warrant sales or heavy fines for non-payment.
The power of the state is overwhelming.
Undoubtedly, the Conservatives have been quite content to increase state domination.
Our potential to influence the structure, responsibilities and policies of government is severely limited.
We seem to be passive in the face of authority, conditioned to accept greater intrusion in our lives in a supposedly free society.
We experience a lack of power, supporting Rousseau's allegation that there is a "tendency of all governments to degenerate" as the state's freedom increases while ours slips away.
Citizens today increasingly seem to believe what they are told.
They don't care about extending rights or being active as they are more interested in working and paying off debts.
They are impotent in the face of a centralised, powerful state.
This power needs to be tackled.
Many are questioning the threat of a £400 fine, which is being imposed for failure to complete the 1991 census.
All of the above need to be tackled.
Active citizenship is severely lacking in the public domain today.
Many look to those classical theorists, such as J. S. Mill and Jean-Jacques Rousseau, who argued for a participatory system of democracy where citizens are sovereign.
As civic republicanism asserts, the individual can exert a meaningful existence within society in the public realm.
The situation today is divorced from what Mill/Rousseau envisaged, each believing in participatory democracy contained within the ideal of a "rational, active, and informed democratic man".
Pateman believes their work is still valuable today.
Our system forgets Mill's comment that "the nation did not need to be protected against its own will.
There was no fear of its tyrannising over itself."
A democracy should strive to make the most of its people; encourage active citizenship.
It should not repress or propel the citizenry away from political life but be "informed by an entire people to the point where the intellectual, emotional, and moral capacities have reached their full potential and they are joined freely and actively in a genuine community".
To Rousseau, non-participatory institutions posed a threat to freedom - "man is born free and he is everywhere in chains".
Democracy should embody the popular will - "every law which the people has not ratified is null and void, is in fact not a law.
The people of England sees itself as free but is grossly mistaken; it is free only during the election of members of parliament.
As soon as they are elected, the people is enslaved, it is nothing".
This seems alarmingly true today.
Over the past twelve years, in regard to the Conservative government's radical course of action, changes have been made to the Welfare State and life in Britain that we have been powerless to resist.
Thus we require participatory democracy to give the individual a real measure of control over the life and structure of his/her environment.
General will should ensure the equality and liberty necessary for active citizenship - taking collective decisions.
Participation is "central to the establishment and maintenance of a democratic polity".
For Pateman, participatory democracy hinges on the premise that individuals and their institutions cannot be placed apart.
Problems of political obligation can only be overcome by participatory political associations which would allow citizens to create their own political obligations.
Decisions can be reassessed, and if necessary changed, only by them.
Political disobedience is sanctioned as a possible expression of active citizenship on which a self-managing democracy is based.
To Finley, debate and discussion are imperative.
Democracy requires a society willing to take risks.
He argues for the classical form of government; to rest on political apathy, he believes, "is a way of preserving liberty by castrating it".
For Stewart/Ranson, the "distinct challenge for the public domain derives from the duality of publicness - need to enable citizens in their plurality to express their contribution to the life of the community and out of that plurality, to enable a process of collective choice and government of action in the public interest to take place."
They reassert the need to establish publicness which has been stifled by private interest.
Public and collective concern, as opposed to the individual, is necessary for active citizenship.
Individuals need to help one another engender a common concern for establishing goods and services we all require, creating a community.
"A political community which has the capacity to make public choices, providing a public which is able to assemble, to enter into a dialogue and decide about the needs of the community as a whole is the uniquely demanding challenge facing the public domain."
The public domain, therefore, can do more than it currently does to secure the active citizenship of individuals.
On the one hand it can emphasise the regular contributions citizens/groups make to collective life, while on the other it can reduce their participation to elections.
A growth in participatory democracy is therefore desirable, emphasising the needs of collective choice and government.
There is a requirement for public institutions which are just.
Poverty, ignorance through lack of education, and idleness borne of enforced unemployment are all barriers to participation.
Citizenship needs to be developed by extending rights to women, blacks, and other minorities.
A balance is required between representative and participatory democracy to allow for a more active citizenship.
Bogdanor said "local government needs to be made both more visible and also more exciting.
This can be done by giving the electorate a more direct influence upon the policies adopted in the locality".
Stewart/Ranson therefore argue for four related developments to allow for greater popular participation.
Firstly, enlarging the consultative processes; it is essential for a government to appeal to the public and to allow citizens to express their opinions and needs.
This is crucial.
It will make the government more accountable and genuinely representative.
Secondly, ensuring the disadvantaged and less powerful are not excluded and are given a chance to express views by such methods as public inquiry techniques, workshops, seminars, and local opinion polls.
Third, including public protest, pressure and discussion as &bquo;an active politics of the community&equo; to strengthen government.
Government needs to listen to the public and respond accordingly.
Finally, delegating power to challenge elected representatives through the courts.
Everyone should have equal protection by the law and the ability to appeal against local authority.
A referendum and a Bill of Rights would help prevent an "elective dictatorship".
Active citizenship, they believe, requires active participation in policy-making and taking responsibilities.
It is necessary to be involved in making political decisions, without having to fight for them.
Open government is also essential; a citizen has the right to know why and how decisions are made.
Citizens need to take their place on governing bodies and community councils.
Citizen discussion panels need to explore issues such as public transport, community care, or response to unemployment.
Public institutions need to be democratised and decentralisation is necessary in the public domain in order to achieve active citizenship.
Pateman also believes political obligation cannot be given expression in the liberal democratic institutions.
This can only occur in participatory or self-managing democracy.
Politics today seems to be a subordination to the judgement of others - "citizens collectively exercise political authority over themselves in their capacity as private individuals".
Participatory democracy is necessary for "a pluralistic citizenship, self-conscious membership, wide-ranging commitments".
As Walzer has said, "however useful we may be or want to be, our usefulness is not organised or given expression within the political community.
This may well constitute an entirely sufficient argument for its radical reconstruction".
Democratic autonomy, which would specify conditions for the active participation of citizens in decisions which affect them, is Held's ideal.
Democracy should be organised in such a way as to become a central feature of people's daily lives - "autonomy would require the creation of a system of collective decision-making which allowed extensive involvement of citizens in public affairs".
People therefore have extensive opportunity to make decisions that determine what matters.
State and civil society should be separated with a diversity of power centres; state institutions becoming effective, accessible and accountable.
This would involve freedom of information and relocation of civil services to regions along with widespread decentralisation.
An increase in transfer of power from the state to local government would also make institutions more accountable.
It would also involve establishing the sovereignty of parliament over the state, with all citizens in society having control over parliament.
This all signifies a breaking down of powerful interest groups such as trade unions and large corporations.
At the same time new avenues should be pursued to promote active citizenship, socially owned enterprises, independent media, and health centres which allow new members control of the resources at their disposal without interference from the state.
At present, few opportunities exist for citizens to act as participants in public life.
Democratic autonomy seeks to redress this state of affairs by creating opportunities for people to establish themselves "in their capacity as being citizens".
Thus a dual democratisation, encompassing the state and civil society, is necessary to provide active citizenship.
Active citizenship would also be possible through democratic socialism as advocated by Keane/Plant in order to overcome anti-democratic forces which are endemic - "the most visible and alarming of these... is a failure of the democratic imagination itself".
By democratising civil society and the state, power can be distributed to public spheres against old bureaucracy, surveillance, red tape, and state control.
Keane says we need "a differentiated and pluralistic system of power where decisions of interests to collectives of various sites are made autonomously by all their members".
By limiting state action and expanding autonomous social life, civil society will then have the potential to become a non-state sphere comprising a plurality of public spheres.
Two interdependent functions are necessary &mdash;; expansion of social equality/liberty and restructuring/democratising of state institutions.
Keane believes social organisations such as self-governed trade unions, enterprises, housing cooperatives, refuges for battered women, and independent communications media will ensure political representatives are kept under control.
"Civil society should become a permanent thorn in the side of political power".
All are crucial to check bureaucratic regulation, state surveillance, and invisible government which have increased since 1945.
Local initiative and pluralism, growth of decision-making centres, and space for individual/group autonomy would promote an active citizenry as opposed to individuals set apart from the authoritative state which has grown under the Conservatives in the last twelve years.
For too long, the state has had complete control of individual lives, freedoms, and powers.
This needs to be overcome.
Our lives are overpowered by state institutions whereas the distribution of political power should be installed in many self-regulating institutions.
Parliament is seen as a rubber stamp for decisions made elsewhere.
The House of Commons, too, is seen as a mainly consultative body.
Electoral accountability is a myth.
For active citizenship, therefore, Labour must unite all individuals and search for common values of citizenship rather than an individual ethos.
The state has to be an enabling power in Plant's view: to secure a sense of real freedom for individuals by increasing abilities, opportunities, and resources for emancipation.
Public provision is necessary to live "an autonomous and purposive life".
Needs have to be met to enable active citizenship as without education, welfare, health care, self-respect, and law we cannot act in a way we would like to.
Active citizenship requires an opportunity to participate in normal patterns of life, to overcome unemployment, poverty, homelessness, create equal education and equal opportunities, and strengthen social and political rights which have been under attack.
Plant wants to promote activity and empower individuals.
He feels the need to put power in the hands of consumers so that institutions reflect rights and needs: "the empowerment of the citizen as consumer needs to challenge professionals/managers... to enable a society of active citizens rather than experts".
In welfare this would involve the use of cash rights, entitlements, and cash surrogates to empower individuals.
There is, therefore, a broad consensus that our present system is not designed to engender widespread active citizenship.
Participatory democracy does need to be strengthened to promote active citizenship at local and national levels.
Alongside this, there is a need to break down our bureaucracy, thereby reducing the power of the state and the stranglehold it exerts on the nation's political activities.
The government should express the will of the people: act for them and with them.
Radical reform is necessary to enable those who do want to be active in political life to do so.
In a so-called democracy, active citizenship means nothing when we are unable to resist changes to the welfare state, privatisation, or any government policy imposed on us without our consent.
What should replace the poll tax? Should the citizens not decide?
As Rousseau said, "yet it may be asked how a man can be at once free and conform to wills which are not his own".
Not all the interests of our citizens have been looked after over the last twelve years.
Policies have brought inequalities, prompting individuals to become self-seeking, rather than fostering an active citizenship built on common values and the basic necessities required by us all.
This situation needs to be redressed in order to stall the divisive effects of homelessness, poverty, and unemployment.
It does seem today, as Mill once said, that "protection is needed against the tyranny of prevailing opinion... the tendency of society to impose its own ideas and practices" and against "an increasing inclination to stretch unduly the powers of society over the individual by the force of opinion and even by that of legislation".
An apt quote for today's democracy in which citizens are powerless to resist decisions and policies which are implemented without any reference to their needs or opinions.
In our society others are doing our thinking for us, a concept Mill and Rousseau wanted to prevent.
The road towards more active citizenship will be one fraught with difficulties, but it is necessary if we want to increase our freedom and status as thinking, free individuals.
Much, of course, depends on the people and whether they wish to participate, but the opportunity to do so should be strengthened.
It would be preferable if we had a system where people could become more involved in important government decisions, be able to register an opinion on government policy, and show the government the way in which public opinion is moving.
This is important; we should be able to have an influence over institutions and policies which affect our lives.
There should also be participatory democracy at local/individual levels, with local government perhaps transferred to popular assemblies; devolution for Scotland and Wales.
This would increase efficiency and benefit the respective communities.
Democracy in the work-place with firms under workers' control is also important.
Active citizenship is still an ideal, but we must not lose the vision of the classical theorists, regarding participatory democracy.
It perhaps will never be realised, as it is questionable how many people do want to be active in politics.
However, the vote is an old freedom and we require easier access for those wishing to act by strengthening civil, political and social rights, and democratising centralised state power.
The barriers to active citizenship are many.
They need to be chipped away.
After each election we are left simply with the power to obey.
Our opinions mean nothing as the next government sets out on a course of action with which people directly have little to do, although it may indirectly and adversely affect their lives.
It is this lack of control which we have to think of when arguing on the merits of strengthening participatory democracy to achieve active citizenship.