
Examiners’ Report: Honour School of Mathematics, Part B:

Trinity Term 2006

October 12, 2006

Part I

A STATISTICS

1. Numbers and percentages of three year candidates in each class

             Number                       Percentages %
        2006 (2005) (2004) (2003)    2006  (2005) (2004) (2003)

I         30   (19)   (26)   (29)    39.5  (26.8) (25.7) (22.8)
II.1      31   (35)   (49)   (64)    40.8  (49.3) (48.5) (50.4)
II.2      11   (13)   (18)   (27)    14.5  (18.3) (17.8) (21.3)
III        4    (4)    (5)    (2)     5.2   (5.6)  (5.0)  (1.6)
P          0    (0)    (1)    (4)     0.0   (0.0)  (1.0)  (3.1)
F          0    (0)    (2)    (1)     0.0   (0.0)  (2.0)  (0.8)

Total     76   (71)  (101)  (127)   100    (100)  (100)  (100)

Table 1: Numbers in each class

The figures in Table 1 are for the 76 candidates who were classified on the original class list. A further 81 four-year candidates took Part B, and 80 were awarded an Honours Pass.

Subsequently, four candidates (three Firsts, one Upper Second) were declassified because the information received by the Examiners that they wished to be classified was found to be in error. Up to the time of writing this report, a further two candidates (one First, one Lower Second) have been declassified because they decided after the completion of the examination that they wished to return for Part C. The one four-year candidate who was not awarded an Honours Pass has been awarded a Pass in the three-year degree.

One candidate, who took only part of the examination, is excluded from the figures.

2. Vivas

There are no vivas in Mathematics.


3. Marking of scripts (of papers which are the responsibility of the Mathematics Part B examiners)

The scripts for Extended Essays (BE and OE), History of Mathematics (O1), Undergraduate Ambassadors Scheme (N1), and the three philosophy papers (N101, N102, N122) were double marked. As in previous years, all other papers were single marked according to a detailed pre-agreed marking scheme.

B NEW EXAMINING METHODS

None

C CHANGES IN EXAMINING METHODS AND PROCEDURES CURRENTLY UNDER DISCUSSION OR CONTEMPLATED IN THE FUTURE

In future all candidates will be classified after the Part B examination, on the basis of Parts A and B. Those who continue to Part C will receive a separate classification for Part C.

Several changes of detail are proposed or discussed in Part II of this report.

D COMMUNICATION WITH CANDIDATES

The candidates were given detailed information on the form of the examination and the basis of classification to be used in the examination. Two circulars were sent in hard copy form to each individual candidate; they were also posted on the Mathematical Institute website. [These circulars are attached.]


Part II

The Examiners record their very warm thanks to:

• The Examiners from the other Honour Schools who set and marked papers for Mathematics Part B;

• All Assessors;

• The Mathematical Institute staff, particularly Yan-Chee Yu and Catherine Goodwin;

• Waldemar Schlackow for his work on the database, and the graduate students who acted as checkers for the marks entry process;

• The staff of the Examination Schools;

• Glenys Luke for support and advice.

We particularly commend the heroic work of Yan-Chee Yu and Waldemar Schlackow in getting an extremely complicated process to work without hitches during the examination period. It was unusually difficult for them since neither of them had any prior experience of such an exercise (and Catherine Goodwin had no experience of Part B). We trust that the administration of the Mathematical Institute will keep in mind the advantages of continuity when assigning duties for future years.

The Internal Examiners record their warm thanks to Professor MacCallum and Professor Rawnsley for their prompt and helpful comments on the question papers and for their helpful advice and attention to detail during the classification meetings.

A GENERAL COMMENTS ON THE EXAMINATION

Overview of the Examination

The examination generally ran smoothly, at least in the eyes of candidates and other outsiders. The most public problem was that there was again confusion about which candidates should be classified after Part B. The Examiners accept no responsibility for that: the process that had been agreed between us, the Mathematical Institute, the Examination Schools and the Proctors' Office for gathering and updating the information was ruled out by another branch of the University administration, who provided us with a list generated in a different way. Happily, the introduction in 2007 of universal classification after Part B means that this will not be a problem for future Examiners, although it may still induce headaches for administrators.

Nevertheless, there were a number of aspects of the examining process which seemed to be unnecessarily laborious or otherwise unsatisfactory for the Examiners and their administrative support in the Mathematical Institute. Most of these involved one or more of the following features:

• There is conflict between the old system of organisation by individual Honour School and the new system of papers shared by many Honour Schools. Specific topics affected by this include the assignment of USMs, the appointment of assessors and rules concerning calculators.


• There is conflict between the old system of classification based on academic judgement of quality and a new system based on numerical algorithms and rules. See comments in the section on Classification.

• The Teaching Committee of the Mathematical Institute, and its officers, have shown a tendency to make decisions about the examining process at a level of detail where, in our opinion, the experience of recent and current Examiners would be invaluable. Specific topics include the database arrangements, weak paper rules, rounding of averages, and restricting high marks.

• The support offered by the Examination Schools has diminished in recent years. Specific matters include the publication of the timetable, the division and delivery of scripts, the unavailability of a room for meetings and the preparation of the class list.

Setting of Papers

A new procedure, putting more responsibility on lecturers, worked well. The lecturers more or less met the deadlines (except for B2a and B5b); the Examiners applied a light touch, but corrected errors and ambiguities and gave advice to new lecturers. The production of the front page remained the responsibility of the Examiners and the Institute's administration. No errors or significant ambiguities in the mathematics papers came to light during the examinations or the marking process.

The Examiners made specific requests to the setters of some papers to make the hardest parts of the questions harder than the 2005 versions, and to the setters of other papers to make the first parts of the questions easier than the 2005 versions. When we came to consider the 2006 marks, we found that there were fewer problems than in 2005 with papers on which there were unexpectedly many high marks, but more problems with papers with unexpectedly many low marks.

We recommend that the 2007 Examiners ask

• the setters of B9 (especially B9b) and B10b to make the hardest parts of their questions harder than the 2006 questions;

• the setters of B1, B3 and B8 to bear in mind the unexpectedly small numbers of high marks on the 2006 papers;

• the setters of B2 and B4 to make the first parts of their questions easier than the 2006 questions.

Database

At the start of the year, the Institute's academic officers told the Chairmen of Examiners for all four years that the Institute would organise a database to cover all the examinations. Questions about who would be responsible for ensuring that it worked correctly went unanswered, and eventually it became apparent that the Examiners would have to do this and that the burden would fall on the Part B Examiners. The Institute appointed Waldemar Schlackow to operate the database (this proved to be an excellent choice).


Although the 2005 database had been further improved at the suggestion of the 2005 Examiners, he decided to prepare a completely new database. While this was probably the right decision in the long term, we were very disappointed that we had to repeat all the checking and testing which had been carried out on a new Part B database in 2005. The amount of work involved is enormous: for Part B alone the database covers 57 different papers, with many different rubrics and different arrangements for assigning marks; entries for over 200 candidates with their marks on Part A papers and later on Part B questions; calculation of various scaling functions and averages; various special rules about classification in the different Honour Schools, weak papers, rounding and so on; alerts of rubric violations; alerts to draw attention to candidates who might be anomalous; and facilities to produce mark sheets, check sheets, numerous lists for use in the Examiners' meetings, notices of marks to colleges and statistics for this report. Much of the necessary information has to be collected from outside the Mathematical Institute.

The development, checking and testing of the database were carried out from January to May, and to the best of our knowledge it operated in June exactly as intended. The testing was made easier by having the real 2005 marks available in depersonalised electronic form. Although the Institute proposed to delete the marks from the database within 40 days, we recommend that the marks are kept in depersonalised electronic form until it is clear that no further changes will be made to the database for the following year. (Keeping the marks in this way is permitted. Since the detailed marks have to be kept in hard copy for a year, we do not know the purpose of deleting the electronic version earlier than that.)

The Institute is currently appointing a database administrator, and we understand that the new appointee will take on next year the role taken on by Waldemar Schlackow this year. In order to avoid the need for Examiners to check and test yet another new database, we regard it as essential that the Institute should instruct the new appointee that the 2006 database should be used again (with any agreed modifications). It should also be made clear that the database should be used to produce statistics in a format suitable for this report within a few weeks of the end of the examination.

Timetable

The examination began with Paper N102: Knowledge and Reality on Wednesday 24 May, and ended with Paper B11: Communication Theory on Thursday 15 June.

Numbers offering the various papers

Paper                                                   2006  (2005)

B1    Foundations: Logic and Set Theory                   34    (42)
B1a   Logic                                               12     (9)
B1b   Set Theory                                           5     (9)
B2    Algebra                                             44    (42)
B3    Geometry                                            16    (13)
B3a   Geometry of Surfaces                                 1     (5)
B3b   Algebraic Curves                                     2     (2)
B4    Analysis                                            33    (45)
B4a   Analysis I                                           9     (2)
B5    Applied Analysis                                    86    (66)
B5a   Techniques of Applied Mathematics                   14    (21)
B5b   Applied Partial Differential Equations               5     (3)
B6    Theoretical Mechanics                               60    (47)
B6a   Viscous Flow                                         8     (3)
B6b   Waves and Compressible Flow                          3     (1)
B7    Electromagnetism, Quantum Mechanics
      and Special Relativity                              49    (42)
B7a   Quantum Mechanics and Electromagnetism              10    (10)
B8    Topics in Applied Mathematics                       83    (71)
B8a   Mathematical Ecology and Biology                    20    (14)
B8b   Nonlinear Systems                                    2     (5)
B9    Number Theory                                       37    (35)
B9a   Polynomial Rings and Galois Theory                  13    (10)
B10   Martingales and Financial Mathematics               15    (32)
B10a  Martingales through Measure Theory                   4     (6)
B10b  Mathematical Models of Financial Derivatives        44    (36)
B11   Communication Theory                                22    (33)
C3.1  Lie Groups and Differentiable Manifolds              1      –
C3.1a Lie Groups                                           2      –
C3.1b Differentiable Manifolds                             1      –
C5.1a Partial Differential Equations
      for Pure and Applied Mathematics                     0      –
C9.1a Introduction to Modular Forms                        0      –
BE    Mathematical Extended Essay                          2    (10)
O1    History of Mathematics                               7    (19)
OBS1  Applied Statistics                                   0     (4)
OBS2  Statistical Inference                                0     (1)
OBS3  Stochastic Modelling                                 7     (3)
OBS3a Applied Probability                                  6     (5)
OBS4  Actuarial Science                                   42    (38)
OCS1  Functional Programming,
      Data Structures and Algorithms                       7     (4)
OB21a Numerical Solution of Differential Equations I       3     (6)
OB21b Numerical Solution of Differential Equations II      2     (6)
OB22  Integer Programming                                  2     (3)
OE    Other Mathematical Extended Essay                    1     (1)
N1    Undergraduate Ambassadors' Scheme                   11     (8)
N101  History of Philosophy from Descartes to Kant         0     (3)
N102  Knowledge and Reality                                2     (1)
N122  Philosophy of Mathematics                            3     (0)

Table 2: Numbers offering the various papers

Determination of University Standardised Marks

We used essentially the same algorithm for assigning USMs as in 2005. There were a few small changes of detail, intended to make the algorithm more robust and fairer for classification purposes and to make it easier for the Examiners to adjust parameters during the marks meeting.

We are aware of some unease about using the Part A marks to quantify the strength of the field of candidates on individual Part B papers. We have reservations about it, but we have not heard of any better method. We used the class borderlines to divide the candidates into three groups according to their Part A marks (this was a change from 2005), and the algorithm for scaling a paper took account of the numbers in each group taking that paper (as in 2005).

There has also been debate whether or not the corners of the piecewise linear scaling functions should coincide with the class borderlines. The main argument in favour is that those borderlines are the natural places for the corners; the main argument against is that a corner at a borderline introduces nonlinearities which can be unfair when classification is based on the average USM. We decided (as in 2005) that we would not put corners at the borderlines. However, we were aware that the 2005 Part A Examiners had put their corners at the borderlines. We do not know what the 2006 Part A Examiners did, but it would be desirable if there were uniformity between Parts A and B.

As usual, we used three corners for our scaling functions: one at a USM above 70, one below 60 and one near the bottom of the field. The exact position of the bottom corner is not significant: its role is to ensure that the scaling function passes through the origin, but very few marks are affected by it. The exact position of the middle corner is also not very important. The corner is needed to avoid dangerous extrapolation effects, but it is normally not very sharp, and we found that classifications were stable when we experimented with different positions of the corner on the 2005 and the 2006 marks. The top corner is much more significant. As explained above, we did not want it to be too close to the class borderline on grounds of fairness, but the marks of candidates well up in the First Class fall rapidly as the corner is moved up from 70 (because the scaling functions are usually very flat between 60 and 70). We did not wish to depress those marks too severely. Experiments on both the 2005 and the 2006 marks indicated that moving the corner down to 72 did not distort classification. So we arrived at a provisional view that the corners would be set at 72, 56 and 37. In the end, the top corner was moved to 71 for a reason which is explained in the section on Classification.

When we considered individual papers, we made a number of adjustments according to the following principles:

(i) A raw mark above 80 should attain a USM above 70.

(ii) A raw mark below 60 should not attain a USM above 70.

(iii) A raw mark below 30 should not attain a USM above 56.
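As an illustration of the mechanics described above (a sketch, not the Examiners' actual implementation), a piecewise linear scaling function with corners at USMs 37, 56 and 71 can be written as follows; the assumption that the top segment continues to the point (100, 100) is ours:

```python
def scale(raw, c37, c56, c71):
    """Map a raw mark to a USM via a piecewise linear function.

    The function passes through the origin and has corners at
    USMs 37, 56 and 71 (at raw marks c37, c56, c71); above the
    top corner we assume the line runs on to (100, 100).
    """
    corners = [(0.0, 0.0), (c37, 37.0), (c56, 56.0), (c71, 71.0), (100.0, 100.0)]
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        if raw <= x1:
            # linear interpolation on the segment containing raw
            return y0 + (raw - x0) * (y1 - y0) / (x1 - x0)
    return 100.0

# e.g. with the B5 corners from Table 3, scale(70.80, 19.97, 42.30, 70.80)
# returns USM 71 (up to floating-point rounding)
```

The flatness of the segment between the middle and top corners is what makes the position of the top corner so sensitive for candidates well inside the First Class.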

It was not necessary to adjust any scaling function to increase the USMs of candidates with low marks.

An adjustment was made to the scaling function of most of the papers according to one (but only one) of these principles. In addition, we considered whether candidates who took half-units in B1-B10 were being treated fairly. The algorithm has as default that the raw mark is doubled and then the scaling function of the full-unit paper is applied. We deviated from this on B10b, B5a and B5b. The candidates taking B10b greatly outnumbered those taking B10, so we treated B10b as a free-standing paper with its own scaling function (which we then adjusted according to (i) above). We felt that the B5a questions had been significantly harder than the B5b questions, so we created separate scaling functions for B5a and B5b by adapting the B5 function.

A later stage was to adjust the heights of the scaling functions (see the section on Classification).

The final positions of the corners on each paper are shown in Table 3 below. For each paper, N denotes the number of candidates taking the paper who played a role in the calculation of the scaling function (i.e., candidates in Mathematics or Mathematics & Statistics); N1, N2, N3 are the numbers of those candidates whose Part A average marks were in the ranges (69,100], (59,69], [0,59], respectively. C71, C56, C37 are the raw marks which were mapped to USMs of 71, 56, 37, respectively. An asterisk denotes a corner which had been adjusted by the Examiners. A dagger denotes a half-unit paper with raw marks out of 50. The scaling function for each other half-unit paper was the same as for the parent paper after doubling the raw mark.

Paper    N   N1  N2  N3    C71     C56     C37

B1      33    9  16   8   63.00*  39.80   18.79
B2      46   19  21   6   76.00   30.00*  15.00*
B3      16    9   6   1   63.00*  33.00   15.58
B4      34   16  14   4   80.00   30.00*  15.00*
B5      86   33  37  16   70.80   42.30   19.97
B5a†                      31.50*  18.00*   9.00*
B5b†                      37.00*  22.00*  11.00*
B6      60   27  25   8   75.60   41.10   19.41
B7      49   14  26   9   81.80   45.80   21.63
B8      84   28  40  16   63.00*  31.30   14.78
B9      36   15  15   6   81.00*  36.80   17.38
B10     18   11   4   3   78.80   42.80   20.21
B10b†   60   21  26  14   40.50*  18.20    8.59
B11†    24   12   9   3   36.00   21.00    9.92
OBS1    24    8  10   6   82.00   52.00   24.56
OBS2†   15    6   5   4   40.20   23.70   11.19
OBS3    27    8  13   6   78.40   30.00*  15.00*
OBS4    64   21  29  14   69.40   43.90   20.73
OCS1     7    2   4   1   81.00*  42.30   19.97

Table 3: Positions of corners of scaling functions


Classification

The raw marks were slightly lower overall than in 2005. Since the 2005 Examiners had been caused difficulties by several papers with many high marks, and we had asked some setters to set harder parts to their questions, a reduction in high marks did not necessarily indicate that the 2006 candidates were inferior. However, there were slightly more low marks than in 2005, and that could not be attributed to any deliberate action on our part.

In previous years the four-year candidates as a group had performed considerably better than the three-year candidates, but this year was different. At the time of our classification meeting, any difference in quality between the two groups was almost imperceptible. The subsequent transfers did establish a slightly higher quality in the four-year group, but the difference was very much smaller than usual. This is the reason for the sharp increase in the proportion of Firsts reported in Table 1. Another consequence is that it would be very misleading to add the 2006 Part B results to the 2006 Part C results: a better indication is given by adding the 2005 Part B results to the 2006 Part C results. Since candidates may ask to be classified retrospectively and all candidates will be classified after Part B next year, we did consider the provisional classifications of four-year candidates, with particular attention to borderline cases. The details of the process and outcome follow, including a summary of provisional classifications of all our candidates.

The report of the 2005 Part A Examiners gave provisional classifications as follows:

Firsts: 59 Upper Seconds: 71 Others: 31 Total: 161

We found that our field of candidates had Part A marks as follows:

Firsts: 66 Upper Seconds: 67 Others: 24 Total: 157

We found the following explanations for the differences:

(a) There were 5 candidates who took Part A in 2005 but did not take Part B in 2006, and one who took Part B in 2006 but had not taken Part A in 2005.

(b) The 2005 Part A Examiners had not used the rounding-up rule when counting their provisional classifications.

The algorithm uses two parameters which the Examiners can change to adjust the USMs of candidates around the top two borderlines. The natural values for those parameters are 70 and 60, and our initial runs of the algorithm on the 2006 marks used those values. The first run on the 2006 marks, before individual papers were adjusted, produced several more Firsts and several fewer Others (Lower Seconds and below) than the provisional classifications from Part A. This was a considerable surprise to us: the opposite had happened when we tested the database on the 2005 marks.

Although the adjustments to individual papers were made for objective reasons and the actual changes in USMs were all small, the majority of them happened to be downwards, and the number of provisional Firsts fell and the number of Others rose when those adjustments were made. We also decided to change the values of the two parameters to 69 and 59 (given the rounding-up rule these are really the neutral values). This meant that about 75% of the marks went down by 1, and particularly large numbers of marks went down from 70 to 69 and from 60 to 59; in order to avoid depressing the high marks by a much larger amount, we moved the top corner from 72 to 71. At this level we would have been content with the marks if an average of 70 had been required for a First, but the rounding-up rule means that an average of 69.025 is sufficient. We considered the possibility of changing the first parameter to 68, but we felt that this would depress the marks of candidates in the middle of the First Class by too much. It would also depress the marks on our papers of candidates in other Honour Schools which might not be rounding up. So we set the general parameters at 69 and 59.
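The effect of the rounding-up rule on the borderline can be seen in a short sketch (the helper function and the First borderline of 70 are our assumptions for illustration; the exact average being a rational with denominator 40 is noted in the section on Rounding):

```python
import math
from fractions import Fraction

def rounded_average(total_usm, denominator=40):
    """Average USM as an exact rational (denominator 40 in this
    examination), rounded UP to the next integer as the
    conventions required."""
    return math.ceil(Fraction(total_usm, denominator))

# 2761/40 = 69.025 rounds up to 70 and so reaches the (assumed)
# First borderline, while an exact average of 69 (2760/40) does not.
```

This is why an average of 69.025, rather than 70, was sufficient for a First, and why the Examiners set Part B marks lower than they otherwise would have done.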

Consideration of individual Mathematics candidates led to a handful of further changes of individual marks. If all candidates had been classified after Part B, the final outcome would have been:

Firsts: 64 Upper Seconds: 68 Lower Seconds: 20 Thirds: 4 Pass: 1

The proportion of Firsts here is slightly lower than in 2005.

We still had reservations about the outcome at the First/Upper Second borderline because Firsts were, or would have been, awarded to some candidates who seemed not to meet the descriptive criteria, although they met the numerical criteria. In theory we could have made much greater adjustments to the marks of individual candidates, but we did not feel that would be practical or fair. It has to be remembered that we could not change the Part A marks, so reducing all the Part B marks by 1 changes the overall average by only 0.6.
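The arithmetic behind that 0.6 figure, assuming Part A carries 40% of the average and Part B 60% (a split we infer from the figure itself, not one stated explicitly here):

```python
def overall_average(part_a_avg, part_b_avg, part_b_weight=0.6):
    """Weighted average of Part A and Part B average USMs,
    assuming a 40/60 Part A / Part B split (our inference)."""
    return (1 - part_b_weight) * part_a_avg + part_b_weight * part_b_avg

# Lowering every Part B mark by one point moves the overall
# average by only 0.6, since the Part A component is fixed.
```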

We feel that the marking and classification arrangements do not reflect the descriptive criteria for a First, and we ask the Teaching Committee to reconsider whether this could be improved. Possibilities would be to require for a First that strictly more than one Part B paper should have a first-class mark, or to build into the marking some system of bonuses for near-complete answers to individual questions.

Table 4 records rankings of three-year candidates by average USM; this table was sent to colleges with the marks of individual candidates.

Av USM   Rank       %

  90       1      1.32
  87       2      2.63
  86       4      5.26
  85       5      6.58
  84       6      7.89
  83       7      9.21
  81       8     10.53
  79      11     14.47
  78      12     15.79
  77      13     17.11
  76      14     18.42
  75      19     25.00
  74      20     26.32
  73      22     28.95
  72      24     31.58
  71      26     34.21
  70      29     38.16
  68      31     40.79
  67      34     44.74
  66      37     48.68
  65      42     55.26
  64      46     60.53
  63      48     63.16
  62      52     68.42
  61      56     73.68
  60      59     77.63
  59      62     81.58
  57      68     89.47
  53      69     90.79
  50      71     93.42
  49      73     96.05
  44      75     98.68
  41      76    100.00

Table 4: Numbers and percentages of three-year candidates scoring a given USM (or higher)

Teaching Committee

We appreciate that the Teaching Committee has a duty to oversee examinations and to coordinate between Parts A and B (and C), and in particular that it was the Committee's job to put in place arrangements for examinations in the new structure. Nevertheless, we are concerned that the Committee is inclined to make changes without consulting or informing the Examiners, and to involve itself in matters which are the responsibility of the Examiners overseen by the Proctors' Office. The Proctors' Notes of Guidance indicate that the Examiners should submit the conventions for approval by the committee each year. While this is not entirely practical in the case of a two-part examination, it does suggest that recent or current examiners should have some role to play in determining conventions for the corresponding examination in future years; indeed it seems reasonable that their experience should be drawn on. In our case the committee presents Consolidated Conventions to the Examiners. Sometimes the Examiners are given an opportunity to comment, but not in a systematic way. Some of the changes made or proposed during this year seemed to us to be undesirable or impractical, and some anomalies were created because the committees responsible for the joint schools had not been consulted. Even when the Committee's decisions are justified, it is essential that the decisions are communicated accurately and in full to the Examiners. Unfortunately, the Consolidated Conventions issued by the Committee in Michaelmas Term 2005 did not mention either the weak paper rules or the requirement that averages should be rounded up for classification.

In our opinion, the committee should adopt the following principles when considering changes to examination procedures or conventions:

• the committee should consult the current and/or immediate past examiners before making any decision;

• the committee should consult the committees responsible for the three joint schools before making any decision; if a change is to be made in Mathematics but not in some of the joint schools, a clear reason should be given;

• if the committee decides to make any change, the examiners who will first have to apply the changes should be informed in writing and given copies of any statement made to prospective candidates;

• no changes should be made after the end of Michaelmas Term.

Weak paper rules

The weak paper rules had been suspended in 2005 and were not included in the Consolidated Conventions for 2006. It was plausible to us that they had been abolished, because the corresponding rule in Mods had just been abolished and the new Part C classification rules include no such rule. When we enquired, the Director of Undergraduate Studies told us that they were still in place, so we operated them. However, none of our candidates was anywhere near being affected by them. (The same was true in 2005 when the rules were not in place, so this effect cannot be attributed to the presence of the rules.) We discovered that Mathematics & Statistics, while intending to follow our procedures and conventions, were not aware of these rules and could not apply them (and there are no such rules in our other joint schools). We think that the weak paper rules in this examination are arbitrary, unfair and pointless (because candidates' tactics are not affected by them), they create extra work for examiners, and we cannot see why they should exist only in Parts A and B. We recommend that the weak paper rules in Parts A and B are abolished.

Rounding

As proposed by the 2005 Examiners, we calculated average marks directly from the individual USMs, and they were shown to two decimal places in lists produced by the database for our use (the exact average is a rational number with denominator 40). To determine classification, we rounded the average up. That was not our choice: we were obliged to do it because the candidates had been told by the Mathematical Institute that it would be done, and it appears to be a rule which is in their favour. In practice we set the Part B marks lower than we would have done without rounding-up, and this affected all candidates adversely. Moreover, we found it much more difficult to classify candidates with rounding-up in place than we would have done with no rounding. Although Mathematics & Statistics intended to follow our practices and conventions, they were not expecting to round up (but we think that Mathematics & Computer Science were). We recommend that rounding-up is abolished. Since candidates starting their second year in October 2005 were informed that rounding-up would be used, it will have to be used for classification in Parts B and C in 2007, and there should be consultation with the joint schools before a final decision is made. In order to avoid a further year's delay, we recommend that the supplement to the course handbook for students starting Part A in October 2006 should not say that rounding-up will be used in 2008.

Although averages do not have to be integers, individual USMs must be. When applying our scaling functions to assign USMs, we used symmetric rounding. We believe this to be the natural method and to be the method favoured by the Mathematics Teaching Committee, but not by the Computing Laboratory.
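A sketch of symmetric rounding as we understand the term (halves rounded away from zero), contrasted with round-half-to-even; the helper name is ours:

```python
import math

def symmetric_round(x):
    """Symmetric rounding: halves go away from zero, so 68.5
    becomes 69 and -68.5 becomes -69."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

# Under round-half-to-even (Python's built-in round), 68.5
# instead becomes 68, since halves go to the nearest even integer.
```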

Rubrics

We are puzzled why candidates in Part C are allowed to submit answers to an unlimited number of questions while Part B candidates are restricted to 5 answers (3 on a half-unit paper). The main argument that we have heard in favour of a restriction is that it saves marking, but we do not believe that more than a tiny number of scripts would include 6 answers (about 30% handed in a fifth, or third, answer). Moreover, the restriction sits uncomfortably with the present rules that candidates must hand in everything that they have written and markers must read everything that has been written. Candidates should cross out what they do not want to be read, but sometimes it is not clear whether something is crossed out, and sometimes assessors assign marks anyhow. (Should assessors read what is crossed out or marked as rough work? The guidance to assessors is silent.) Each year there are a few breaches of this rubric and the examiners have difficulty deciding what to do. Some of us think it would be preferable if there were no restriction on the total number of answers, as in Part C. [Note: The problem of dealing with rubric violations was raised in the report of the 2005 Part A Examiners.]

High marks

The Committee had plans to restrict high marks. These proposals were not discussed with examiners, and they were quickly set aside when the Faculty learned about them. However, the minutes state that it was agreed that this matter should be revisited when the new Director of Undergraduate Studies was in post [i.e., after 1 September 2006]. We therefore feel obliged to record our views here.

We think that our marks are in line with other subjects in our Division and with mathematics departments in other universities, and we would not wish to disadvantage our students by artificially depressing marks. We did not think that there were too many high marks (except for marks of 100). In fact there were fewer high marks than in 2005, for various reasons (including the rounding-up rule). For example, 18 candidates averaged above 80 in Part B 2006 (compared with 25 in Part A 2005 and about 35 in Part B 2005). We do not consider this number to be excessive in comparison with other subjects in our Division and other UK mathematics departments. Indeed, we may well have been less generous than them.


There were 12 instances of marks of 100 by Mathematics candidates (and 2 by Mathematics & Philosophy candidates), compared with 7 in 2005. Four of these occurred on B9, while the others were on various half-unit papers, including one on B9a and four on B11 (from only 25 candidates). One would expect half-unit papers to produce disproportionately many marks of 100, but the incidence of them on B11 did seem peculiarly high. We do feel that it was too easy to obtain full marks on B9 and B11. The B9 paper had numerous other high marks but B11 did not.

Low marks

It had been suggested to us that we should consider all exceptionally low marks individually. We did consider all the 14 cases where our algorithm produced a mark below 37. We were satisfied that each of those marks was appropriate and we made no changes.

Extended essays

As usual, extended essays were double-marked. In the past, the two markers had not been involved in reconciliation of the marks and indeed had not even been informed of the final mark (the same applied to fourth-year dissertations, in our occasional experience as assessors). We decided in advance that, after the two markers had independently proposed a mark, we would ask them to try to agree a mark; we would involve a third party only if the two assessors could not agree or in some exceptional circumstance. In the event, only three candidates submitted extended essays and the two assessors were able to agree a mark in each case.

We believe that there are several arguments in favour of our procedure: it appears to be expected by the Proctors' Notes of Guidance (Section 29.3), it is commonly used in other subjects, it ensures that the mark is set by the people who have most knowledge of the essay and the topic, and it should contribute in the long term towards attaining a common standard in the Faculty for the assessment of essays and dissertations. We recommend that future Examiners should always ask the assessors of essays or dissertations to try to agree a mark, after they have made independent proposals.

It was helpful that one of us was also a member of the Projects Committee. This simplified the task of choosing assessors.

Part C papers

Four Part C half-units were available to our candidates: C3.1a, C3.1b, C5.1a and C9.1a; they attracted 3, 2, 0 and 0 candidates respectively. In 2005, each of the courses shared with Section C had attracted considerably more Part B candidates. It may be that this year's topics were less attractive, but it is possible that students were deterred by the courses being labelled as C*.* instead of B12, B13, B14, or by the shared examination paper. We suggest that the Teaching Committee should keep an eye on the numbers of Part B candidates taking Part C courses.

We note that C5.1a is available in both Part B and Part C in 2007, and there does not appear to be a bar to a (hypothetical) candidate who had taken it in Part B offering it again in Part C. We recommend that a regulation be introduced that no candidate may offer in Part C any subject that he or she has already offered in Part B.

Proctors Notes for the Guidance of Examiners

These notes look rather dated in various respects (the Junior Proctor uses email despite the advice in the notes!). Perhaps they could be rewritten and combined with other documents issued by the EPSC, the Examination Schools and the Divisions. Closely related aspects of certain matters (for example, the treatment of medical certificates) appear in different documents without cross-references, so it is hardly surprising if Examiners are occasionally confused.

Examination Schools

Although individual members of the Schools staff were very helpful, the institution may be suffering from the secondment of the Clerk for several years. It was difficult to get guidance or information on matters which would normally have been dealt with by the Deputy Clerks.

We mention here some Schools-related issues which may recur in future years.

(a) Until a few years ago the Schools aimed to publish the timetable by the end of Hilary Term; now they promise it five weeks before the start of the examination. Part B started officially on 22 May but we first saw a draft timetable on 1 May and the candidates were not informed until 9 or 10 May. Many other examinations were similarly affected and the delays were attributed to staff absence.

(b) The Schools could not promise a room for our meetings so we rearranged them for the Mathematical Institute. In practice this had several advantages, giving us easy access to a printer, photocopier and data projector and avoiding the need to transport papers across Oxford for our meetings. We recommend that these meetings are held in the Mathematical Institute in future. The Schools Liaison Officer should be informed of this far in advance.

(c) The Schools' official position was that they would not divide the scripts into Sections a and b for the convenience of our assessors. Our experience varied greatly from paper to paper. Some well-organised senior invigilators found it no problem at all to get the candidates to hand in the two sections separately, but there was confusion on other occasions. The confusion extended beyond the Section a/b division, as candidates taking completely different papers were handing in their scripts at the same time as ours, and our assessors sometimes received scripts from other papers. The scripts from one of our papers were delivered to the Institute although we had said they would be collected by assessors. "Special" scripts were sometimes not available at the advertised times (that may have been the fault of colleges).

(d) It did not matter to us whether the candidates wrote on lined or unlined paper and we left it to the Schools to issue whichever they thought fit. On a few occasions individual candidates asked for the other sort of paper and they were given it.


(e) We decided in advance that the Chairman would not sign the class list until the notices of marks to be sent to colleges had been checked on the morning after our final meeting. In fact the class list was still not ready then and the Chairman had to agitate in order to ensure that it was produced before he left Oxford for a reason which predated the examination by many years.

Organisation by Honour School or by department?

There is conflict between the old system of organisation by individual Honour School and the new system in which papers are shared by many Honour Schools. Arrangements made by us for papers which were our responsibility had to be communicated to examiners and candidates in 6 Honour Schools, and we had to inform our candidates of decisions made by the Department of Statistics and the Computing Laboratory. This is very inefficient, and it will get worse with the increasing cooperation with Physics.

Specific examples include the following:

• Assessors: We sent off a long list of assessors for approval, including many from the Department of Statistics and the Computing Laboratory (but not Philosophy). Those departments also put in long lists, including many of our assessors. This seemed to be wasteful of effort but we were advised that this had to be done. [Our list was referred back to us by the Proctors' Office with a request for explanations. We eventually received approval of the assessors on 21 June, after all assessment had been completed! We wonder whether it is really necessary that three of the busiest people in the university have to give approval.] Subsequent discussions have indicated that we were wrongly advised. Other subjects, such as Philosophy and Modern Languages, send in a single list of assessors from their Faculty to cover all their Honour Schools; they do not seek approval of any assessors from other Faculties who are marking papers in other subjects taken by candidates in their Honour School. Their claims for payments of fees are also made by subject rather than by Honour School (we were able to make this arrangement with Statistics and Physics but not with the Computing Laboratory). We recommend that the Mathematical Institute should seek the agreement of the other departments that each department will arrange the approval and payment of the assessors on their own papers, to cover candidates from all Honour Schools.

• Calculators: They were not permitted on the mathematics papers, but this had to be communicated to candidates in the six different Honour Schools where our papers were available. In FHS Physics, certain calculators are permitted by Regulation in the "Grey Book", and a Physics candidate could have argued that the Regulation takes priority. In addition, we had to find out from the Department of Statistics and the Computing Laboratory whether calculators were permitted on their papers, and then to inform our candidates. We feel that this is a very inefficient system and that it would be better if the use of calculators were decided paper-by-paper rather than by Honour School. Clearly any change on this cannot be decided by a single department, but we suggest that the Mathematics Teaching Committee should draw this matter to the attention of the Division and/or the EPSC.

• Assignment of USMs: The natural procedure would be for all candidates taking a single paper to be assigned a USM by the same examiners, namely those from the department responsible for that paper. This would avoid the need to transfer information about rubrics, marking schemes, raw marks, practicals, etc. from department to department.

Generally this arrangement does apply, but there are some exceptions. The Department of Statistics has not yet developed its own infrastructure, so assignment of USMs on their papers is combined with the mathematics papers using the Mathematical Institute's database. For a technical reason we have to assign the USMs of our candidates on OCS1; and the USMs on OB21 are a matter of discussion between ourselves and examiners from the Computing Laboratory. We do not receive the raw marks on mathematics papers of candidates in Mathematics & Computer Science; instead we give the scaling functions to the examiners from the Computing Laboratory and they calculate the USMs (but we think they use a more generous rounding convention when assigning USMs than we do).

Some of these exceptions may disappear before long. Indeed, shortly before the examination started the Computing Laboratory proposed that we should assign the mathematics USMs of M&CS candidates. We had been under the impression that the Computing Laboratory had agreed earlier in the year to continue the previous arrangement, and lack of time ruled the proposal out for 2006, but it may return for 2007.
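The difference between rounding conventions can matter at a class borderline. The following sketch is purely illustrative: the report does not state either department's actual rule, so both conventions below are assumptions chosen only to show how a "more generous" convention can add a mark to the same scaled score.

```python
import math

# Illustrative only: neither convention is documented in the report.
def usm_truncate(scaled_mark: float) -> int:
    """A conservative (hypothetical) convention: drop the fractional part."""
    return math.floor(scaled_mark)

def usm_round_half_up(scaled_mark: float) -> int:
    """A more generous (hypothetical) convention: round halves upward."""
    return math.floor(scaled_mark + 0.5)

# The two conventions disagree on any scaled mark with fraction >= 0.5,
# e.g. just below a borderline such as 70:
for scaled in (69.4, 69.5, 69.9):
    print(scaled, usm_truncate(scaled), usm_round_half_up(scaled))
```

For 69.5 and 69.9 the two conventions assign 69 and 70 respectively, which is exactly the kind of one-mark discrepancy the passage above alludes to.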

B EQUAL OPPORTUNITIES ISSUES AND BREAKDOWN OF THE RESULTS BY GENDER

Table 5 shows the percentages of male and female candidates for the three year degree in the various classes on the original class list. There were 81 four year candidates taking Part B (59 male and 22 female).

Class    Total             Male              Female
         Number     %      Number     %      Number     %
I          30     39.5       16     21.1       14     18.4
II.1       31     40.8       16     21.1       15     19.7
II.2       11     14.5        8     10.5        3      3.9
III         4      5.3        2      2.6        2      2.6
P           0      0.0        0      0.0        0      0.0
F           0      0.0        0      0.0        0      0.0

Total      76    100.0       42     55.3       34     44.7

Table 5: Breakdown of results by gender
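As a quick arithmetic check (ours, not part of the report), the percentage columns of Table 5 can be reproduced from the raw counts, each percentage being the count over the 76 classified candidates, rounded to one decimal place:

```python
# Recompute the percentage columns of Table 5 from the raw counts above.
# Columns per class: (total, male, female); 76 candidates in all.
counts = {
    "I":    (30, 16, 14),
    "II.1": (31, 16, 15),
    "II.2": (11, 8, 3),
    "III":  (4, 2, 2),
    "P":    (0, 0, 0),
    "F":    (0, 0, 0),
}
TOTAL = 76

for cls, row in counts.items():
    pcts = tuple(round(100 * n / TOTAL, 1) for n in row)
    print(cls, pcts)
# Class I, for example, gives (39.5, 21.1, 18.4), matching the table.
```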


C DETAILED NUMBERS OF CANDIDATES' PERFORMANCE IN EACH PART OF THE EXAMINATION

Summary of Statistics

The following statistics are aggregated over all candidates in Mathematics, Mathematics & Philosophy and Mathematics & Statistics.

Paper B1 - Logic and Set Theory
Number of candidates: 57    Average raw mark: 57.9    Average USM: 68.2

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          15.11     15.11     4.54    47       0
Q2          11.00     12.25     5.35    36       7
Q3          15.26     15.26     4.72    27       0
Q4          13.71     14.30     5.00    23       5
Q5          12.35     13.06     6.36    34       3
Q6          17.04     17.04     6.37    52       0
Q7          12.44     14.00     7.14    15       3
Q8          12.14     15.80     9.10    5        2

Paper B1a - Logic
Number of candidates: 13    Average raw mark: 23.6    Average USM: 60.0

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          13.73     13.73     6.03    11       0
Q2          10.82     10.82     3.12    11       0
Q3          10.50     10.50     6.36    2        0
Q4          8.00      8.00      2.83    2        0

Paper B1b - Set Theory
Number of candidates: 5    Average raw mark: 34.4    Average USM: 76.3

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          19.00     19.00     7.00    3        0
Q2          16.25     16.25     0.50    4        0
Q3          12.50     12.50     7.78    2        0
Q4          25.00     25.00     -       1        0


Paper B2 - Algebra
Number of candidates: 65    Average raw mark: 59.1    Average USM: 66.6

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          13.50     13.82     5.21    44       2
Q2          17.06     17.14     5.39    50       2
Q3          14.17     14.63     8.06    38       3
Q4          16.95     17.38     5.60    21       1
Q5          12.84     12.86     6.41    49       2
Q6          10.45     11.11     4.45    19       3
Q7          15.22     16.81     6.96    32       5
Q8          15.60     15.60     7.30    5        0

Paper B3 - Geometry
Number of candidates: 18    Average raw mark: 59.0    Average USM: 69.8

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          13.00     13.43     2.92    14       2
Q2          12.75     14.17     5.99    6        2
Q3          8.00      9.50      4.00    4        2
Q4          13.25     13.25     2.75    4        0
Q5          7.88      8.33      4.42    6        2
Q6          15.71     16.54     5.66    13       1
Q7          18.19     18.19     2.20    16       0
Q8          15.89     15.89     5.67    9        0

Paper B3a - Geometry of Surfaces
Number of candidates: 3    Average raw mark: 25.3    Average USM: 65.3

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          11.33     11.33     1.53    3        0
Q2          15.00     15.00     -       1        0
Q3          14.00     14.00     -       1        0
Q4          13.00     13.00     -       1        0


Paper B3b - Algebraic Curves
Number of candidates: 2    Average raw mark: 42.0    Average USM: 87.5

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          12.00     -         -       0        1
Q2          21.00     21.00     -       1        0
Q3          19.50     19.50     0.71    2        0
Q4          24.00     24.00     -       1        0

Paper B4 - Analysis
Number of candidates: 36    Average raw mark: 66.0    Average USM: 68.2

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          16.71     16.71     4.61    35       0
Q2          15.23     15.62     6.15    29       1
Q3          17.00     17.00     5.22    19       0
Q4          11.71     15.75     8.90    4        3
Q5          15.79     16.28     6.04    32       1
Q6          13.50     23.00     13.44   1        1
Q7          15.81     18.43     8.56    21       5
Q8          21.00     21.00     -       1        0

Paper B4a - Analysis I
Number of candidates: 9    Average raw mark: 29.1    Average USM: 64.5

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          14.55     14.55     4.91    11       0
Q2          15.56     17.13     6.64    8        1
Q3          16.33     16.33     4.04    3        0

Paper B5 - Applied Analysis
Number of candidates: 86    Average raw mark: 62.5    Average USM: 67.7

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          10.78     11.75     5.67    68       12
Q2          11.79     12.67     6.35    21       3
Q3          5.67      11.33     9.18    3        3
Q4          16.27     16.70     5.16    20       2
Q5          14.26     14.92     4.42    66       10
Q6          16.08     16.39     4.36    72       2
Q7          18.60     18.60     5.70    70       0
Q8          20.29     20.74     3.82    23       1


Paper B5a - Techniques of Applied Mathematics
Number of candidates: 14    Average raw mark: 25.0    Average USM: 62.6

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          11.43     12.31     7.13    13       1
Q2          11.44     10.25     7.75    8        1
Q3          2.50      4.00      2.12    1        1
Q4          15.14     17.33     8.41    6        1

Paper B5b - Applied PDEs
Number of candidates: 5    Average raw mark: 28.6    Average USM: 61.6

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          7.60      9.33      2.97    3        2
Q2          17.75     17.75     3.40    4        0
Q3          21.50     21.50     4.95    2        0
Q4          1.00      1.00      -       1        0

Paper B6 - Theoretical Mechanics
Number of candidates: 60    Average raw mark: 65.3    Average USM: 67.6

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          14.26     14.70     4.28    43       4
Q2          18.55     18.55     5.35    31       0
Q3          17.00     17.27     5.72    45       1
Q4          9.27      10.44     6.45    9        2
Q5          18.22     18.48     5.17    44       2
Q6          12.60     14.50     6.66    12       3
Q7          14.39     15.78     7.60    27       4
Q8          16.31     16.31     4.15    26       0

Paper B6a - Viscous Flow
Number of candidates: 8    Average raw mark: 38.5    Average USM: 78.2

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          16.00     16.00     2.55    5        0
Q2          23.50     23.50     1.29    4        0
Q3          21.00     21.00     4.60    6        0
Q4          8.50      8.00      0.71    1        1


Paper B6b - Waves and Compressible Flow
Number of candidates: 3    Average raw mark: 29.3    Average USM: 63.6

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          16.33     16.33     2.08    3        0
Q2          13.00     13.00     -       1        0
Q3          10.00     -         -       0        1
Q4          13.00     13.00     0.00    2        0

Paper B7 - Electromagnetism, Quantum Mechanics and Special Relativity
Number of candidates: 50    Average raw mark: 66.9    Average USM: 65.2

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          16.69     16.66     3.95    35       1
Q2          17.65     17.65     5.33    48       0
Q3          14.31     14.39     4.63    44       1
Q4          18.00     18.00     6.51    6        0
Q5          16.34     17.82     6.98    33       5
Q6          16.86     17.45     3.90    33       2
Q7          9.00      9.00      -       1        0

Paper B7a - Electromagnetism and Quantum Mechanics
Number of candidates: 10    Average raw mark: 41.6    Average USM: 80.0

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          19.00     18.33     2.35    3        2
Q2          20.60     20.60     5.56    10       0
Q3          20.00     22.14     5.12    7        2

Paper B8 - Topics in Applied Mathematics
Number of candidates: 84    Average raw mark: 49.8    Average USM: 64.8

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          10.75     11.01     4.46    71       6
Q2          13.86     13.90     4.02    78       1
Q3          11.34     11.49     4.09    49       1
Q4          12.37     13.29     5.25    17       2
Q5          10.75     11.17     3.96    41       3
Q6          13.25     13.36     5.35    61       3
Q7          13.31     15.77     6.48    13       3
Q8          7.14      7.83      4.18    6        1


Paper B8a - Mathematical Ecology and Biology
Number of candidates: 27    Average raw mark: 24.5    Average USM: 63.9

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          11.44     11.46     3.92    24       1
Q2          13.04     13.04     4.84    24       0
Q3          9.70      12.17     5.70    6        4

Paper B8b - Nonlinear Systems
Number of candidates: 2    Average raw mark: 18.0    Average USM: 55.1

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          9.00      11.00     2.83    1        1
Q2          12.00     12.00     -       1        0
Q4          13.00     13.00     -       1        0

Paper B9 - Number Theory
Number of candidates: 48    Average raw mark: 70.2    Average USM: 71.0

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          16.20     16.20     5.94    20       0
Q2          13.09     12.55     8.68    22       1
Q3          18.70     19.02     6.45    45       1
Q4          17.24     19.42     5.77    12       5
Q5          16.13     15.83     6.65    36       2
Q6          19.43     20.32     7.74    22       1
Q7          20.50     20.41     4.19    17       1
Q8          17.33     20.21     8.87    19       5

Paper B9a - Polynomial Rings and Galois Theory
Number of candidates: 18    Average raw mark: 70.9    Average USM: 69.9

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          17.83     20.40     7.63    5        1
Q2          13.30     16.43     6.62    7        3
Q3          21.39     21.53     4.07    17       1
Q4          19.33     21.86     6.69    7        2


Paper B10 - Martingales and Financial Mathematics
Number of candidates: 18    Average raw mark: 70.9    Average USM: 69.9

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          19.00     19.33     3.21    6        1
Q2          10.50     10.50     4.28    8        0
Q3          14.75     14.75     7.36    8        0
Q4          15.00     17.83     6.16    6        2
Q5          18.31     18.31     6.57    16       0
Q6          18.80     23.38     9.75    8        2
Q7          18.64     18.92     3.32    13       1
Q8          16.22     17.86     8.36    7        2

Paper B10a - Martingales Through Measure Theory
Number of candidates: 4    Average raw mark: 41.3    Average USM: 76.7

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          20.67     20.67     3.21    3        0
Q2          21.50     21.50     4.95    2        0
Q3          17.50     17.50     3.54    2        0
Q4          25.00     25.00     -       1        0

Paper B10b - Elementary Financial Derivatives
Number of candidates: 61    Average raw mark: 33.2    Average USM: 68.5

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          16.71     16.70     5.42    54       2
Q2          16.52     17.00     8.35    24       1
Q3          14.71     15.86     6.23    35       6
Q4          15.23     18.33     7.81    9        4

Paper B11 - Communication Theory
Number of candidates: 25    Average raw mark: 33.9    Average USM: 71.4

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          14.73     14.29     4.61    14       1
Q2          21.10     21.10     6.01    10       0
Q3          17.50     17.50     6.76    24       0
Q4          10.00     13.00     6.73    2        2


Paper OBS1 - Applied Statistics
Number of candidates: 24    Average raw mark: 68.3    Average USM: 65.1

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          17.50     17.50     4.38    18       0
Q2          7.33      7.85      4.58    13       2
Q3          16.04     16.04     4.44    23       0
Q4          14.18     14.18     4.85    17       0
Q5          15.00     15.00     -       1        0
PR          24.83     24.83     5.27    24       0

Paper OBS2 - Statistical Inference
Number of candidates: 15    Average raw mark: 33.4    Average USM: 66.9

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          21.42     21.42     4.83    12       0
Q2          13.50     13.44     5.50    9        1
Q3          11.82     12.57     6.48    7        4
Q4          12.75     17.50     6.02    2        2

Paper OBS3 - Stochastic Modelling
Number of candidates: 27    Average raw mark: 54.7    Average USM: 64.8

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          15.58     15.58     6.47    26       0
Q2          13.73     14.33     6.17    24       2
Q3          11.00     11.00     10.44   3        0
Q4          2.33      1.50      2.08    2        1
Q5          13.00     14.32     6.98    19       2
Q6          11.10     11.10     7.09    10       0
Q7          9.62      10.36     4.11    11       2
Q8          15.08     16.33     7.64    12       1

Paper OBS3a - Applied Probability
Number of candidates: 10    Average raw mark: 37.9    Average USM: 77.2

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          18.89     18.89     7.32    9        0
Q2          18.70     18.70     4.79    10       0
Q3          22.00     22.00     -       1        0


Paper OBS4 - Actuarial Science
Number of candidates: 64    Average raw mark: 58.8    Average USM: 65.1

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          14.92     16.09     6.35    22       2
Q2          9.68      10.82     5.07    22       6
Q3          14.02     14.45     6.16    53       3
Q4          13.52     13.80     4.89    54       2
Q5          15.88     16.76     5.63    51       5
Q6          14.63     14.89     6.96    54       2

Paper OCS1 - Functional Programming, Data Structures and Algorithms
Number of candidates: 7    Average raw mark: 65.9    Average USM: 65.8

Question    Average Mark        Std     Number of Attempts
Number      All       Used      Dev     Used     Unused
Q1          8.00      10.33     7.07    3        1
Q2          14.86     14.86     3.58    7        0
Q3          13.67     13.67     6.31    6        0
Q4          10.00     11.75     6.40    4        1
Q5          8.67      12.00     3.51    1        2
Q6          8.00      8.00      -       1        0
Q7          9.20      9.20      5.17    5        0
Q8          8.00      9.00      1.00    1        2
PR          17.43     17.43     4.43    7        0


D COMMENTS ON PAPERS AND INDIVIDUAL QUESTIONS

The comments in this section were provided by the markers of individual papers in order to assist the Examiners in assigning USMs. They are reproduced here for information but they do not necessarily represent the views of the Examiners, and they have been edited slightly to remove some comments about individual candidates and other matters.

The comments relate to all candidates taking these papers, not just those in FHS Mathematics Part B. In particular, the Mathematics Part B candidates were only a small minority of candidates on papers OBS1, OBS2, OBS3, OB22, C3.1.

B1: Foundations: Logic & Set Theory Q1-Q4

My intention was always to make the first two questions (on the propositional calculus) quite tough and the second two (on the more advanced material) relatively easy, but the candidates' answers seem to suggest that I may have gone overboard on this. Very few candidates got anywhere near the last part of Q1 and ONLY ONE saw the point of the last part of Q2. Unfortunately, I realised that this was happening too late to change the mark scheme, and so most candidates' marks for Q1 are effectively out of 16 and, for Q2, out of 20. I should say, however, that those 16 marks are very easily obtained, though the 20 is tougher. Anyway, I certainly did achieve my aim in that there were more answers to questions 3 and 4 than I would have expected, and they weren't bad. I was especially pleased to see such a large proportion of candidates showing a good understanding of the Compactness Theorem.

Casting one's eye over the mark sheet reveals immediately that the M and P people have done best, while the maths and comp sci list contains both the best and worst candidates!

B1: Foundations: Logic & Set Theory Q5-Q8

Predictably, a large majority of candidates concentrated their efforts on the first two questions; for these candidates effectively only the easier half of the course was tested. Six candidates, drawn from all three of Mathematics, Mathematics & Computer Science and Mathematics & Philosophy, showed no knowledge of the course whatsoever, and scored almost no marks.

A similar number, also from all three FHSs, attempted just one question from this part of the paper and scored between 6 and 10 marks on that question. At the other end, there was a respectable number of candidates who produced first-class answers to each of one, two, or three questions, demonstrating a very good command of the material.

Presentation was generally poor, even amongst the best candidates. Misspellings, even of technical terms appearing in the questions, were rife.

1. A wide spread of marks. Weaker candidates struggled with the equivalent formulations of (AC).

2. Attempted by almost all candidates. Plenty of cheap marks to be had for definitions, statements, and elementary inductive proofs. The last two parts discriminated surprisingly well.


3. There were not very many attempts at this question, but it discriminated well amongst those who tackled it.

4. Unpopular. A small number of highly competent candidates answered this question very well. Amongst the others attempting it, grasp of the Axiom of Replacement was shaky and their knowledge of ordinals was not good enough for them to realise what facts they needed (and were permitted by the wording of the question to quote).

B2: Algebras Q1-Q4

Question 1 (Algebras of linear transformations of small dimension, basic techniques). This was quite popular, and it had many good solutions. Strangely, very few candidates thought of the fact that a real polynomial of degree 3 must have a real root. Therefore, they could not show that the minimum polynomial cannot have degree 3.

Question 2 (Composition series). This was also a popular question, with many good solutions.

Question 3 (Maschke's theorem). Many candidates had learned the proof and produced competent solutions. [For the last part, some produced solutions of past exam papers which would not quite answer the question.]

Question 4 (Cyclic decomposition theorem, Jordan canonical forms). This question did not have many attempts. Most candidates who did the question produced good answers.

B2: Algebras Q5-Q8

As last year there are three general points to be made. First, the quality of exposition is somewhat higher than I remember it used to be. Of course there were still a number of candidates who wrote the sort of private code, full of mysterious abbreviations and ungrammatical constructions, which some mathematics students think their subject licences them to use. A majority, however, wrote clearly in complete sentences, though even they were inclined to favour the misplaced quantifier, writing such nonsense as "for all 0 < b < a" or "∃1 ≠ g ∈ G". Second, there was better balance this year between numbers of attempts at questions on the pure group theory and questions on characters of finite groups. Third, as last year, there was much evidence that candidates had tried to memorise proofs rather than to understand and internalise them.

Qn 5 on groups of prime-power order and solubility was the most popular. Almost all candidates attempted it and there was a good spread of marks. A surprisingly large number of candidates thought that the existence of an element of prime order could contribute to their proof that the centre of a group of prime-power order is non-trivial, and quoted Cauchy's Theorem: disappointing, because that shows a serious lack of understanding. Cauchy's Theorem is not needed for this; nor is the existence of an element of prime order relevant. Four candidates thought that all groups of prime-power order are abelian. The last part of the question, on groups of order 48, inspired a number of howlers. One candidate had 48 = 3² × 2³ and two had 48 = 2³ × 3. Many candidates thought that a group of order 48 must have a normal Sylow 2-subgroup; several thought that if it had 3 Sylow 2-subgroups then it would have 45 elements of order 2 [sic: not even 2-elements]. Very few candidates realised that a short way to do the last part was to use the action of the group on the coset space of a Sylow 2-subgroup.

Qn 6 attracted a reasonable number of answers. Many candidates lost marks by defining the commutator subgroup to be the set of commutators rather than the subgroup generated by the commutators. Otherwise the bookwork of the first part was well done. In the second part a few candidates claimed that a minimal normal subgroup must be simple. Few candidates saw how to do the last part.

Qn 7 on column orthogonality of the character table was quite popular and was quite well done. A surprising number of candidates claimed (correctly) that the group in question must be Alt(5) and deduced the orders of elements and the simplicity of the group. Since they did not prove their claim they got little credit for this (though they did get a little).

There were only a few attempts at Qn 8 but some of these showed good insight.

B3: Geometry Q1-Q4

Disappointing performance on the whole – there were very few complete solutions.

Question 1, 20 attempts – this subject is usually popular – but the candidates seemed stuck to the concept of the connected sum and couldn't envisage the more general idea of removing discs and connecting up the boundaries in all possible ways. They always tried to reduce it to a connected sum and use induction.

Question 2, 8 attempts. One very good solution, but low marks in general. Riemann surfaces are in general unpopular but that didn't seem to be the case.

Question 3, 9 attempts. Most candidates missed the arc length constraint and worked hard at the two differential equations to get any result at all.

Question 4, 5 attempts. Nobody seemed to recognize the football tessellation. Nor did they use inequalities for angles. I thought this question would get them engaged but maybe it was just too long.

B3: Geometry Q5-Q8

The quality of the answers was a little disappointing this year, primarily because several candidates showed a very poor understanding of basic linear algebra. This had the odd effect that the first question, which was the most elementary, attracted the lowest quality answers.

Question 5 (projective geometry, cross-ratio). This attracted 9 attempts, almost all of very poor quality. There were no alphas. No candidate did the last part. Several got into a real mess trying to show cross-ratio was invariant.

Question 6 (singularities, conics). There were 15 attempts but only 5 alphas. Candidates knew the definition of singularity and could find the singular points of the cubic, but finding the singularities of the conic proved unexpectedly difficult. There were some bizarre answers; e.g. one candidate, when confronted with a 3-by-3 homogeneous linear system, spent two pages eliminating variables in order to show that if there was a non-zero solution then the determinant was zero...

Question 7 (Bezout). This was the most popular question, with 18 attempts. Candidates found the last part hard and no-one got this quite right, though some received partial credit. There was therefore only 1 alpha, but most candidates got high betas.

Question 8 (elliptic curves). This only got 10 attempts but there were 4 alphas. The bookwork was done fairly well though some got confused in the periodicity arguments. Quite a few candidates got the last part (curves with complex multiplication) right, which was good to see.

B4: Analysis I Q1-Q4

Q1 A very popular question, but one which exposed widespread lack of understanding of the limit operations involved in a completeness proof.

Q2 Very few candidates were able to offer an example in the last part of the question. (Almost any operator will do.)

Q3 A question that was less popular but which nonetheless enabled a reasonable number of candidates to score well. Several candidates found an attractive "otherwise" argument in the middle of the question.

Q4 An unexpected (?) result of the recent change of rubric seems to be that candidates, who are now obliged to answer at least one question on each half of a paper, choose to "drop" the last part of each half unit. We cannot criticize such undergraduates, who are playing by the rules.

B4: Analysis II Q5-Q8

General Comments. Thirty-eight candidates sat the paper, a substantial decrease from the previous year. The first and third questions both consisted of material studied at the beginning and at the end of the course and were largely on the 'pure' part of Functional Analysis. The second question, which was similar to a question set last year, was on material studied in the middle of the course but tending rather more to the 'applied' part of the subject. The final question was on pointwise convergence of Fourier Series, which was put into the course last year, having been relegated from the second year analysis course. By far the greater number of attempts were on the first and third questions, which seems to indicate that candidates are uneasy doing routine integration and that a large number of candidates simply ditched the Fourier Series part of the course on the grounds that it requires quite a lot of hard analysis. The standard of much of the work produced was high, though on average it was of poorer quality than in previous years.

Question 5. The first part of the question was well done. However, although the last part was a direct application of the first part, most candidates lost marks because of gaps in their knowledge of the Lebesgue integral. The most common errors were the belief that L2-convergence of sequences implied convergence almost everywhere and the cavalier and inappropriate application of the monotone convergence and dominated convergence theorems. Of the thirty-five attempts there were twelve alphas and fourteen betas.


Question 6. This question produced only one serious attempt which was almost perfect.

Question 7. This question produced twenty-seven attempts, of which twenty were serious. Of these thirteen produced alphas, making the question the most well done on the paper. A large part of the question was on bookwork, but candidates were on the whole able to give the impression that they understood the principles involved.

Question 8. This question produced only one attempt which gained an alpha.

B5: Applied Analysis Q1-Q4

By far the most popular question was Question 1, on perturbation methods, but it was very variably done, with many basic algebraic errors, despite being almost the simplest question one could pose. Questions 2 and 4 were attempted fairly frequently, Question 4 generally being done quite well, Question 2 less so. Question 3 attracted only 7 attempts.

B5: Applied Analysis Q5-Q8

A reasonable spread of questions were answered, and it appears that the majority of candidates answered more questions on the second half of B5. In detail:

Question 5, on Charpit's method, was a little more challenging this year, as requested by the examiners, and this dramatically reduced the number of answers.

Question 6, on hyperbolic systems, was answered well, although sketching regions challenged most candidates.

Question 7, on parabolic equations, was answered well.

Question 8, on elliptic equations, had many more attempts this year, and those whoanswered mostly did so well.

Most of the bookwork was explained clearly. Basic technical ability (such as correctly solving quadratics, integrating by partial fractions, and sketching regions bounded by linear and quadratic equations) let many candidates down. However, on balance, candidates showed that they had mastered the basics of PDE theory.

B6: Theoretical Mechanics Q1-Q4

Overall quite well done. Of course they do the bookwork parts, which is often enough to get most of the marks. There were only two candidates who seemed to have no idea at all.

q1. This was a popular question. The first 9 marks were standard, the next 9 differentiated between the candidates, but no one managed to get more than 3 out of the last 7 marks, which involved some physical understanding of how a boundary layer works.

q2. A lot of attempts and mostly well done.

q3. The most popular question. The trickiest bit was the middle section, which tripped up some.


q4. Hardly any attempts. This question involved an exact solution of the diffusion equation by separation of variables, which seemed to be beyond all but one of those who did try it.

B6: Theoretical Mechanics Q5-Q8

Question 5: The first 2/3 of this question was entirely bookwork and was reproduced well by most candidates. The final derivation was easy but unfamiliar, and very few managed to complete it. Many mistakenly assumed the incompressible form of Bernoulli’s equation, clearly inapplicable here.

Question 6: The linearisation and manipulation required to obtain the governing equation here were straightforward in principle but awkward in practice, and there were many algebraic slips in the solutions. Very few candidates gave convincing derivations of the boundary conditions, with many not even stating the underlying kinematic and dynamic boundary conditions correctly. The final part of the question involved a standard separation of variables and was done quite well by most who attempted it.

Question 7: This entire question was based on lecture notes and problem sheets, and there were many good solutions. However, many candidates made a mess of the linearisation and were unable to derive the boundary conditions. Another common mistake was to assume the incompressible version of Bernoulli’s equation in calculating the pressure.

Question 8: Part (a) of this question was bookwork and mostly done reasonably well. However, very few candidates realised that, to show that the weak and strong formulations are equivalent, one must show that each implies the other. There were many algebraic slips in the attempted derivations of the quadratic equation in part (b). Several candidates produced longwinded and completely unnecessary rearrangements of the Rankine–Hugoniot conditions. Although it only required elementary analysis of a quadratic equation, no-one gave a convincing argument for the existence of exactly one root with V > U.

B7: Electromagnetism, and Special Relativity Q1-Q4

Question 1: This question is on the electromagnetism part of the syllabus and is a standard question on electromagnetic waves. This problem brought 41 attempts out of 60 papers. The average was 17 with many good attempts. Almost everyone got the bookwork right. No one got quite right the proof that, using gauge transformations, in the absence of sources one can always choose the scalar potential equal to zero and a divergence-free vector potential, so there were no full marks.

Question 2: This question is about the Uncertainty Principle. All but two students attempted this question, which proved to be popular, with an average of 18.3 marks and many excellent attempts. All but a few students who attempted this question got the bookwork correct, but there were some problems with the last parts. There were many errors in the computation of the commutators of H with X, P and XP.

Question 3: This question is about number operators and an application to the Hamiltonian of a charged mass moving in a uniform magnetic field. This was also a popular question, 54 out of 60 papers, with an average of 15.3 marks. Again there were very good attempts but only one student got full marks. In the last parts of the question, there were


several computational errors. Many students tried unsuccessfully to identify Q and P, and in some cases, even if the identification was correct, they failed to prove that Q and P had to be canonical operators and/or that A and A* had to satisfy the correct commutation relations, in order to use results from the Harmonic Oscillator.

Question 4: This question covers the angular momentum part of the syllabus. There were only 6 attempts and the average was 18 marks. Two students got full marks and another three had very good attempts.

B7: Electromagnetism, and Special Relativity Q5-Q8

The questions were intended to be straightforward, and the first two proved to be. Only one candidate tackled the third and none tackled the fourth. I suspect the candidates were put off by the appearance of tensors and variational principles, although the questions themselves were not difficult. The Mathematical Physics Panel should perhaps consider whether it is wise to include these topics (they are relatively new) and if it is, to take advantage of the new structure of the papers on mathematical physics next year to discourage the extreme selectivity of the candidates, who seemed to have concentrated their examination effort, but perhaps not their learning, on the straightforward parts of quantum theory and relativity.

Question 5: A straightforward question, which was generally well answered.

Question 6: Again straightforward, the only difficulty being in solving the final (very easy) recurrence relation and establishing convergence to c. Most simply assumed convergence and then deduced the limit.

Question 7: Only one partial attempt.

Question 8: No attempts.

B8: Mathematical Ecology & Biology Q1-Q4

General comments: Several candidates had clearly opted for learning over understanding.Very few can do phase planes.

Specific comments:

Question 1: Most candidates penalised themselves by trying to prove results on linear stability analytically when they had been taught in lectures that graphical methods made these proofs trivial. Many did not know the meaning of a bifurcation diagram. Few were able to do (ii)(c).

Question 2: Parts (a) and (b) were very well done and most candidates managed the first part of (c) but few could properly explain the meaning of timescales. Very few could draw the phase plane.

Question 3: Most candidates made reasonable attempts at (i) and very good attempts at (ii)(a). Very few knew what excitable meant and, again, they struggled with phase planes.

Question 4: Only very few explained chemotaxis properly and only one candidate got the correct boundary condition for n. No one answered (d) correctly.


B8: Nonlinear Systems Q5-Q8

Qn 1: The bookwork was well attempted. Algebraic errors marred some of the calculations of the equilibria and their stabilities for the nonlinear model. Many people forgot how to show the existence of a pitchfork and a Hopf bifurcation, taking ε as the bifurcation parameter. A few people realised that the zero eigenvalue corresponded to the determinant being zero, but no-one derived the equation explicitly. No-one found the saddle-node bifurcation.

Qn 2: The bookwork was very well answered. Many people didn’t spot the need to expand the exponentials before finding the eigenvalues etc. Those who did answered the question very well, with only a few errors.

Qn 3: Although the bulk of the question was covered in one of the lectures, a lot of people had problems finding the fixed points and determining their ranges of stability. The last half of the question was well answered.

Qn 4: There was confusion in how to calculate the fixed points from the graph. Some people thought they corresponded to zeros of the map. Only one person was able to find the trigonometric substitution for the ternary map. The proof that the map was chaotic was waffly at times.

B9: Number Theory Q1-Q4

Overall, there didn’t seem to be any problems with the exam, and there was a good span of marks.

Q1. Attempted by relatively few candidates, possibly because this topic was taught at the beginning of the course. Because of computational mistakes, no candidate got a full score (including the setter, who made a mistake himself in the model solutions). There were several essentially complete answers. Several people struggled on the bookwork but then went on to do good computations. The idea of applying the earlier part in the last bit was missed by several candidates.

Q2. First part foundational bookwork, got wrong or incomplete by quite a number of people. Middle section basically Part A material, but (or hence?) again some people struggled. Concluding part caused difficulties, and there were relatively few complete or almost complete answers.

Q3. Attempted by almost everybody; this type of question was repeated throughout the lecture course and revision. An easy question of its type, so I marked it strictly, with full scores given only for proofs which in particular gave a decent reason why the splitting field has degree 6 over Q (instead of just saying power law, 2·3 = 6, or guessing a basis without proof). Still, lots of scores were 24 and above.

Q4. Attempted by relatively few again, though in my view it was the easiest question on the sheet. Once people got going, though, there were several decent attempts. The trick with the last polynomial was missed by some.


B9: Number Theory Q5-Q8

Qu.5: Marks ranged from 1-25 with a concentration at the upper and lower ends. This was the most popular question, probably due to the fact that it was based upon material early on in the course. It was also the least routine question: a complete solution required a good understanding of discriminants and integral bases.

Qu.6: Marks ranged from 0-25, with the majority concentrated at the upper end. This was the third most popular question, and was very well done by most students.

Qu.7: Marks ranged from 12-25. This was the least popular question, although there were a significant number of complete solutions. Marks lost were usually through failing to find relations between the generators of the class group.

Qu.8: Marks ranged from 3-25 with a concentration at the upper end. This was the second most popular question, and was very well done by most students. It was of quite a routine format, and rewarded students who had got to grips with all of the key ideas in the course.

B10: Martingales and Financial Mathematics Q1-Q4

Many students felt the course was hard and decided not to take the exam. The remaining ones (22 + 1 CS in total) that stayed the course were probably among the stronger students. At least this is my impression after having marked the scripts; it is also in accordance with reports from the class tutors.

Overall the attempts appear to be evenly distributed over the 4 questions. There is a tendency for candidates to attempt either questions 1 and 2 or questions 3 and 4; this reflects that the former are about measure theory and the latter about martingales.

The exam paper seems to have been without typos or ambiguous statements. However, the model solution for the first part of question 2 (a) (ii) solves a slightly different question (open intervals instead of closed), which makes the question slightly easier, but question 2 (a) (iii) potentially harder. I have stayed with the marking scheme for 2 (a) (ii), and in 2 (a) (iii) I have given full marks if the candidates have mentioned that the open intervals also generate the Borel sigma-algebra.

B10: Martingales and Financial Mathematics Q5-Q8

In the light of last year's exam the examiners requested that the questions be made easier at the beginning and harder at the end. This proved to be quite difficult to judge and in the end the exam was at around the same level of difficulty as in previous years. Question 2 had a very high level of performance for those that tried it. The popular questions were 1 and 3.

As usual there were a set of papers which were truly terrible, with candidates unable to do standard bookwork. On the other hand there were many excellent answers and all questions had at least one perfect answer.

Q1: (74 attempts, 22α, 35β). This was a standard question asking for a derivation of the Black-Scholes equation


in two different ways.
(b) The question asked for a replication argument - only a few candidates gave the delta hedging argument.
(d) Many candidates did not read the question carefully and derived the equation for pricing options on futures and not the usual Black-Scholes equation by hedging with futures.
(e) Most knew where the problem was but quite a few failed to explain it properly.

Q2: (35 attempts, 18α, 6β). This was felt by the checker to be the hard question and some hints were put in, which led to it probably being the easiest question.
(a) Many were not clear about the SDE needed for pricing.
(c) Most could integrate by parts (though not all).
(d), (e) were well done by those that got there.
Those that started this question well were often able to get the full alpha as there were enough hints to keep them on track.

Q3: (55 attempts, 11α, 26β).
(a) The first part was standard bookwork.
(b) Many were not sure how to show the equivalence in prices and were unable to do partial differentiation.
(c) The formulation as a free boundary value problem was not well done, with many forgetting to put in all the conditions. Often the conditions for a put option were given.
(d) This was poorly done; very few thought to use the pricing formula for European options on the dividend paying asset to prove this.
(e) This was reasonably well attempted.

Q4: (20 attempts, 8α, 4β). This was the least popular question but those that attempted it generally did well. Parts (a)(i) and (b) were exercises or done in lectures and were well done. The final part (c) proved a little tricky with many not manipulating the max function correctly.

B11: Communication Theory

Overall this appeared to be a successful paper. The candidates seemed to find it a little harder than last year's paper, as desired by the Examiners. Broadly speaking, the most familiar problems were very well done; the less familiar ones were found to be much more challenging. In detail:

Q1: 18 attempts, mostly betas. The final part required a leap of the imagination, which was beyond almost everyone.

Q2: 11 attempts, mostly alphas. For anyone who understood the link between trees and prefix-free codes, this was a nice profitable question.

Q3: 26 attempts. Very similar to an exercise from the problem sheets, and therefore popular and quite well done as a result.


Q4: 5 attempts. This was a straightforward question, requiring only routine explicit calculation of mutual information and then simple calculus. Nevertheless, that proved to be too much for most attempts.

O1: History of Mathematics

Paper O1 (History of Mathematics) was taken by seven candidates. Scripts for both the miniproject (submitted in Week 9 of Hilary Term) and the 2-hour written examination (Week 8 of Trinity Term) were blind double marked by the two assessors (JAS + PMN), and there was little difficulty in reconciling the marks. Miniproject marks were assigned for mathematics, history and presentation in the proportion 2:2:1, after which each assessor individually assigned a percentage, and a final reconciled percentage was agreed for this half of the assessment. On the examination paper the two short comment questions and the essay were assigned marks in the ratio 1:1:2, and again each assessor individually assigned a percentage, after which a reconciled mark out of 100 was agreed for this half of the assessment. Finally, since there was a reasonable correlation between the two halves of the examination, and the marks had been assigned as if they were USMs, the overall USM for the paper was obtained by averaging the marks on each half of the paper and rounding up. We feel that the final USMs represent a fair assessment of the progress made by the students during the year.

We feel it would not be realistic to break our report down to the level of individual questions because the numbers were so small for each one, but we make some general comments as follows. Although one candidate just reached a first-class mark there were no outstanding answers this year, and unfortunately one or two very weak scripts. The most common failing was an unwillingness to engage in detail with mathematics that is not presented in modern form. This kept at least two candidates below the II.1/I boundary, even though their historical understanding was good. At the lower end candidates concentrated on one or two relatively safe topics but failed to demonstrate any breadth of knowledge or depth of understanding.

BE/OE: Extended Essay

As only three extended essays were submitted, the Examiners cannot make any useful comment on the standard. Comments on the process of agreeing marks can be found in Section A of Part II of this report.

N1: Undergraduate Ambassador’s Scheme

There is a quota of 12 students for the course, with 11 students taking the course this year. The 11 were selected from 13 applicants by interview at the start of the Michaelmas term. During Michaelmas there was a Training Day run by OUDES. In Hilary the students attended a local school for around half a day per week, assisting teachers.

The students were assessed on the basis of:


• A Journal of Activities (20%)

• An End of Course Report including details of the Project (35%)

• An Academic Presentation (30%)

• A Teacher’s Report (15%)

The Journal and Report were double marked, and digital video copies of the presentations made. A USM was awarded for each part, with the above weightings used to produce a single USM.
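As an illustration of the weighted combination described above (the weights are those stated in the list; the component marks and function name are hypothetical, and the rounding convention is an assumption):

```python
# Hypothetical sketch of combining per-component USMs with the stated weightings.
WEIGHTS = {"journal": 0.20, "report": 0.35, "presentation": 0.30, "teacher": 0.15}

def combined_usm(component_usms):
    """Combine per-component USMs into a single USM using the stated weightings."""
    assert set(component_usms) == set(WEIGHTS), "one USM per assessed component"
    return round(sum(WEIGHTS[k] * component_usms[k] for k in WEIGHTS))

# A candidate scoring 70 on every component receives a combined USM of 70.
print(combined_usm({"journal": 70, "report": 70, "presentation": 70, "teacher": 70}))
```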

Weaker marks on the report and journal were usually the result of an anecdotal/chronological style which did not adhere closely to the UAS Learning Criteria, or focus enough on summarizing the term’s work. The presentations on the whole were focused, well prepared, and were pitched at an appropriate level for the year 12s; there was one clear exception to this, where technical undergraduate material was given in an almost lecture-like fashion.

Generally most candidates showed a real awareness of the challenges faced by school maths teachers, made perceptive comments on what they had seen and observed, and produced some engaging projects for the pupils.

OBS1a: Applied Statistics

Overall comments. The lasting impression from marking this paper was the coupling of parrot-like regurgitation of half understood ideas with very bad English. In many cases it was clear that underlying statistical principles introduced in the first and second years had not been properly assimilated and had therefore been largely forgotten. Many showed very poor understanding and little technique with simple linear algebra. On the other hand four or five candidates showed themselves to be extremely competent in all areas.

As may be seen from the remarks on individual questions, the paper was such that candidates were well able to demonstrate what they knew and could do.

Question 1 This was a popular question and was, for the most part, well answered, with only two candidates failing to score half marks: the mean mark was 17.28.

Question 2 The least popular question, with only 13 attempts, only 1 of which scored more than half marks. The answers demonstrated an alarming lack of ability to do very simple linear algebra, and mark a change from a few years ago. It may well say something about second-year exams and the resulting compartmentalisation of basic mathematics.

Question 3 The most popular question, which was attempted by all but 1 candidate. Only 4 failed to score half marks, the mean being 16.04. Most of the marks lost were due to inability to accurately carry out an ANOVA for a subset of regression variables.

Question 4 A popular question with only 3 candidates scoring less than half marks but no-one scoring full marks: the mean was 14.12. The main stumbling block lay in defining deviance residuals, even among those who could obtain the deviance given in the question.


OBS1a: Applied Statistics

Candidates focussed on Questions 1-4. There was one attempt on Question 5 and none on Question 6.

OBS2: Statistical Inference

The questions produced a fair balance of answers. Question 1 was the most popular, with several excellent solutions. Question 2 created some difficulties for the weaker students. The hint contained an improbable value for the mean of a Beta distribution, i.e. a mean potentially outside the range of the random variable. This was dismissed or ignored by the stronger students. One student claimed that “method of moments” was not on the syllabus of any Oxford Statistics course – a debatable point. There were several attempts at Question 3, with one excellent solution. Question 4 was the least popular, with only 4 attempts. The ambiguous manner in which the prior distribution was specified caused confusion.

OBS3a: Stochastic Modelling

Question 1 was a very popular question, well-answered by most, but with a surprisingly thick tail of weak and very weak answers. There was a reasonable spread of marks clearly separating two groups of less than 11 and more than 16 marks, with a high number of excellent answers. Among the good scripts, marks were typically lost in (c)(i) and/or (c)(ii). Also, many got the indices wrong in (ii) and thereby failed to assign parameters to heights. The first two parts were generally no problem at all.

Question 2 was a very popular question, well-answered by many, but again also with many answers on the weak side. The spread of marks was reasonable, giving a similar separation of less than 13 marks and more than 19 marks, with some from the upper group of Question 1 joining the lower group for Question 2. Many students stated correct matrices but gave no explanations in (a) and (b), and/or had not learned the bookwork around the Ergodic Theorem that was crucial for (d) and (e). Part (c) was generally well answered.

Question 3 attracted only four attempts. This was not unexpected, since Renewal Theory has not been a popular topic since it was added to the syllabus. Apart from one genuinely weak answer and one attempt that was not serious, the other students scored very well on this question, which was mostly standard material from assignment sheets. The definition of convolution powers that was added after the checking process contains a mistake, which was not corrected for the exam version.

Question 4 was meant to be a straightforward question on strong laws, but was not at all well-received by students. There was no serious attempt. Three students started on parts (a) or (b) and scored a few marks; nobody went on to do (c) or (d). Part (b) contains a mistake, an inequality being the wrong way round.

Question 5 was a popular question with a good spread of marks peaking at 15. The few very good and one excellent answer are outweighed by a higher number of weak answers. Many students lost some marks on the bookwork. Many students did not check second derivatives to ensure that their likelihood is maximised by the zero of the first derivative.


Some students confused discrete and continuous methods in (ii) and (iii) and/or tried to explain related differences in (iv)(c), although this was a question exclusively on the continuous method. The second table of this question contains a mistake, the last row being expected frequencies (clearly) and not expected proportions, but the scripts did not reveal any problems resulting from this.

Question 6 attracted a reasonable number of attempts, but there were only two very good and two other satisfactory answers, again outweighed by a tail of weak answers. Among the better answers, parts (i)-(iii) were mostly well done, with the odd mark lost to neglect. Surprisingly many were unable to calculate expectations in (iv).

OBS3b: Stochastic Lifetime Modelling

OBS4: Actuarial Science

Questions 1 and 2 were much less popular than Questions 3, 4, 5, 6.

Question 1 (stochastic interest rates, lognormal distribution). Not a very popular question (more popular among Maths & Stats candidates). Some decent answers, although very few candidates (I think 3) got a completely correct answer to the last part. Almost everyone else used that Var(D1 + D2) = Var(D1) + Var(D2) when D1 and D2 are normally distributed, without checking independence.
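For reference (standard probability, not taken from the paper), the identity the candidates invoked holds only when the covariance term vanishes:

```latex
% In general, for random variables D_1, D_2:
\operatorname{Var}(D_1 + D_2)
  = \operatorname{Var}(D_1) + \operatorname{Var}(D_2) + 2\,\operatorname{Cov}(D_1, D_2),
% so Var(D_1 + D_2) = Var(D_1) + Var(D_2) requires Cov(D_1, D_2) = 0,
% for instance via independence; normality of each variable alone does not give this.
```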

Question 2 (yields: existence and uniqueness). It seems that part (iii) of this question was too hard, and that candidates weren’t prepared to deal with such an abstract question on this paper. I altered the mark scheme to put less weight on part (iii), and marked generously. I hoped that the bookwork proof in (ii) would give a good hint for (iii)(b), but in retrospect an explicit hint might have helped a lot.

Question 3 (bonds, taxation). Generally decent answers to a fairly standard question (although part (iv) probably won’t have been seen in this form). In part (iv) many people confused the nominal amounts of each bond to buy with the amount of money to invest in each one.

Question 4 (bonds, yield curves). A fairly standard question with mostly decent answers.

Question 5 (life insurance). Mostly well answered except the last part, which caused problems. A surprising number of people attempted a full calculation in part (vi), despite being told explicitly not to.

Question 6 (forward prices). Lots of good answers. Parts (v) and (vi) caused the most problems, and quite a few candidates obviously didn’t work out what they were being asked to do.

OB21a: Numerical Solution of Differential Equations I

Three candidates took the paper. One of the three was poorly prepared. Each candidate attempted two of the three questions.


Q1 The question concerned the analysis of one-step methods for the numerical solution of the initial-value problem y′ = f(x, y), y(x0) = y0. All three candidates attempted the question. One produced a low α answer, the second a high β answer; the third candidate’s answer was very poor.

Q2 The question concerned the theory of linear multi-step methods. Two of the three candidates attempted the question; one produced a high α answer; the other candidate gave a high β answer.

Q3 The question concerned the convergence analysis of the Lax-Friedrichs scheme. One candidate attempted the question and produced a half-hearted, low β, solution.

OB21b: Numerical Solution of Differential Equations II

Only 6 undergraduate scripts for this course: 3 attempts at Q1 with a highest score of 21/25, 4 attempts at Q2 with a highest score of 20/25, and 5 attempts at Q3 with a highest score of 24/25.

The lowest score overall was just above 50% and the highest a very creditable 90%. In this sense the paper seems to have been a “doable” test of candidates’ understanding of the material of this course, and one which did discriminate between well-prepared candidates with reasonable understanding and those able to apply their understanding to unfamiliar situations.

OB22: Integer Programming

All three questions set were attempted by multiple students and had a good distribution of marks ranging from 7 to 25 points, with typical values between 15 and 17 points. The questions thus seem to have been set at the right level.

Question 1 was the most attempted. The knowledge content tested here was the idea of dynamic programming applied to an easy-to-understand model the students had not seen before (but similar to a case discussed in class). Some students struggled to account for the novel situation of having a nontrivial bound on the number of copies of each object used in this knapsack problem. Others understood the novelty of the situation and managed to correctly compute the required table but not to formulate the recursion formula correctly.
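The kind of recursion the question tested can be sketched generically as follows. This is a standard bounded-knapsack formulation, not the exam's model solution; the names (`values`, `weights`, `counts`) and the table layout are assumptions:

```python
# Generic sketch of a bounded knapsack dynamic programme: object i has value
# values[i], weight weights[i], and may be used at most counts[i] times.

def bounded_knapsack(values, weights, counts, capacity):
    """best[j] = maximum value achievable with total weight at most j."""
    best = [0] * (capacity + 1)
    for v, w, c in zip(values, weights, counts):
        # One 0/1 pass per permitted copy; iterating the capacity downwards
        # ensures each copy of the object is used at most once per pass.
        for _ in range(c):
            for j in range(capacity, w - 1, -1):
                best[j] = max(best[j], best[j - w] + v)
    return best[capacity]
```

The downward capacity loop is what enforces the copy bound; a single upward pass would instead allow unlimited copies of each object.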

Question 2 was the second most attempted. It was generally well solved, but no-one managed to correctly set up the dual of the LP relaxation of the problem and prove that the greedy solution is LP optimal.

Question 3 was the least attempted question, but those students who did attempt it achieved similar results to the other two. The students seemed to have the right ideas about the labelling algorithm asked for in the first part, but none of them managed to write up the algorithm correctly.

Three minor questions arose during the exam regarding the formulation of the questions.The invigilator managed to answer them satisfactorily, and the outcome of the exam was


not affected by this. However, appropriate changes have been made in the file to be archived.

All in all, a very successful exam.

C3.1a: Lie Groups

This is a completely new course. Nothing similar had been offered for examination for at least 15 years. Practice questions had been set in addition to the seven problem sheets.

There were 8 and 7 candidates for papers C3.1a and C3.1 respectively, of which 2 and 1 respectively were third year students. There is a wide spread of marks, with 5 candidates achieving first class level, 4 level II.1, 2 level II.2, 1 pass level and 1 achieving only 6 raw marks in total.

Question 1: All candidates attempted this question and it was generally well done. When marking I realised that the question was misleading. When asking for the definition of a ‘complex linear Lie algebra’, ‘complex’ was supposed to refer to the fact that the Lie algebra is a set of complex matrices and not that the underlying vector space is complex linear. Many candidates took it to mean both. I did not mark scripts down for any resulting confusion in the next part of the question, when some candidates claimed that the tangent space of a Lie group was complex linear. (15 attempts: 8 alphas, 4 betas)

Question 2: Candidates seemed unprepared for this question. Even though part (a) was all bookwork, many candidates got only a few marks here. Part (b) had only a couple of serious attempts, where the candidates noticed that they should use the Lie correspondence which they had just stated. (10 attempts: 1 alpha)

Question 3: This was a reasonably straightforward though not easy question on characters. Those who had revised material from the second part of the course did reasonably well and found it easier than question 2. (8 attempts: 3 alphas)

Question 4: The serious attempts came from two first class candidates for C3.1a who answered more than two questions. Part (a) was standard book work from the second part of the course, while part (b) asked for a simple application of the Weyl character formula. (3 attempts: 1 alpha, 1 beta)

C3.1b: Differentiable Manifolds

Q5 Competently done on the whole. (7 attempts)

Q6 Just writing an account – difficult to get full marks. Some candidates wrote out all they knew about the subject instead of being selective. (4 attempts)

Q7 They all had the right idea, but the second paragraph was too complicated for them. One candidate noticed a point which in hindsight should have been made in the question: “assume that V is connected”. (4 attempts)


Q8 Not popular though one candidate got full marks. (2 attempts)


E COMMENTS ON THE PERFORMANCE OF IDENTIFIABLE INDIVIDUALS AND OTHER MATERIAL WHICH WOULD USUALLY BE TREATED AS RESERVED BUSINESS

Prizes in Mathematics Part B

One Gibbs Prize (£400), two Junior Mathematical Prizes (£200) and one IMA Prize were awarded to four different three-year candidates. The unusually strong field of three-year candidates this year meant that all the prize-winners were highly ranked, in contrast to some previous years.

In future, all candidates will be eligible for the prizes at this stage. If that arrangement had been in place this year, the selection of the top prize-winner would have been very arbitrary, as the top two candidates had almost identical marks. We expect that such a difficulty will arise quite frequently.

Matters of adjudication

The Examiners considered a number of medical and other certificates.

1. There were a number of candidates who were allowed extra time, and the Examiners took no further action in their cases.

2. There were several medical certificates for people who, despite having been ill during the examination, had completed the papers. Two candidates had been particularly ill during one paper each, and the provisional marks on those papers were completely out of line with the candidates’ marks on the other papers. The Examiners adjusted the USMs awarded for those papers to bring them more in line with the candidates’ USMs on the other papers. The Examiners also made some very small adjustments to the USMs of some other candidates.

3. One four year candidate failed to attend one half-unit paper. The Junior Proctor instructed the Examiners to disregard that paper when considering whether to award an Honours Pass (which we did award), but we were not informed how the missing paper should be treated for classification after Part C.

4. One candidate was given permission by the University authorities to take only two Part B papers in 2006, and to defer the other two papers to 2007. Marks were assigned to the two papers and reported to the college, but the candidate was not included in the class/pass list.

Individual marks and scripts were considered for all candidates who were close to a classification border, including those candidates who were believed to be four-year candidates.

Conduct of the Examination

We have made some comments about the overall conduct of the examination in Section A, and we have no further points to add here.


F EXAMINERS AND ASSESSORS

Examiners: C.J.K. Batty (Chairman), X. De la Ossa, A.C. Fowler, M.A.H. MacCallum (External), J. Rawnsley (External), G. Reinert, M.R. Vaughan-Lee.

Assessors (for papers under the aegis of the Mathematical Institute, and for Extended Essays): S.J. Chapman, A. Dancer, C.M. Edwards, K. Erdmann, E.V. Flynn, B. Hambly, R. Haydon, N. Hitchin, P. Howell, J. Kristensen, A. Lauder, P. Maini, I. Moroz, P. Neumann, J. Norbury, H. Ockendon, H. Priestley, J. Stedall, B. Szendroi, U. Tillmann, M. Tindall, A. Wilkie, N. Woodhouse.
