
Are internal marks affected by the school you go to?

Greninja340

Active Member
Joined
Apr 28, 2020
Messages
288
Gender
Male
HSC
2021
Are internal marks affected by the school you go to? If so how does it work?
 

idkkdi

Well-Known Member
Joined
Aug 2, 2019
Messages
2,588
Gender
Male
HSC
2021
Are internal marks affected by the school you go to? If so how does it work?
1st and last get the exact external mark of rank 1/last for their internals, while everyone else's internal mark is allocated based on their internal rank, with the marks positioned in accordance with the cohort's average on the externals.
 
Last edited:

Potato Sticks

Member
Joined
Oct 25, 2019
Messages
37
Gender
Undisclosed
HSC
2013
If you go to a shitty school, it is. I've been down that path.
You need to be careful and really clamp down on your performance in your externals, as you have no benchmark to compare your ability against. Preferably, choose subjects such as the maths and sciences if you are good at them, as they are less dependent on marker preferences/bias, so your performance is unlikely to be unexpectedly bad in the final exam.

Having said that, if you put a top selective school student in a comprehensive school, yes, most of the time they will quite easily come 1st. However, there are many instances of comprehensive school toppers being far ahead of the rest of the cohort (a 1st-on-90, 2nd-on-50 style gap), and I would not be surprised if comprehensive school students have scored 99.95 in the past (though I have not researched this). So be careful.
 

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,401
Gender
Male
HSC
2006
Are internal marks affected by the school you go to? If so how does it work?
The school you go to is NOT an input into your final internal marks.

These are influenced by how you rank within the school cohort and the gaps between you and adjacently ranked students. This is a process known as moderation.

The moderation process is intended to ensure that - all other factors being equal (such as school culture and resource accessibility) - your internal marks will be similar no matter what school you went to.
 

idkkdi

Well-Known Member
Joined
Aug 2, 2019
Messages
2,588
Gender
Male
HSC
2021
The school you go to is NOT an input into your final internal marks.

These are influenced by your performance in your external exam and how you rank within the school cohort. This is a process known as moderation.

The moderation process is intended to ensure that - all other factors being equal (such as school culture and resource accessibility) - your internal marks will be similar no matter what school you went to.
Cohort does matter, because your internal mark is given in accordance with your school cohort's average. It's not as simple as holding the 3rd-highest internal rank and therefore getting the 3rd-highest external mark. If the 3rd-highest external mark was 94 and your cohort's average was 60, that 94 would be brought down to ~90, maybe lower. If the cohort's average was higher, i.e. 90, the 94 only stays at about 94, or maybe 93 in small cohorts. Conversely, if your cohort's average is higher, your score can actually be brought up by a certain amount.

I know this for a fact, because in my accelerated subject I was smack bang at the median. The average was brought down by a flop, and I got a score lower than what my rank got for externals. Ranks 2, 3, 4 and 5 also got 1 lower than what their ranks got for externals.

Contrarily, rank 1 and last got their rank's external mark, which corresponds to what the SMH reports.
 
Last edited:

hschelper01

Active Member
Joined
Apr 6, 2020
Messages
168
Gender
Male
HSC
2019
Yes, your school's performance in the HSC exam of a subject does affect your final HSC assessment marks and, hence, your HSC mark as well. As the ATAR is a rank determined by the sum of scaled marks (HSC marks are converted into scaled marks), it is important that students obtain the highest possible HSC mark for each course they undertake.

For example, if your Year 12 English Advanced cohort performs poorly in their HSC exam and the sum of their HSC English Advanced exam marks is less than the sum of the HSC English Advanced final school assessment marks submitted to NESA, then most students' final school assessment marks are likely to be adjusted down.

Please note that moderation is not the same as ‘scaling’ of subjects.
 

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,401
Gender
Male
HSC
2006
Cohort does matter, because your internal mark is given in accordance with your school cohort's average. It's not as simple as holding the 3rd-highest internal rank and therefore getting the 3rd-highest external mark. If the 3rd-highest external mark was 94 and your cohort's average was 60, that 94 would be brought down to ~90, maybe lower. If the cohort's average was higher, i.e. 90, the 94 only stays at about 94, or maybe 93 in small cohorts. Conversely, if your cohort's average is higher, your score can actually be brought up by a certain amount.

I know this for a fact, because in my accelerated subject I was smack bang at the median. The average was brought down by a flop, and I got a score lower than what my rank got for externals. Ranks 2, 3, 4 and 5 also got 1 lower than what their ranks got for externals.

Contrarily, rank 1 and last got their rank's external mark, which corresponds to what the SMH reports.
Yes, your school's performance in the HSC exam of a subject does affect your final HSC assessment marks and, hence, your HSC mark as well. As the ATAR is a rank determined by the sum of scaled marks (HSC marks are converted into scaled marks), it is important that students obtain the highest possible HSC mark for each course they undertake.

For example, if your Year 12 English Advanced cohort performs poorly in their HSC exam and the sum of their HSC English Advanced exam marks is less than the sum of the HSC English Advanced final school assessment marks submitted to NESA, then most students' final school assessment marks are likely to be adjusted down.

Please note that moderation is not the same as ‘scaling’ of subjects.
There are examples which create the perception of being unfairly dragged down by a poorly performing cohort, but look closely and it becomes clear that this perception is actually a misattribution of causation.

One key principle of the moderating process is that the relative gaps between the internal marks of students are retained (approximately). For example, say you are coming 2nd with an internal mark of 78 and the person ranked 3rd has an internal mark of 75. There isn't a big difference between 2nd and 3rd for the internal assessment marks. Hence, it must be fair that the moderated marks for 2nd and 3rd should also have a small gap.

Another principle to understand is that internal assessments are separate from external exams. Internal assessments exist because each subject needs to assess a range of skills; exams can't assess things like practical lab skills or research skills. If you are, say, good at lab/research work but don't do well in exam conditions, then it is fair that you get a good internal assessment mark but not a good exam mark. All of these are combined with the relevant weightings. The moderation process calls upon the highest, lowest and average external marks as a sort of 'indicator' to estimate where the cohort should collectively sit on a common comparative statewide scale. Once this is established, all the moderated marks between the lowest and highest are decided purely by the distribution of the internal marks, not the external marks, because internal marks are treated as separate from external marks.
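As a rough sketch of that idea: fit a curve through three anchors, so that the lowest internal mark lands on the lowest external mark, the highest internal mark lands on the highest external mark, and the cohort means match; every other moderated mark is then read off that curve. The snippet below is a toy model of my own for illustration (a simple quadratic fit, not NESA's exact production process, which includes further adjustments), though it does reproduce the simplified tables that follow.

```python
import numpy as np

def moderate(internal, external):
    """Toy moderation model (illustrative only, not NESA's exact process).

    Fits f(x) = a*x^2 + b*x + c so that the lowest internal mark maps to the
    lowest external mark, the highest internal mark maps to the highest
    external mark, and the mean moderated mark equals the mean external mark.
    Everything between the anchors is then driven purely by the distribution
    of the internal marks, not by who got which external mark.
    """
    internal = np.asarray(internal, dtype=float)
    external = np.asarray(external, dtype=float)
    lo, hi = internal.min(), internal.max()
    A = np.array([
        [lo**2,                lo,              1.0],  # f(lowest internal)  = lowest external
        [hi**2,                hi,              1.0],  # f(highest internal) = highest external
        [(internal**2).mean(), internal.mean(), 1.0],  # mean of f(internal) = mean external
    ])
    rhs = np.array([external.min(), external.max(), external.mean()])
    a, b, c = np.linalg.solve(A, rhs)  # assumes the internal marks are not all identical
    return a * internal**2 + b * internal + c

# Scenario 1 below (students A to F):
external = [92, 80, 72, 60, 55, 50]
internal = [90, 78, 75, 58, 55, 40]
print(np.round(moderate(internal, external)))  # [92. 77. 74. 59. 57. 50.]
```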

I am not sure if people fully understand what influence this has, so hopefully the example below makes it clear (though it might be a bit complicated, FYI). For illustrative purposes, assume the marks shown in the example below are an accurate reflection of performance - controlling for external factors such as different school cultures, individuals bombing out or getting lucky in exams, etc.

Scenario 1
Suppose we have the following marks in a cohort of six students (note this is a slightly simplified version of the example described on NESA's website).

The moderated marks are shown in the table below:
Student | External exam mark | Raw internal mark | Moderated internal mark
A       | 92                 | 90                | 92
B       | 80                 | 78                | 77
C       | 72                 | 75                | 74
D       | 60                 | 58                | 59
E       | 55                 | 55                | 57
F       | 50                 | 40                | 50

Now focus on student B (2nd) and student C (3rd). The gap in the raw internal marks for student B (78) and student C (75) is small. This gap is similar when adjusted to the moderated marks, where student B gets 77 and student C gets 74.

Scenario 2
Now suppose the internal marks are all the same, but the cohort is much weaker from 3rd rank onwards according to their external marks (I lowered each mark by 10 for students C to F). Students A and B get the same external marks as in scenario 1.

The moderated marks are then recalculated as follows:
Student | External exam mark | Raw internal mark | Moderated internal mark
A       | 92                 | 90                | 92
B       | 80                 | 78                | 72
C       | 62                 | 75                | 68
D       | 50                 | 58                | 50
E       | 45                 | 55                | 47
F       | 40                 | 40                | 40

Now focus on student B (2nd) who gets 72 in scenario 2 compared to 77 in scenario 1 despite scoring the same external exam mark. This is the part where people naively jump to the conclusion that therefore the weaker cohort has unfairly dragged student B down!

However, if you look carefully, that conclusion conveniently ignores the key principles that relative gaps between students' internal marks are retained and that internal assessments are separate from external exams. The distribution of the raw internal (not external) marks must dictate how the internal marks are moderated. Remember that, since student B and student C had close raw internal marks of 78 and 75, it is only fair that they get close moderated internal marks, despite the huge difference in their external exam marks (which are a separate assessment altogether). The rationale is that if student B were genuinely much stronger than student C internally, then he/she should not have been that close to student C in the internal assessments.

Some people might argue that student B is clearly much stronger than student C according to the external exam marks. However, student B is NOT much stronger than student C according to the raw internal mark, which is what counts here.

To demonstrate this, note that if I simply rearrange the external exam marks in a different order, it has no impact on the moderated internal mark (because the lowest, highest and average are still the same). Can we really say student B is stronger than student C internally based on the external exam mark? This illustrates my point earlier that the external marks are only used as an indicator of where the internal marks should sit on a common scale, but the rest is determined by the internal marks themselves.

Student | External exam mark | Raw internal mark | Moderated internal mark
A       | 40                 | 90                | 92
B       | 45                 | 78                | 72
C       | 50                 | 75                | 68
D       | 62                 | 58                | 50
E       | 80                 | 55                | 47
F       | 92                 | 40                | 40
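Using the same toy `moderate` sketch (again, purely illustrative): because only the minimum, maximum and mean of the external column enter the fit, any shuffling of the external marks leaves the moderated marks untouched.

```python
# Scenario 2 externals rearranged as in the table above: same min, max and mean
shuffled = [40, 45, 50, 62, 80, 92]
print(np.round(moderate(internal, shuffled)))  # [92. 72. 68. 50. 47. 40.] - identical to scenario 2
```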

This popular idea of school cohort performance dragging your marks up/down is actually not a property of the moderation algorithm as demonstrated above, but more attributable to external factors such as school culture and accessibility to resources.

FYI I corrected my sentence in my earlier post about dependence on the individual external mark.
 

Potato Sticks

Member
Joined
Oct 25, 2019
Messages
37
Gender
Undisclosed
HSC
2013
There are examples which create the perception of being unfairly dragged down by a poorly performing cohort, but look closely and it becomes clear that this perception is actually a misattribution of causation.
In my opinion the internal assessments should seek to simulate the external assessments as closely as possible, so as to produce the expected distribution of scores seen in the external exam. This is the best way to ensure that the cohort as a whole does better, and hence that the scores are moderated up to a greater extent. In my opinion, research tasks are not suitable as internal assessment tasks, as they do not reflect the external exam, in addition to being too easy to cheat on.

Also, your post makes the assumption that students' external exam performances genuinely reflect their exam ability. Usually, the argument is that 1st place is not susceptible to other students performing poorly (i.e. worse relative to their internals than expected), while lower places are, and hence can get "dragged down" by other students just having a bad day.
 

idkkdi

Well-Known Member
Joined
Aug 2, 2019
Messages
2,588
Gender
Male
HSC
2021
In my opinion the internal assessments should seek to simulate the external assessments as closely as possible, so as to produce the expected distribution of scores seen in the external exam. This is the best way to ensure that the cohort as a whole does better, and hence that the scores are moderated up to a greater extent. In my opinion, research tasks are not suitable as internal assessment tasks, as they do not reflect the external exam, in addition to being too easy to cheat on.

Also, your post makes the assumption that students' external exam performances genuinely reflect their exam ability. Usually, the argument is that 1st place is not susceptible to other students performing poorly (i.e. worse relative to their internals than expected), while lower places are, and hence can get "dragged down" by other students just having a bad day.
Even better, get rid of internal assessments and standardise four assessments throughout the year across the state. This way, all internals are of the same difficulty and all students sit similar exams. This also removes any need for internal moderation.

The benefit of internal assessments in developing one's ability to produce assignments, speeches, presentations etc. can be argued to have already been gained in Years 7-11. In addition, the innately dodgy/biased criteria for such tasks contradict the aim of the HSC, which is to provide as fair a score as possible. Dodgy marking from teachers statewide that is not in accordance with what NESA wants on exams could also be removed by such a system.

I can imagine that NESA does not want to do this, due to the extra manpower and $$$ it would take. Heck, look at Melbourne's selective schools, funded five times as well as ours. We have a fixed-up farm for James Ruse; they have a mini-Hogwarts in Melbourne High School. Perth Modern School is another example of how crap our funding for selective schools is.

LMAO, I just realised how much money the education department is generating for the NSW government from selective schools. This is daylight robbery. Take 150 students, averaging out at an $80,000 pre-tax salary (a conservative estimate), and assume $25,000 tax on that $80,000 (conservative?). 150 × $25,000 = $3.75 million, which already exceeds the annual funding for many selective schools. Now consider all the cohorts before us that graduated and are paying tax. Say that at any one time we have 20 years of alumni in the workforce: that's $75 million a year for the government in tax money.
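Spelling the arithmetic out (purely back-of-envelope, with the same assumed figures as above):

```python
students_per_year = 150       # one selective school cohort (assumed above)
tax_per_graduate  = 25_000    # assumed annual tax paid on an $80,000 salary
print(students_per_year * tax_per_graduate)       # 3750000 - exceeds many schools' annual funding
print(20 * students_per_year * tax_per_graduate)  # 75000000 per year across 20 working cohorts
```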

EDIT: I can spot a few fallacies. Nonetheless, I still stand by my argument that funding is not being properly allocated. As of now, school funding is higher for private schools, due to funding being calculated based on students' parents. However, this is flawed, because we are not considering the very likely possibility of high-earning selective school graduates sending their kids to private schools.
 
Last edited:

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,401
Gender
Male
HSC
2006
In my opinion the internal assessments should seek to simulate the external assessments as closely as possible, so as to produce the expected distribution of scores seen in the external exam. This is the best way to ensure that the cohort as a whole does better, and hence that the scores are moderated up to a greater extent. In my opinion, research tasks are not suitable as internal assessment tasks, as they do not reflect the external exam, in addition to being too easy to cheat on.
I don't quite agree with that, because external examinations are a very limited way to assess a subject. Internal assessments exist to assess skills or topic coverage that an external examination is unlikely to capture.

You cannot claim to be good at Chemistry if you are terrible at lab work or research, but only good at studying for exams. You cannot claim to be good at English if you are terrible at verbal communication in a speech, but only good at writing essays in exam conditions. You need to be good across the broad spectrum of skills demanded by the subject. There is a reason why these types of assessments are actually mandated in the syllabus. Not to mention, universities also do a variety of assessment types which do not simulate their final exams.

Also, your post makes the assumption that students' external exam performances genuinely reflect their exam ability. Usually, the argument is that 1st place is not susceptible to other students performing poorly (i.e. worse relative to their internals than expected), while lower places are, and hence can get "dragged down" by other students just having a bad day.
The whole argument of students having a bad day is redundant, because it goes both ways. Who says student B didn't just get lucky getting their external mark of 80 as opposed to being actually smarter? Who says the person who came first internally won't have a bad day in the external exam and fall many ranks below and therefore the top external mark is from someone lower? There are infinitely many possibilities. It is impossible for any system to ever correct for anyone who suddenly gets lucky or has a bad day in an individual internal assessment or external exam.
 

Accurate

clix
Joined
Feb 8, 2018
Messages
198
Location
~~~
Gender
Male
HSC
2021
I think the internal system is full of rubbish. Why should I be getting someone else's mark as 50% of my score if I didn't come first? Why can't they just standardize everything instead of schools having different assessments for everyone? Standardization of internals eliminates bias, and people that go to low-ranking schools aren't disadvantaged as much.
 

Potato Sticks

Member
Joined
Oct 25, 2019
Messages
37
Gender
Undisclosed
HSC
2013
I don't quite agree with that, because external examinations are a very limited way to assess a subject. Internal assessments exist to assess skills or topic coverage that an external examination is unlikely to capture.

You cannot claim to be good at Chemistry if you are terrible at lab work or research, but only good at studying for exams. You cannot claim to be good at English if you are terrible at verbal communication in a speech, but only good at writing essays in exam conditions. You need to be good across the broad spectrum of skills demanded by the subject. There is a reason why these types of assessments are actually mandated in the syllabus. Not to mention, universities also do a variety of assessment types which do not simulate their final exams.


The whole argument of students having a bad day is redundant, because it goes both ways. Who says student B didn't just get lucky getting their external mark of 80 as opposed to being actually smarter? Who says the person who came first internally won't have a bad day in the external exam and fall many ranks below and therefore the top external mark is from someone lower? There are infinitely many possibilities. It is impossible for any system to ever correct for anyone who suddenly gets lucky or has a bad day in an individual internal assessment or external exam.
Regarding the first part, I believe that the HSC should seek to examine only a limited framework of each subject, but to do so effectively. So, using your example, one might not be able to claim to be good at chemistry without good research skills, but they can claim to be good at chemistry theory and understanding. I believe the HSC should seek to assess the components where performance is easy to quantify, such as the latter. Research is extremely difficult to assess accurately in internals, due to the problem of data fabrication, which is almost impossible to detect if the fabricator is careful. In addition, why should someone be judged by research that may have produced no results, if they had no way of predicting it? Lastly, schools will always be incentivised to mimic the external exams in order to maximise the performance of their students; universities are different, as they are not intended to rank students but to demonstrate the acquisition (pass) or not (fail) of skills, and as such their metrics do not need to be as reliably quantifiable.

Regarding the second part, the key issue is that the student should be in control of their own performance. If they had a good or bad day, that is just them being lucky or unlucky; at the end of the day, they were responsible for what they put down on the paper. Someone else getting unlucky is not something they can control, and yet they are penalised for it. So the exam should have the broad goal of testing ability, but most importantly it should reward a test-taker's performance on the exam, whether or not it was deserved. A similar argument might run: if a student got a lucky run of questions in the internals, would it be fair for the teacher to deduct marks because the result did not accurately reflect their ability? As you note, it would be unfair and impossible to correct for.
 

ultra908

Active Member
Joined
May 11, 2019
Messages
151
Gender
Male
HSC
2020
I think the internal system is full of rubbish. Why should I be getting someone else's mark as 50% of my score if I didn't come first? Why can't they just standardize everything instead of schools having different assessments for everyone? Standardization of internals eliminates bias, and people that go to low-ranking schools aren't disadvantaged as much.
You don't get someone else's mark as 50% of yours; you get your own school mark, just moderated. Standardisation of internals eliminates a lot of the different learning opportunities that are available due to the flexibility of internal assessments. Different types of tasks are appropriate for different schools.

Going to a lower-ranking school can be disadvantageous in some ways, but moderation doesn't play a huge part. The key thing is, the distribution of internal marks is kept the same - it's just shifted and stretched to match the range of external marks. The idea that 'my mark would've been better at a top school' is a bold one. At a different school, you complete different (potentially harder) tasks, your marks are different, and the people around you perform differently.
 

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,401
Gender
Male
HSC
2006
Regarding the first part, I believe that the HSC should seek to examine only a limited framework of each subject, but to do so effectively. So, using your example, one might not be able to claim to be good at chemistry without good research skills, but they can claim to be good at chemistry theory and understanding. I believe the HSC should seek to assess the components where performance is easy to quantify, such as the latter. Research is extremely difficult to assess accurately in internals, due to the problem of data fabrication, which is almost impossible to detect if the fabricator is careful. In addition, why should someone be judged by research that may have produced no results, if they had no way of predicting it? Lastly, schools will always be incentivised to mimic the external exams in order to maximise the performance of their students; universities are different, as they are not intended to rank students but to demonstrate the acquisition (pass) or not (fail) of skills, and as such their metrics do not need to be as reliably quantifiable.
I'm not sure what you had in mind when it comes to "research" in the 2-unit HSC science subjects, because I don't think it involves what you're describing lol (though also noting that Science Extension is a very research-based subject). Semantics aside, practicality from an integrity point of view is of course a useful filter for the types of assessments that wouldn't be fair. However, that alone is not a compelling enough reason to completely disregard more diverse assessments that are not HSC-exam-like. For example, practical lab exams in Chemistry or speaking tasks in English do not simulate the HSC exam conditions in any way, but can be carried out with reasonable integrity.

Regarding the second part, the key issue is that the student should be in control of their own performance. If they had a good or bad day, that is just them being lucky or unlucky; at the end of the day, they were responsible for what they put down on the paper. Someone else getting unlucky is not something they can control, and yet they are penalised for it. So the exam should have the broad goal of testing ability, but most importantly it should reward a test-taker's performance on the exam, whether or not it was deserved. A similar argument might run: if a student got a lucky run of questions in the internals, would it be fair for the teacher to deduct marks because the result did not accurately reflect their ability? As you note, it would be unfair and impossible to correct for.
From your argument, any sort of mark adjustment system which is based on ranks and mark distribution would be considered unfair. This also includes the scaling of HSC marks (outside the moderation of internal marks) used to calculate the ATAR where relativity and mark gaps also matter.

That being said, the moderation algorithm has elements which buffer some of these impacts. Some aspects to point out:
  • For a typical-sized school cohort in a subject (20+ students, and in some cases many more), if a handful of students have a very bad exam day, the moderation process is not sensitive enough to significantly impact the majority of students who did not. In fact, even the small impact could potentially be offset by students who had a very lucky exam day.
  • Whilst individuals can have bad days, the probability that a big chunk of the cohort all have a significantly bad exam day (like dropping 10 marks from where they should be) at the same time is very low. The far more likely scenario is that the cohort is actually a poorer performer than they think they are.
  • Even in the very rare case that the distribution of the externals is extremely different from that of the internals, NESA actually intervenes, with either an adjustment (i.e. the bottom moderated mark is no longer made equal to the bottom exam mark) or an exclusion (for students whose exam performance is much lower than expected based on their performance relative to the group in the school assessments). See the NESA website for details.
Relating these points to my earlier illustrative example of the six students:
  • If only student C has a very bad exam day and gets 62 instead of 72, then student B would receive a moderated mark of 75, which is close to the original 77 in scenario 1 (see the sketch below). When you scale that up to more typically sized cohorts, note that I had to make 17% of the cohort mess up that significantly (which I think borders on very unusual), and the impact is -2 marks at most.
  • In my scenario 2, I adjusted the marks of 4 out of the 6 students by -10 marks, which leads to student B being impacted by -5 marks. This is quite an extreme scenario in reality. Maybe for very small cohorts this is possible, but when you scale that up to more typically sized cohorts, it would be extremely unusual to suggest that this performance from 67% of the cohort is due to all of them simply having a really bad exam day. It is far more likely they are a genuinely weaker cohort, in which case my original analysis in the post explains the outcome.
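Checking the first point with the toy `moderate` sketch from my earlier post (same caveats: an illustrative model of my own, not the exact production process):

```python
# Scenario 1 again, except student C bombs the external exam (72 -> 62):
internal       = [90, 78, 75, 58, 55, 40]
external_bad_c = [92, 80, 62, 60, 55, 50]
print(np.round(moderate(internal, external_bad_c)))
# [92. 75. 71. 56. 55. 50.] - student B slips from 77 to 75, a 2 mark impact
```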
Whilst I would agree that moderation is not a perfect system (no mathematical model ever is), it is a pretty fair one when it comes to putting internal assessment marks on a common scale.

Going to a lower-ranking school can be disadvantageous in some ways, but moderation doesn't play a huge part. The key thing is, the distribution of internal marks is kept the same - it's just shifted and stretched to match the range of external marks. The idea that 'my mark would've been better at a top school' is a bold one. At a different school, you complete different (potentially harder) tasks, your marks are different, and the people around you perform differently.
Pretty much agree with all of this. The primary driver of the perceived advantage at higher-ranked schools is the school environment, not a direct mathematical advantage. If you move to a higher-ranked school whose culture you are far less suited to, you will be putting yourself at a disadvantage.
 
