CPT Submission Scoring Set 3
This blog covers how I scored four sample submissions of the CPT from CollegeBoard and compares my scoring to how CollegeBoard scored them.
- Submission 1
- Submission 2
- Submission 3
- Submission 4
- What my CPT Program Will Need + Criteria
- Why a Submission Might Not Meet a Standard
- Final Reflection for the Last Scoring
Submission 1
| Reporting Category | My Scoring | College Board Scoring | Comments |
|---|---|---|---|
| Program Purpose and Function | 1 | 1 | The student earned the point for this category because they describe the purpose of the program (to entertain the user) as well as its functionality, input, and output. |
| Data Abstraction | 0 | 0 | The response did not earn the point for data abstraction: while they do show a code segment with the name of the list, their second code segment does not show that same list being used in the functionality of the program. |
| Managing Complexity | 0 | 0 | The response did not earn the point for this reporting category because the list that the student includes does not manage complexity. In addition, in their written response, they provide an inaccurate explanation for why the list could manage complexity, as the program could be created just as easily without it. |
| Procedural Abstraction | 1 | 0 | College Board did not award the point for this category because while the response does show a student-developed procedure, the written response does not mention how that procedure contributes to the functionality of the program. |
| Algorithm Implementation | 1 | 1 | This response earned the point because the algorithm includes sequencing, selection, and iteration, and the response explains how the algorithm works in enough detail that someone could recreate it. |
| Testing | 1 | 1 | The student earned the testing point, as they pass two different arguments, describe the two different conditions that the runs are tested under, and describe the two different results. |
Discrepancies Between My Scoring and CollegeBoard's Scoring (1)
For the most part, my scoring agreed with that of College Board’s, which is a good sign that I am continuing to improve my judgement when I am looking at these sample submissions. However, there was one discrepancy between our scoring, and that was for the procedural abstraction category. For this category, I was actually quite hesitant to award the response the point, as I thought that it did not go into enough detail about how the procedure works. However, I ended up deciding to give them the point, as I thought that they at least did the bare minimum for their written response portion. College Board, however, did not agree with this, as even though the response includes a student-developed procedure with parameters, it does not explain how the procedure contributes to the overall functionality of the program, thus not earning the response the point for procedural abstraction. To improve from this discrepancy, I will make sure to really read what is in the written response and try to see if I myself could recreate the procedure being described (by thinking about it of course). This way, I will know right away if the response deserves the point or not.
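To make the procedural abstraction requirement concrete, here is a minimal sketch of what a passing response's code might look like. The procedure `check_guess` and its guessing-game logic are entirely my own invention for illustration, not from the submission: the point is that the procedure takes parameters, is called somewhere, and has behavior you could explain in a written response.

```python
# Hypothetical sketch: check_guess and its game logic are invented names,
# not from any scored submission.

def check_guess(guess, secret_word):
    """Return feedback for one round of a word-guessing game.

    Because the procedure takes parameters (guess, secret_word), the same
    logic works for any word; explaining that reuse is the kind of
    "contributes to overall functionality" detail the rubric asks for.
    """
    if guess == secret_word:
        return "correct"
    elif len(guess) != len(secret_word):
        return "wrong length"
    else:
        return "incorrect"

# The rubric also wants to see a call to the procedure:
result = check_guess("apple", "apple")
```

Writing one or two sentences about why the parameters make the procedure reusable would be the written-response half that the submission above was missing.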
Submission 2
| Reporting Category | My Scoring | College Board Scoring | Comments |
|---|---|---|---|
| Program Purpose and Function | 1 | 1 | This response earned the point for this reporting category, as they properly describe the purpose, functionality, and input and output of the program. |
| Data Abstraction | 1 | 1 | The student earned the point for this category because they include a list as one of their program segments, as well as another code segment that demonstrates how the list is being used to fulfill the program's purpose and functionality. |
| Managing Complexity | 1 | 1 | This response earned the managing complexity point because they include a code segment that manages the complexity of the letters the user types in and accurately explains how much more difficult it would be to write the program without the list specified in the written response. |
| Procedural Abstraction | 0 | 1 | College Board awarded the point for this category, as the student includes their student-developed procedure with the necessary parameters and explains how the procedure and its functionality contribute to the functionality of the program. |
| Algorithm Implementation | 1 | 1 | The student earned the point for this category, as they go into great detail about how the algorithm works, which allows someone reading the description to recreate the algorithm themselves. |
| Testing | 1 | 1 | The student earned the testing point, as they pass two different arguments, describe the two different conditions that the runs are tested under, and describe the two different results. |
Discrepancies Between My Scoring and CollegeBoard's Scoring (2)
Similar to the first submission, my scoring and College Board's scoring were the same for the most part. However, the one discrepancy between our scoring was again in the procedural abstraction category. I likely made the mistake of not awarding the point because of how I scored this category for the previous submission. There, I had assumed the response did the bare minimum, but College Board did not agree; here, I must have thought that the student did not include enough detail about how the procedure fulfills their program's purpose, so I did not award them the point. College Board did award the point, however, as the response meets all of the necessary requirements on the rubric. Next time, I will make sure not to let a previous scoring distract me from the submission I am scoring right now, as the whole point of this exercise is to improve my judgement of which submissions are good and bad. This way, I will know exactly what I need to do to get full credit on the Create Performance Task.
Submission 3
| Reporting Category | My Scoring | College Board Scoring | Comments |
|---|---|---|---|
| Program Purpose and Function | 1 | 1 | The response earned the point for this row, as they are able to distinguish between the purpose and the functionality of the program. They also describe the input, output, and functionality as demonstrated in the video submission. |
| Data Abstraction | 1 | 0 | The student did not get the point for this row, as there is no code segment that demonstrates data being used from the stateList. They also give an inaccurate description of the code segment containing the name of the list, stateList. |
| Managing Complexity | 0 | 0 | Even though the response includes a code segment that manages complexity with a list, the written portion gives very generic reasons for why it would be difficult to create the code without a list (it would take much longer, be very inefficient, etc.). |
| Procedural Abstraction | 0 | 0 | The student did not earn the point for this category because despite having a student-developed procedure, the procedure does not have at least one parameter. Additionally, the second code segment does not show the procedure updateScreen actually being called. |
| Algorithm Implementation | 0 | 0 | The response did not earn the algorithm point because it includes sequencing and selection but is missing iteration. Furthermore, the written response does not specify how the index value is set based on the match and what value is set for each U.S. state. |
| Testing | 0 | 0 | The student did not earn the point for this row because rather than describing two calls to the procedure, they describe two calls FROM the procedure. Also, they describe two conditions being performed by the user rather than the conditions being tested by the parameter. Additionally, the response does not specify the two calls to the given procedure and simply describes the result that shows up on the screen. |
Discrepancies Between My Scoring and CollegeBoard's Scoring (3)
There was only one discrepancy between my scoring and CollegeBoard's, and that was for the data abstraction point. When I looked into the sample response for data abstraction, I debated quite a bit on whether the response should have earned the point or not. This was because I was not sure if the different lines of code were supposed to represent data being used from the list. Additionally, I was not paying attention to the fact that the student's description did not match what one of their code segments showed. The code segment shows that the list stateList represents the state names, but the written response describes the list as representing the information that the user will see about whichever state they choose. Although I can see that I am able to decide if a response for a certain category is good or bad, I still need to work on comparing what is shown in the code segments to what the student writes in their response. This way, I will know if both line up with each other, which will help me avoid the mistake of having the code and written response contradict each other when I do my own Create Performance Task.
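To illustrate the two pieces the data abstraction row is checking for, here is a hedged sketch. The names `state_list` and `find_state` are hypothetical, only loosely inspired by the submission's stateList: one segment shows the list itself, and a second shows data from that list actually being accessed to do something.

```python
# Hypothetical sketch; state_list and find_state are invented names.

# Segment 1: a code segment showing the list itself.
state_list = ["California", "Texas", "Ohio"]

# Segment 2: a code segment showing data from the list being accessed
# to fulfill the program's purpose (checking whether a state is known).
def find_state(name):
    for state in state_list:
        if state == name:
            return True
    return False
```

The submission above only had the first kind of segment; without the second, the list is declared but never shown being used, which is why the point was lost.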
Submission 4
| Reporting Category | My Scoring | College Board Scoring | Comments |
|---|---|---|---|
| Program Purpose and Function | 1 | 1 | The response earned the point for this category because they state the purpose of the program (to lessen boredom), demonstrate the program running in the video (movement of the boat as the user presses keys) and its functionality (score keeping), and describe the input (pressing the 'a' and 'd' keys) and output (the boat's left and right movement). |
| Data Abstraction | 1 | 1 | This response earned the data abstraction point because they show two code segments that meet the criteria: one that shows data being stored in the list and one that shows data being accessed from the list in a loop. The response identifies the name of the list (fishtypes) and specifies what is stored in it (the type of fish and the number of each specific fish caught). |
| Managing Complexity | 1 | 1 | The response earned the complexity point because it includes a code segment that manages complexity with a list (of the types of fish and the number of each type caught). Furthermore, the response accurately explains how the code could be written without lists and specifies how few changes would need to be made to the program if another fish were added to the list. They compare this to not using a list, which would force them to make a new variable for every fish added, making the program far more inefficient and unnecessarily complex. |
| Procedural Abstraction | 1 | 1 | The student earned the procedural abstraction point because they include a student-developed procedure (clone+movement+range) with five parameters (more than the one required) and a call to this procedure. Additionally, they describe the functionality of the procedure and how it fulfills the program's overall functionality and purpose. |
| Algorithm Implementation | 1 | 1 | This response earned the point for this category because the algorithm includes sequencing, selection, and iteration. Additionally, the response explains how the algorithm works in enough detail that someone else reading the description could recreate it themselves. |
| Testing | 0 | 0 | The response did not earn the point for the testing category, meeting none of the three criteria. The response did not describe the arguments passed through the parameters, and although it did describe the conditions being tested, they do not correspond to how the parameters are implemented in the program. Additionally, rather than describing the result of each call, they only describe the code segments for each call. |
Discrepancies Between My Scoring and CollegeBoard's Scoring (4)
Fortunately, there were no discrepancies between my scoring and CollegeBoard's, which to me is a great sign that I understand what CollegeBoard considers a good or bad submission for the Create Performance Task. Although I was hesitant about some categories while scoring this response, I made the right choice each time, whether that meant giving or withholding the point, as all of my choices matched CollegeBoard's. For example, I was hesitant about whether to give the response the testing point, as I was not sure if they had actually described the results of each call in the fishing program. Much of their response regarding the results appeared to talk more about the code segments involved than about what is actually output. Otherwise, I would say that I felt more confident in my decisions for the other rows.
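As a concrete picture of what the testing row wants, here is a minimal sketch (the procedure `score_catch` and its scoring rule are made up, not taken from the fishing submission): two calls to the same procedure with different arguments, where each argument exercises a different condition and produces a different result you can describe.

```python
# Hypothetical sketch; score_catch and its scoring rule are invented.

def score_catch(fish_type):
    # The condition tested by the parameter: whether the fish is rare.
    if fish_type == "rare":
        return 10  # points for a rare fish
    return 1       # points for any other fish

# First call: tests the "rare fish" condition; the result is 10.
first_result = score_catch("rare")
# Second call: tests the "common fish" condition; the result is 1.
second_result = score_catch("common")
```

Notice how each comment names the argument, the condition the parameter tests, and the result of the call; that is exactly the three-part structure the rubric asks the written response to describe.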
What my CPT Program Will Need + Criteria
To recap, below are the six criteria that CollegeBoard will be looking for in my Create Performance Task:
- Program Purpose/Function
- Data Abstraction
- Managing Complexity
- Procedural Abstraction
- Algorithm Implementation
- Testing
In order for me to get a full score on the CPT, my submission needs to meet all six of these criteria. While I do not have any finalized ideas for what I am going to do for my CPT (yet), below are a few that I have brainstormed and thought about for quite some time:
- Some kind of fun game that can also help the user learn something
- Program that allows a user to track their daily activity, water intake, food intake, etc.
- Maybe a program that takes a data set and calculates the mean, median, mode, standard deviation, etc.
- A program that lets a user keep track of things that they want to collect
- A program/series of pages that behave as an arcade full of games (hangman, matching game, etc.)
Regardless of what I end up doing, my program needs to meet these six criteria.
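For instance, the statistics idea from the list above could start from something as small as this sketch, using Python's standard `statistics` module (the function name `summarize` is my own placeholder, not final CPT code):

```python
import statistics

def summarize(data):
    # Compute the basic descriptive statistics for a data set.
    return {
        "mean": statistics.mean(data),
        "median": statistics.median(data),
        "mode": statistics.mode(data),
        "stdev": statistics.stdev(data),  # sample standard deviation
    }

# Example: summarize([1, 2, 2, 3, 4]) gives a mean of 2.4,
# a median of 2, and a mode of 2.
```

Even a sketch this small already touches several rubric rows: the data set is a list (data abstraction), and `summarize` is a student-developed procedure with a parameter (procedural abstraction).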
Why a Submission Might Not Meet a Standard
While the program not functioning as intended can be one reason that a submission fails to meet a standard, what the student includes in their written response can also be the reason. If the student is not specific about the program's purpose, function, algorithms, etc., CollegeBoard likely cannot award them the point because the response is too vague. Another reason could be that the student did not thoroughly review the rubric themselves to make sure that they met all the criteria. Without reading the rubric, one cannot be certain of having met a standard, which is why it is always important to review the rubric before getting started on the project. This way, you will not have to start from scratch if you find out that your project does not meet the criteria on the rubric.
Final Reflection for the Last Scoring
I felt that I learned a lot from looking at and scoring these sample submissions. First, by scoring more of these each week, I was able to improve my ability to distinguish between a response that should earn a point and a response that should not. I also got a better idea of what CollegeBoard believes is a good or bad example of a Create Performance Task submission. Additionally, I learned from many of the mistakes I have made since I first began scoring these, and I could see this in how there were fewer and fewer discrepancies between my scoring and CollegeBoard's. Most important of all, I can now apply what I have learned from scoring these to my own Create Performance Task, as I have plenty of samples to look back on to ensure that I can get full credit in each category. Lastly, I could see myself getting better at really reading the written response and comparing it to the code segments embedded in it, as looking at both told me whether the student understood what their code does and whether what they were saying was actually true. Understanding how your code works is certainly an important skill in both this class and beyond, as it shows people that you not only put work into the project but also took the time to make sure that you know what you are doing.