Opting Out or Weighing In: Decision Directed SAT Score Submission
a Research Report by Ryan Mullin
Word Count: 4676
“Not everything that is faced can be changed, but nothing can be changed until it is faced.” —James Baldwin
Introduction
The college admissions process is an important part of a high school student’s academic career, one that shapes their future life and career paths. The process is the result of many factors working with and alongside one another. According to Mark Freeman, a researcher at Common App, among these factors, the decision of whether or not to submit scores from the SAT®1, a widely used college entrance exam, has garnered increasing attention in recent years. Students must now weigh strategic planning, test-optional policies, institutional requirements, economic constraints, and perceptions of standardized testing’s fairness and accuracy. This study explores these motivations, particularly within the context of high school students in southern New Jersey.
For this study, the concept of “test-optional policies” refers to procedures that “allow prospective students to choose whether or not to submit their SAT or ACT scores as part of their application process” (Lopez and Ward).
Literature Review
Strategic Planning in Score Submission
One of the most significant factors influencing a student’s decision to submit or withhold SAT scores is strategic consideration. For example, a student may be more likely to withhold a score they believe falls short of the standards of the institution they are applying to. “New Evidence on Recent Changes in College Applications, Admissions, and Enrollments,” a peer-reviewed study of recent trends and changes in applications to colleges within the United States, found that the single greatest predictor of whether a student submits an SAT score is how well that score competes with the institution’s average (Howell et al., 2022). Similarly, “Making SAT Scores Optional in Selective College Admissions: A Case Study” concluded that students with higher SAT scores relative to their GPAs were more likely to submit their scores, while those with higher GPAs relative to their SAT scores were more likely to withhold them (Robinson and Monks, 2005). However, these studies focused only on more selective institutions, which limits how far their results can be generalized across university types.
These findings align with “The Role of Admissions Practices in Diversifying Honors Populations: A Case Study,” which examined how admissions practices could diversify college honors programs and found that students perceive the decision to submit or withhold scores as a “risk-reward decision”: students weigh the potential benefits of submitting their scores against the possibility of undermining their application and lowering their perceived chances of admission (Radasanu and Barker, 2021). However, that study drew on data from one university’s honors program, limiting how far it may be generalized to other populations. The findings are complemented by the results of “Why ACT Scores Are Low in Chicago and What It Means for Schools,” which found that when students underperform on standardized tests such as the ACT®2 (another popular college entrance exam), they often choose not to submit their scores because they believe doing so would hurt their chances of admission (Allensworth et al., 2008).
Test-Optional Policies and Application Trends
Test-optional policies have fundamentally reshaped the college admissions process, thereby prompting studies on their implications. Studies of test-optional policies such as “Investigating the Effects of Test-Optional Admissions Policies” and “Test-Optional Policies: Implementation Impact on Undergraduate Admissions and Enrollment” demonstrate that test-optional policies have led to increased application volume, but only modest gains in applicant diversity (Paris et al., 2022; Pellegrino, 2020). However, these changes have not resulted in proportional increases in underrepresented minority enrollment among admitted students, as highlighted by Christopher T. Bennett, an Education Research Analyst at RTI International. These findings demonstrate the limited effectiveness of these policies in combating systemic inequalities (Bennett, 2022).
The effects documented in these studies were amplified by the COVID-19 pandemic, which forced many SAT test centers to close and, in turn, pushed most colleges to adopt test-optional admissions. These changes have also influenced student decisions. “Applying to college in a test-optional landscape,” a national study conducted by Common App, found that external factors such as the COVID-19 pandemic were key reasons for the stark decrease colleges saw in students choosing to submit SAT scores, suggesting that practical constraints, such as an inability to take the exam in person, may outweigh academic considerations in a student’s decision to submit SAT scores (Freeman et al., 2021). These studies effectively show how logistical challenges may reduce test score submission; however, they focus on singular events, which may not accurately reflect the entire college admissions or national landscape.
Institutional Requirements’ Influence on Score Submission
Another reason students may choose to submit scores is that they were required to take the SAT at some point by their educational district or institution. Simply having scores available because the test was mandated may lead students to use them when applying to college. “The Effect of Mandatory College Entrance Exams on Postsecondary Attainment and Choice,” a study examining the effects of making the ACT college entrance exam mandatory for high school students, supports this with results demonstrating that the mandatory testing policy in Michigan led to a small but noticeable increase in students using their scores to apply to college (Hyman, 2017). These findings are supplemented by “The Maine Question: How Is 4-Year College Enrollment Affected by Mandatory College Entrance Exams,” which found that making the SAT mandatory in Maine as part of the “No Child Left Behind Act” led to a 2-3% increase in full-time college enrollment (Hurwitz et al., 2015). While these studies show how institutional requirements affect submission and enrollment rates, each focuses on a single state, which may not reflect other regions.
Economic and Logistical Barriers
Economic barriers influence both test-taking and score-submission behavior in students. “Surprising Ripple Effects: How Changing the SAT Score-Sending Policy for Low-Income Students Impacts College Access and Success,” a study evaluating a CollegeBoard policy change that increased the number of free3 score reports available to low-income students, showed that the additional reports increased the likelihood that students would submit their scores for consideration when applying to colleges (Hurwitz et al., 2017). Similarly, “The Effects of Mandatory and Free College Admission Testing on College Enrollment and Completion” indicated that removing barriers to testing, such as making the tests free, allowed students from underrepresented and underprivileged backgrounds to realize their potential and pursue more selective institutions (Vansuch, 2017). However, these studies focus mainly on national trends, which give little insight into regional variation.
On the other hand, “Circumscribed Agency: The Relevance of Standardized College Entrance Exams for Low SES High School Students,” a study that interviewed Black and Latino students applying to college from five high-poverty high schools, documented that students from these backgrounds often lack awareness of the SAT and other standardized tests, which decreased participation and limited the use of scores in the college application process (Deil-Amen and Tevis, 2010). This study effectively demonstrates the effects of economic and logistical barriers but focuses solely on Chicago public schools, which limits its broader applicability. Similarly, “Untested Admissions: Examining Changes in Application Behaviors and Student Demographics Under Test-Optional Policies” found that a student’s place in their community, namely their economic background and minority status, may prevent them from taking the SAT and, on a broader level, from considering a college education at all (Bennett, 2022). Together, these findings demonstrate how economic and logistical factors play into a student’s decision to submit or withhold scores when applying to colleges.
Perceived Accuracy and Fairness of Standardized Tests
Finally, how students view the fairness and accuracy of standardized tests such as the SAT may significantly influence their decisions to submit scores. James A. Shepperd, a psychology professor at the University of Florida who studied student perceptions of SAT scores, reported that students who received lower scores were more likely to view their results as inaccurate representations of their abilities, making them less likely to submit those scores and thereby inflating the average of submitted scores (Shepperd, 1993). Similarly, “Selection Bias in College Admissions Test Scores” (Clark et al., 2009) and “Gender Bias in Standardized Tests: Evidence From a Centralized College Admissions System” (Saygin, 2020) identified systemic racial and gender biases in standardized college entrance exams, which lower confidence in the tests’ accuracy and validity.
These conclusions are supported by Jesse Rothstein, a professor at the University of California, Berkeley, and Jonathan Baron, a professor at the University of Pennsylvania, who each found that a student’s high school GPA and other factors better predict college performance than SAT scores (Rothstein, 2005; Baron and Norman, 1992). These studies question the validity of standardized tests in college admissions and may explain why more students from underrepresented or underprivileged backgrounds choose to withhold scores from consideration. However, these studies rely primarily on historical data, which may not reflect today’s landscape.
Conclusion, Research Gap, and Research Question
The existing body of literature provides an understanding of students’ motivations to submit or withhold SAT scores in the college admissions process, including academic performance, test-optional policies, institutional requirements, economic background, and perceptions of fairness and utility. However, most of these studies were conducted at the national, state, or institutional level, leaving a gap regarding the more localized population of southern New Jersey high school students. This study aims to explore the factors in this population’s decision-making process when applying to college, ultimately answering the research question: what prompts a student in a southern New Jersey high school to decide whether or not to submit their SAT scores when applying to college?
Methodology
Hypothesis
Based on the reviewed literature, the primary hypothesis was that multiple factors shape a student’s decision of whether to submit SAT scores when applying to college: the student’s score relative to the average score of admitted students at the colleges they applied to, their ability to take the test, their perceptions of the SAT’s accuracy, and whether or not the SAT is required for admission.
Overview
The literature offers no clear consensus on how to carry out a study of this nature: some studies used qualitative methods, while others used quantitative methods. A mixed-methods approach was therefore deemed necessary. To generate the best data to answer the research question, this study used a survey approach comprising three main steps: 1) designing a survey to gather the necessary information, 2) distributing the survey to regional public and private high schools, and 3) analyzing the data collected from the responses.
Survey Design
To answer the research question as accurately as possible, a survey was deemed the most effective method for gathering the necessary information from the target population: this study requires input from individuals about their personal choices, and a survey is the most practical way to collect such input at scale. The survey was created on the online survey platform Google Forms4.
Before taking the survey, respondents were asked to read and agree to an informed consent form that outlined the purpose of the study, eligibility to participate, procedures, confidentiality, the voluntary nature of participation, the risks, benefits, and compensation of the study, and the contact information of the primary researcher.
After agreeing to the informed consent form, respondents were presented with two screening questions, which ensured that the respondents were part of the target population of the survey by asking if they had both taken the SAT exam and were currently enrolled as seniors in a southern New Jersey high school. If one or more of these criteria were not met, the respondents were asked to exit the form and answer no further questions.
In the second section of the survey, respondents were asked a series of questions about their SAT scores, the college application process, and whether or not they submitted scores. They were first asked for their highest “Reading and Writing” and “Math” scores5. Next, they were asked whether they chose to submit their SAT score to “All”, “Some”, or “None” of the colleges to which they had applied. They were then presented with an optional question asking them to list all of the colleges they applied to.
If the students answered “All” or “Some”, they were asked what factors influenced their decision to send their SAT scores. This question asked the respondents to “check all that apply” and provided some common reasons, such as beliefs that it would strengthen their application, a belief that it was competitive with the average of the college they were applying to, and applying to colleges that required SAT scores. In addition to this, an “other” option was provided for respondents to express their own reasoning, if applicable.
Below is an example of this question:
Figure 01: Common Reasons to Submit Scores Survey Question
If students answered “None”, they were asked a similar question about what factors influenced their decision not to send scores. This also provided common responses, such as doubts over the test’s accuracy, the use of test-optional policies, and a belief that scores would weaken their application. Again, an “other” option was provided for responses not listed.
If students answered “Some”, they were asked to answer both of the aforementioned questions.
The respondents were then asked if any college admissions counselors, guidance counselors, teachers, social media influencers, peers, or parents influenced their decision on whether to submit their SAT scores. If they answered “Yes” to this question, they were asked what advice they were given in an open-ended question.
Finally, respondents were asked an open-ended question that allowed them to provide any further information about their experience with SAT score submission.
Survey Distribution
After a short pilot survey was run with a small group of peers in the researcher’s AP Research class, the survey was distributed via email to seniors at the researcher’s high school. It was then expanded, again via email, to six other local public and private high schools in the southern New Jersey region. After the data collection period concluded, 30 responses had been recorded.
Data Analysis
After all of the responses were collected, the results were exported from Google Forms to Google Sheets. This allowed the responses to be analyzed computationally and graphs and other data visualizations to be generated. After the data was exported to the spreadsheet, some minor data cleanup was performed, such as expanding commonly known acronyms6 to help with data uniformity.
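For illustration, this kind of cleanup could also be scripted rather than done by hand in the spreadsheet. The sketch below, written in the style of the appendix programs, expands the shorthand names from Table 06 in the colleges-applied-to responses; the survey column name and the script itself are illustrative assumptions, not the procedure actually used.

```python
import pandas as pd

# Excerpt of the abbreviation map from Table 06 (Part C of the appendix).
EXPANSIONS = {
    "Bama": "University of Alabama",
    "BC": "Boston College",
    "TCNJ": "The College of New Jersey",
    "UVA": "University of Virginia",
}

def expand_abbreviations(cell):
    """Expand known shorthand names in a comma-separated list of colleges."""
    if not isinstance(cell, str):
        return cell  # leave blank (NaN) responses untouched
    colleges = [c.strip() for c in cell.split(",")]
    return ", ".join(EXPANSIONS.get(c, c) for c in colleges)

df = pd.read_csv("[CSV FILE]")  # same placeholder as the appendix scripts
college_col = "List all of the colleges you applied to."  # hypothetical column name
df[college_col] = df[college_col].apply(expand_abbreviations)
```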
The first step of data analysis was to examine the responses to each closed-ended, quantitative question. These responses were statistically analyzed to characterize the distribution of answers, allowing for a clearer understanding of trends and patterns within the dataset. Visualizations of this data, such as bar graphs and pie graphs, were created in the Google Sheets spreadsheet.
Next, responses to open-ended questions were reviewed. Common themes and keywords were identified to process this qualitative data, and responses were categorized into thematic sections7 based on recurring topics. This helped to summarize the overall sentiment of the open-ended responses.
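To illustrate the tallying step, the sketch below shows how theme counts like those reported in Tables 04 and 05 could be computed once each response has been hand-coded; the theme labels listed are placeholders, not actual response data.

```python
from collections import Counter

# Hypothetical hand-assigned theme labels, one per open-ended response
# (the real responses and their codings appear in Part D of the appendix).
coded_responses = [
    "Score based considerations",
    "Application strength considerations",
    "Score based considerations",
    "Null responses",
]

# Tally how often each theme occurs, most frequent first.
for theme, count in Counter(coded_responses).most_common():
    print(f"{theme}: {count}")
```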
Finally, correlations within the responses were examined. This was done by exporting the spreadsheet data to a downloadable CSV file and then processing it with a program written by the researcher in the Python programming language8. This was used to compute the average SAT score of respondents who submitted their scores to “All”, “Some”, or “None” of their colleges. A second Python program was used to perform statistical operations on the SAT score responses.
Results & Discussion
Response Demographics
The survey received a total of 30 responses, all from seniors at southern New Jersey high schools during the 2024-2025 school year. All respondents agreed to the informed consent form and had taken the SAT exam at least once.
Quantitative Results
Both questions asking for respondents’ highest earned SAT scores (“Reading and Writing” and “Math”) received 30 responses.
Among these 30 responses, the mean “Reading and Writing” SAT score was 634.33, with a median of 650. Scores ranged from 400 at the lowest to 780 at the highest, for a total range of 380. The standard deviation of this data was 94.86, which suggests a moderate spread in performance. The “Reading and Writing” scores had an IQR of 90, with the middle half of scores falling between 600 and 690, toward the higher end of the observed range.
As for the “Math” SAT scores, the mean score was 578.33, with a median of 595. Scores ranged from 370 at the lowest to 790 at the highest, for a total range of 420. The standard deviation of this data was 101.59, and the IQR was 137.5.
Table 01: Statistical comparison of R&W and Math scores
| | “Reading and Writing” scores | “Math” scores |
|---|---|---|
Mean | 634.33 | 578.33 |
Median | 650 | 595 |
Standard Deviation | 94.86 | 101.59 |
Minimum | 400 | 370 |
Maximum | 780 | 790 |
Range | 380 | 420 |
IQR | 90 | 137.5 |
Q1 | 600 | 512.5 |
Q3 | 690 | 650 |
15 of the 30 respondents indicated that they submitted their SAT scores to “some” of the colleges they applied to, 9 submitted scores to none, and 6 submitted scores to all. This shows the popularity of test-optional policies: 80% of respondents (those answering “Some” or “None”) took advantage of them by withholding scores from at least one college, submitting only where they felt it was necessary.
Figure 02: “Did you submit your SAT score to all, some, or none of the colleges you applied to?” question results
The average SAT score for students who submitted their SAT scores to all of the colleges that they applied to was 1303.33. The average score of students who chose to submit it to some of the colleges they applied to was 1284, and the average score for students who submitted their scores to none of the colleges they applied to was 1033.33. These findings suggest that students with higher scores may be more inclined to submit them when applying to colleges.
16 of 30 respondents reported that their decision to submit their SAT score was influenced by college admissions counselors, guidance counselors, teachers, social media influencers, peers, or parents, while the other 14 did not report being influenced by these groups. This finding is echoed in the qualitative data gathered by this survey. However, because the split between the two responses was nearly even, the overall influence of these groups in larger populations may be limited.
Figure 03: “Did any college admissions counselors, guidance counselors, teachers, social media influencers, peers, or parents influence your decision on whether to submit your SAT score?” question results
Qualitative Results
Out of the 30 survey respondents, 25 answered the question asking why they chose not to submit their SAT scores. The most popular reason was a belief that including an SAT score made the application weaker as a whole. This was followed by a belief that the application was strong without SAT scores, doubts that the SAT accurately reflects academic potential, and an SAT score below the average of the college applied to, respectively. The number of times each reason was selected can be found in the table below.
Table 02: Factors influencing withholding of SAT scores
Factor Influencing Withholding of SAT Score for Consideration | # of Times Chosen |
---|---|
SAT scores make student’s application weaker | 18 |
Belief that the application is strong enough without SAT scores | 16 |
Doubts of SAT’s ability to show academic potential | 13 |
SAT score was lower than the average of the college applied to | 12 |
Need for perfect scores | 1 |
Financial implications of sending scores | 1 |
Guidance counselor influence | 1 |
Use of higher ACT scores | 1 |
All of the above results (besides the need for perfect scores) align with the common reasons for withholding SAT scores discovered in the literature review. These findings show how perceptions of institutional standards, test-optional policies, and the perceived accuracy of standardized tests shape withholding behavior, and how students treat SAT score submission as a “risk-reward” decision.
As for the question asking students why they chose to submit their SAT scores, 23 responded9. The most popular reason was knowing that their SAT score was competitive with or above the average of the college they were applying to. The next most common reason was applying to colleges or programs10 that required SAT scores as part of the admissions process. This was followed by beliefs that their SAT scores strengthened their application and accurately reflected their academic performance, respectively. The full results may be found in the table below.
Table 03: Factors influencing submission of SAT scores
Factor Influencing Submission of SAT Score for Consideration | # of Times Chosen |
---|---|
SAT scores were competitive with the average of the college applied to | 12 |
Applied to colleges or programs that require SAT scores | 10 |
Belief that SAT scores strengthened application | 8 |
Belief SAT was an accurate reflection of the applicant’s academic ability | 5 |
Null responses | 2 |
These results also align with the common reasons for submitting SAT scores discussed in the literature review. They show the impact that test-required policies, confidence in the strength of one’s score (especially relative to the average of the college applied to), and confidence in the accuracy of the SAT have on score submission behavior.
Students who reported that their decision was influenced by others were asked what advice those people gave them; 16 people11 responded. After these responses were coded for common themes12, the most frequent type of advice pertained to score-based considerations. This was followed by application strength considerations, school policy influence, general advice not to submit, and financial considerations, respectively. The frequency of these responses is summarized in the table below.
Table 04: Themes prevalent in advice given by student’s peers
General theme of advice given by peers | Description of theme | # of Occurrences |
---|---|---|
Score based considerations | Advice to submit only to schools where the student’s score was above the school’s average accepted score, and vice versa | 5 |
Application strength considerations | Whether or not their college application would be stronger if they included SAT scores | 3 |
School policy influence | Guidelines and standards set by school guidance counselors on submitting scores | 3 |
General advice not to submit | Advice was given to students telling them not to submit, with no further explanation provided | 2 |
Financial Considerations | Saving money by not sending scores | 1 |
Null responses | Responses that were left blank or failed to properly answer the question | 2 |
Finally, respondents were given the opportunity to share anything else they deemed important about their experience submitting or withholding SAT scores. This question received 14 responses13, which were also analyzed for common themes14, the most common being strategic decision-making and financial considerations surrounding the college application process. The full results are displayed in the table below.
Table 05: Themes prevalent in the open-ended question asking for further information
General theme of response | Description of theme | # of Occurrences |
---|---|---|
Strategy surrounding score submission | How scores were used when applying; submitting where necessary | 3 |
Financial Considerations | Remarks surrounding the fiscal impact of the college application process | 2 |
Personal experiences | Feelings surrounding the accuracy and test-taking experience of the SAT | 2 |
Technical difficulties | Technical difficulties encountered in sending SAT scores | 1 |
General observations | Remarks surrounding the outcomes of respondents’ college application process | 1 |
Null responses | Responses that were left blank or failed to properly answer the question | 5 |
Discussion of Data
From the data collected, the most common factors behind a student’s decision to submit or withhold their SAT score were how the score compared to the average of the college applied to and how strong the application seemed with or without it. Whether based on their own belief or the advice of others, respondents whose scores were lower than a college’s average tended to withhold them, while those with higher scores tended to submit them. Likewise, students who believed their application was stronger with SAT scores tended to use them, while those who believed their scores weakened their application tended to withhold them.
Further, beliefs over the accuracy and ability of the SAT to showcase a student’s academic potential also played a large part in this decision-making process. One of the most common reasons students chose not to submit SAT scores was that they believed that it was not accurate enough to showcase their academic potential, while those who did submit their SAT scores viewed it as accurately reflecting their skills.
The presence (or absence) of test-optional policies also influenced students’ decisions regarding score submission. Colleges and programs that require the SAT as part of their admissions process accounted for a significant portion of the responses from students who reported submitting their SAT scores. Meanwhile, test-optional policies are what allow students to withhold SAT scores in the first place, so it is reasonable to assume that every respondent who reported withholding their SAT scores utilized these policies.
Guidance counselors, parents, and peers also played into students’ decisions. Nearly half of the respondents reported being influenced by these groups in some way, often shaping their final decision to submit or not.
The results showed that the most prevalent reasons included strategic planning/decision-making, test-optional policies, institutional requirements (test-required policies), economic and logistical barriers, and the perceived accuracy of the SAT. All of these reasons align with those discussed in the literature review, highlighting the significance of these findings.
Given this, it is reasonable to conclude that this study has collected a sufficient amount of data to answer the research question. The findings suggest that students choose to submit SAT scores based on a combination of strategic planning, institutional requirements, and personal perceptions of test accuracy. Specifically, those who perceive their scores as strengthening their applications are more likely to submit, while those who believe their scores may lessen their chances of getting into college are more likely to withhold, given they are operating under test-optional policies.
Limitations and Implications
Limitations
One of the primary limitations of this study is the sample size. Because only thirty people responded to the survey, the results may not generalize to the broader South Jersey high school population. Additionally, the responses were likely concentrated in one high school: because a majority of responses were received before the survey was further distributed to other high schools, most likely came from the first school it reached.
Another limitation comes from methodological gaps in the survey design. Because the question asking what colleges the respondents applied to was optional, it left a gap in some responses, making the analysis of this data and connection to other questions especially difficult. Without this data, the study could not fully address its research question regarding the motivations behind SAT score submission in the context of institutional policies or the perceived prestige of colleges.
Additionally, some questions on the survey may have been misinterpreted, leading to unclear or misleading results, which affects the consistency of the responses. Further, some questions, specifically the ones asking for the reasons why students chose to or not to submit their scores, may have had a response bias. Because some common reasons discussed in the literature review were already provided, respondents may have been more inclined to select these answers over providing their own thoughts. This has the potential to skew the results toward the provided answers, leaving out self-reported reasons.
Implications
Despite these limitations, this study effectively answered what it set out to research. The findings suggest that students consider multiple aspects, including perceived competitiveness, institutional policies, and personal confidence in their scores, when deciding whether to submit their SAT results. Because of this, there are several implications to discuss.
Firstly, for colleges and universities, this study shows how test-optional policies affect students’ decision-making. These institutions may need to re-evaluate how they communicate the role that standardized tests such as the SAT play in the admissions process as a whole. If students do not know how their scores will affect their overall application, or whether submitting will benefit them at all, they may make decisions based on assumptions rather than clear guidance.
Next, for high school guidance counselors and independent college counselors, this study demonstrates the need for individualized counseling. Without personalized guidance, students may be making SAT score submission decisions based on incomplete information or misconceptions about how they will be used.
Beyond these groups, this study also has implications for individual students going through the college admissions process. The process is stressful enough as it is, and the added weight of such a consequential decision makes these findings all the more important. If students are properly educated about test-optional policies and how scores are used, their decisions may change.
Conclusion
This study supports the idea that SAT score submission cannot be narrowed down to a single reason, and is instead influenced by a combination of policy, perception, and influence from others. These conclusions contribute to the broader conversation on standardized testing in the college admissions field and highlight the need for continued study on how test-optional policies affect students’ choice in and access to higher education.
Future studies should focus on broader populations or examine these trends in other regions of the United States. Conducting similar studies with larger populations would help refine these findings and strengthen their significance. Other topics of interest include examining these factors over multiple application cycles, investigating whether students who made one decision come to regret it later in their academic careers, or examining whether students who made a given submission choice perform differently in college (GPA, retention rates, career outcomes) than those who made the opposite choice.
Works Cited
Allensworth, Elaine, et al. “From High School to the Future: ACT Preparation—Too Much, Too Late. Why ACT Scores Are Low in Chicago and What It Means for Schools.” Consortium on Chicago School Research, 2008, https://consortium.uchicago.edu/sites/default/files/2018-10/ACTReport08.pdf. Accessed 9 October 2024.
Baron, Jonathan, and M. Frank Norman. “SATs, achievement tests, and high-school class rank as predictors of college performance.” Educational and Psychological Measurement, vol. 52, no. 4, 1992, pp. 1047-1055, https://www.sas.upenn.edu/~norman/SAT.pdf. Accessed 10 October 2024.
Belasco, Andrew S., et al. “The Test-Optional Movement at America’s Selective Liberal Arts Colleges: A Boon for Equity or Something Else?” Educational Evaluation and Policy Analysis, vol. 37, no. 2, 2015, pp. 206-223. PennState CiteSeerX, https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=d58f7410848683a6db7b1daf6dab04312cb39b0e. Accessed 4 October 2024.
Bennett, Christopher T. “Untested Admissions: Examining Changes in Application Behaviors and Student Demographics Under Test-Optional Policies.” American Educational Research Journal, vol. 59, no. 1, 2022, pp. 180-216, https://doi.org/10.3102/00028312211003526.
Clark, Melissa, et al. “Selection bias in college admissions test scores.” Economics of Education Review, vol. 28, no. 3, 2009, pp. 295-307, https://jesse-rothstein.com/wp-content/uploads/2024/01/clark-rothstein-schanzenbach-EER2009.pdf. Accessed 7 October 2024.
Deil-Amen, Regina, and Tenisha L. Tevis. “Circumscribed Agency: The Relevance of Standardized College Entrance Exams for Low SES High School Students.” The Review of Higher Education, vol. 33, no. 2, 2010, pp. 141-175. ResearchGate, https://www.researchgate.net/publication/236711026_Circumscribed_Agency_The_Relevance_of_Standardized_College_Entrance_Exams_for_Low_SES_High_School_Students. Accessed 5 October 2024.
Freeman, Mark, et al. “Applying to college in a test-optional landscape.” Common App, 8 September 2021, https://s3.us-west-2.amazonaws.com/ca.research.publish/Research+briefs+2020/20210908_Paper4_TestOptional.pdf. Accessed 3 October 2024.
Goodman, Sarena. “Learning from the test: Raising selective college enrollment by providing information.” Review of Economics and Statistics, vol. 98, no. 4, 2016, pp. 671-684, https://www.federalreserve.gov/pubs/feds/2013/201369/201369pap.pdf. Accessed 7 October 2024.
Howell, Jessica, et al. “New Evidence on Recent Changes in College Applications, Admissions, and Enrollments.” CollegeBoard Research, July 2022, https://research.collegeboard.org/media/pdf/ARC-Research-Brief.pdf. Accessed 3 October 2024.
Hurwitz, Michael, et al. “The Maine Question: How Is 4-Year College Enrollment Affected by Mandatory College Entrance Exams?” Educational Evaluation and Policy Analysis, vol. 37, no. 1, 2015, pp. 138-159, https://journals.sagepub.com/doi/pdf/10.3102/0162373714521866. Accessed 6 October 2024.
Hurwitz, Michael, et al. “Surprising Ripple Effects: How Changing the SAT Score-Sending Policy for Low-Income Students Impacts College Access and Success.” Educational Evaluation and Policy Analysis, vol. 39, no. 1, 2017, pp. 77-103, https://journals.sagepub.com/doi/pdf/10.3102/0162373716665198. Accessed 11 October 2024.
Hyman, Joshua. “ACT for all: The effect of mandatory college entrance exams on postsecondary attainment and choice.” Education Finance and Policy, vol. 12, no. 3, 2017, pp. 281-311, https://doi.org/10.1162/edfp_a_00206. Accessed 8 October 2024.
Paris, Joseph H., et al. “The Impact of Optional: Investigating the Effects of Test-Optional Admissions Policies.” Journal of College Access, vol. 7, no. 2, 2022, pp. 7-29, https://files.eric.ed.gov/fulltext/EJ1372848.pdf. Accessed 5 October 2024.
Pellegrino, Christina Marie. Test-Optional Policies: Implementation Impact on Undergraduate Admissions and Enrollment. Seton Hall Dissertations and Theses, 7 November 2020, https://scholarship.shu.edu/cgi/viewcontent.cgi?article=3920&context=dissertations. Accessed 5 October 2024.
Radasanu, Andrea, and Gregory Barker. “The Role of Admissions Practices in Diversifying Honors Populations: A Case Study.” Honors in Practice, vol. 17, 2021, pp. 45-62. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=c1ae5810-43ee-36a4-b3d0-b41179311799. Accessed 4 October 2024.
Robinson, Michael, and James Monks. “Making SAT Scores Optional in Selective College Admissions: A Case Study.” Economics of Education Review, vol. 24, no. 4, 2005, pp. 393-405, https://users.nber.org/~confer/2002/hiedf02/monks.pdf. Accessed 5 October 2024.
Rothstein, Jesse. “SAT scores, high schools, and collegiate performance predictions.” Annual meeting of the National Council on Measurement in Education, Montreal, Canada, 2005. PennState CiteSeerX, https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=c1600632cb8901c715b4b82075dcd29e7e4602cc. Accessed 10 October 2024.
Saygin, Perihan O. “Gender bias in standardized tests: evidence from a centralized college admissions system.” Empirical Economics, vol. 59, no. 2, 2020, pp. 1037-1065, https://2018.economicsofeducation.com/user/pdfsesiones/027.pdf. Accessed 8 October 2024.
Shepperd, James A. “Student derogation of the Scholastic Aptitude Test: Biases in perceptions and presentations of College Board scores.” Basic and Applied Social Psychology, vol. 14, no. 4, 1993, pp. 455-473, https://people.clas.ufl.edu/shepperd/files/derogating.pdf. Accessed 6 October 2024.
Vansuch, Mary. “The Effects of Mandatory and Free College Admission Testing on College Enrollment and Completion.” 2017, https://mpra.ub.uni-muenchen.de/82262/1/MPRA_paper_82262.pdf.
Appendix
A. Survey Questions
Below is a full copy of the survey distributed to participants, exported from Google Forms15.
B. Data Analysis Programs
1. This Python script takes the data from a CSV file, makes sure the data is properly formatted and calculates total SAT scores. It then groups the data by submission category (all, some, or none) to calculate the average total SAT score and count for each group.
```python
import pandas as pd

df = pd.read_csv("[CSV FILE]")

# Column holding the "All", "Some", or "None" submission answer.
submit_col = "Did you submit your SAT score to all, some, or none of the colleges you applied to?"
print(df[submit_col].value_counts())

# Treat blank answers as "None" so every row falls into a group.
df[submit_col] = df[submit_col].fillna("None")

# Total score = "Reading and Writing" section + "Math" section.
df["Total SAT Score"] = (
    df['What was your highest "Reading and Writing" SAT score?']
    + df['What was your highest "Math" SAT score?']
)

# Average total SAT score and respondent count for each submission group.
submission_groups = df.groupby(submit_col)[["Total SAT Score"]].agg(["mean", "count"])
submission_groups.columns = ["Avg Total SAT Score", "CNT"]
print(submission_groups)
```
2. This Python script performs a statistical analysis on the data, calculating metrics such as mean, median, standard deviation, range, and interquartile range.
```python
import pandas as pd
import numpy as np

def analyze_column(data, column_name):
    # Coerce responses to numbers; non-numeric entries become NaN.
    numeric_data = pd.to_numeric(data[column_name], errors='coerce')
    mean = np.mean(numeric_data)
    median = np.median(numeric_data)
    std_dev = np.std(numeric_data)
    min_val = np.min(numeric_data)
    max_val = np.max(numeric_data)
    range_val = max_val - min_val
    q1 = np.percentile(numeric_data, 25)
    q3 = np.percentile(numeric_data, 75)
    iqr = q3 - q1
    print(f"\nStatistical Analysis for {column_name}:")
    print("-" * 50)
    print(f"Count: {len(numeric_data)}")
    print(f"Mean: {mean:.2f}")
    print(f"Median: {median:.2f}")
    print(f"Standard Deviation: {std_dev:.2f}")
    print(f"Range: {range_val:.2f} (Min: {min_val:.2f}, Max: {max_val:.2f})")
    print(f"Interquartile Range: {iqr:.2f}")
    print(f"Q1: {q1:.2f}")
    print(f"Q3: {q3:.2f}")

try:
    df = pd.read_csv('[CSV FILE]')
    # Analyze each of the two section-score questions in turn.
    reading_writing_column = 'What was your highest "Reading and Writing" SAT score?'
    analyze_column(df, reading_writing_column)
    math_column = 'What was your highest "Math" SAT score?'
    analyze_column(df, math_column)
except FileNotFoundError:
    print("file not found")
except Exception as e:
    print(f"Error: {e}")
```
C. Abbreviations
To ensure data uniformity, certain common college abbreviations and nicknames had to be changed to their full names. As the data was coded, a list was kept of all the shorthand names that were changed, shown in the table below.
Table 06: College abbreviations and expansions
Abbreviation | Expanded To |
---|---|
Bama | University of Alabama |
BC | Boston College |
BU | Boston University |
Catholic U | The Catholic University of America |
Florida St | Florida State University |
FSU | Florida State University |
JMU | James Madison University |
LSU | Louisiana State University |
NC State | North Carolina State University |
Nova | Villanova University |
Ole Miss | University of Mississippi |
Penn | University of Pennsylvania |
Pitt | University of Pittsburgh |
Rutgers NB | Rutgers University-New Brunswick |
SJU | Saint Joseph’s University |
St. Joe’s | Saint Joseph’s University |
TCNJ | The College of New Jersey |
U of Arizona | University of Arizona |
UConn | University of Connecticut |
UFlorida | University of Florida |
UMiami | University of Miami |
UNC | University of North Carolina, Chapel Hill |
UofSC | University of South Carolina |
UPenn | University of Pennsylvania |
USC | University of Southern California |
USCarolina | University of South Carolina |
USF | University of South Florida |
UTK | University of Tennessee, Knoxville |
UVA | University of Virginia |
VMI | Virginia Military Institute |
D. Open Ended Survey Question Results
1. Influence of advice of peers on SAT score submissions
This table shows the responses16 to the open-ended question that asked “If you answered yes [to the previous question asking about peer influence], what advice did they give you?”, as well as how the responses were coded into themes.
Table 07: Peer influence question responses
Category | Responses |
---|---|
Score based considerations | “send them to the schools that my score was above the average” “They told me to only submit when my score was above the school’s average.” “Just that a certain college might not think it’s good, or that it was below the average.” “My mommy told me my math score was good to submit because I am majoring in math.” “My original math score was 700, and I was advised to take it again and score a little higher before submitting. I then was advised not to submit to Penn unless I had a perfect math score.” |
Application strength considerations | “It would make my application stronger.” “My mom told me that my application was very strong and my SAT could only drag me down, which I agreed with.” “They said that my application was better off/good enough without it.” |
School policy influence | “[Guidance counselor] told me which schools to send my scores to, and my parents did research too to confirm that my scores matched the schools I was sending to.” “Guidance counselor told me to not send it to some of the reach schools I put on my list, like Villanova, Richmond.” “It was an option made a standard by my guidance counselors.” |
General advice not to submit | “They would advise against submitting them.” “Not to submit them” |
Financial Considerations | “They told me not to submit the SAT score to colleges that did not require it. Part of it was because of the financial aspect– it costs money to send scores.” |
2. Further information shared by respondents
This table shows the responses to the open-ended question that asked “Is there anything else you would like to share about your experience with SAT score submission?”, as well as how the responses were coded into themes.
Table 08: Further information question responses
Category | Responses |
---|---|
Strategy surrounding score submission | “I only used it for ROTC, not college applications.” “Don’t submit them if it won’t help!!!!” “I used super score for my SAT, and didn’t feel exactly fully comfortable with my single scores.” |
Financial Considerations | “Tutoring is hard and a lot of money.” “It was an unnecessary extra expense.” |
Personal experiences | “I took the SAT 3 times.” “I don’t think my score really showed my intelligence and I felt somewhat ashamed of it.” |
Technical difficulties | “Every time I would go to the ‘score send’ page, it would say that the page is not available and would redirect back to the AP CollegeBoard Homepage” |
General observations | “I got deferred from the University of Tennessee, which required me to submit my SAT score, however, I got into Clemson without my SAT score. I found this interesting, since Clemson has a lower acceptance rate by around 20%. It made me question whether or not submitting an SAT score actually does impact your application.” |
Footnotes
1. SAT is a registered trademark of the College Board, which was not involved in the production of and does not endorse this research.
2. ACT is a registered trademark of ACT, Inc., which was not involved in the production of and does not endorse this research.
3. Sending SAT scores to colleges costs money.
4. A full copy of this survey may be found in Part A of the appendix.
5. A link to the CollegeBoard “MySAT” webpage was provided on this question to allow respondents to more easily check their scores.
6. e.g., “Bama” expanded to “University of Alabama” or “BC” expanded to “Boston College”; a full list of abbreviations used can be found in Part C of the appendix.
7. The responses that were coded into themes can be found in Part D of the appendix.
8. The specific code used in these programs may be found in Part B of the appendix.
9. Of the 23 responses, only 21 were legitimate, as the two blank responses do not count toward the data used in this study.
10. Such as ROTC or other scholarship programs.
11. Of these 16 responses, only 14 were legitimate. The other 2 responses were considered null.
12. Individual responses and how they were coded can be found in Part D of the appendix.
13. Of these 14 responses, only 9 were legitimate. The other 5 responses were considered null.
14. Individual responses and how they were coded can be found in Part D of the appendix.
15. Note that personally identifiable information has been removed from the document, but it has in no other way been modified from the original copy.
16. Certain personally identifiable information, such as a guidance counselor’s name, was redacted. The data remains in no other way changed.