The IJICS (International Journal of Informatics and Computer Science), Vol. 10, No. 1, March 2026, Page 11-19. ISSN 2548-8384, ISSN 2548-8449. Available online at https://ejurnal.stmik-budidarma.id/index.php/ijics/index, DOI 10.30865/ijics.

Web-Based Automated Assessment for School Examinations

Jepri Yandi1,*, Yuntari Purbasari2
1 Accounting Computerization, Universitas Prabumulih, Prabumulih, Indonesia
2 Information Systems, Universitas Prabumulih, Prabumulih, Indonesia
Email: *yhandijefry@gmail. (*: corresponding author)
Submitted: 01/02/2026; Accepted: 25/02/2026; Published: 30/03/2026

Abstract: Assessment is a fundamental component of the educational process, particularly in school examinations, as it reflects students' academic achievement. However, traditional examination and assessment methods often face challenges such as inefficiency, high administrative workload, and the potential for human error. This study aims to develop a web-based automated online assessment system to support ICT-based school examinations. The system is designed to facilitate online examinations, perform automatic scoring for objective test items, and generate assessment reports in real time. The development process follows a systematic software development approach, including requirement analysis, system design, implementation, and system testing. Functional testing results show that the system operates correctly and provides accurate assessment results. The implementation of the proposed system enhances efficiency, accuracy, and transparency in the examination and assessment process. Therefore, this web-based automated assessment system can be utilized as an effective ICT solution to improve the management and evaluation of school examinations.

Keywords: Online Assessment System; Web-Based Examination; Automated Scoring; ICT in Education;
School Examination.

INTRODUCTION

The integration of Information and Communication Technology (ICT) into educational environments has significantly transformed assessment practices across various levels of education. The rapid advancement of digital technologies has encouraged institutions to shift from conventional paper-based examinations toward technology-supported evaluation systems that offer greater efficiency, reliability, and scalability. E-assessment systems enable automated scoring, faster processing of examination results, digital record management, and improved administrative efficiency compared to traditional examination models. By digitizing assessment workflows, institutions can reduce logistical constraints associated with printing, distributing, collecting, and archiving paper-based tests. In school contexts, ICT-supported assessment systems also contribute to reducing teachers' workload, minimizing human error in grading processes, and increasing transparency in score reporting. These benefits make web-based assessment systems increasingly relevant for structured and systematic educational environments. Recent studies highlight the growing adoption of web-based examination platforms designed to support remote access, structured data storage, and automated evaluation mechanisms. Such platforms allow students to participate in examinations through secure login systems while administrators monitor examination activities in real time. Data generated during examinations, including student responses, timestamps, and scoring outputs, are stored in centralized databases, ensuring traceability and efficient retrieval. Automated assessment systems are particularly effective for objective-type testing, where responses can be evaluated using predefined validation rules. In most implementations, rule-based scoring models rely on answer keys stored within the system database.
Student responses are automatically compared with these predefined keys, ensuring scoring consistency and eliminating subjective bias. When properly implemented, this approach provides high scoring reliability and operational stability. Online examination architectures commonly adopt client-server models that integrate authentication mechanisms, scheduling controls, and monitoring components to ensure examination integrity and controlled access. Authentication modules verify user identity through secure login credentials, while role-based access control distinguishes permissions between administrators, teachers, and students. Scheduling features regulate examination availability by restricting access to specific time windows, preventing unauthorized early or late participation. Additionally, monitoring mechanisms may record login duration, response patterns, and submission times to enhance examination security. The structured integration of these components forms a comprehensive examination ecosystem capable of supporting institutional-scale implementation. Despite these technological advancements, several challenges remain in the development and evaluation of automated assessment systems. A significant portion of prior research focuses on higher education environments, where infrastructure readiness, digital literacy levels, and financial resources tend to be more advanced. In contrast, structured implementation studies in school-level contexts are comparatively limited. Schools often operate under different constraints, including limited technical infrastructure, varying degrees of ICT proficiency among teachers and students, and distinct administrative procedures. Consequently, direct adaptation of higher education assessment systems to school settings may not always yield optimal results without contextual modification. Another issue identified in previous research is the emphasis on system development without comprehensive quantitative evaluation.
While many studies describe technological features and architectural frameworks, fewer investigations systematically measure functional reliability, scoring accuracy, grading efficiency, and usability validation within a single integrated framework. Without empirical validation, it becomes difficult to determine whether an automated assessment system truly performs as intended in real-world educational settings.

Copyright © 2026 Jepri Yandi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Functional reliability ensures that system modules operate consistently without errors, while scoring accuracy confirms that automated results precisely match predefined answer keys. Efficiency measurement evaluates time savings in grading and reporting compared to manual methods, and usability validation assesses user acceptance and ease of interaction with the system interface. Integrating these evaluation components within a unified study provides stronger evidence of system effectiveness. Furthermore, limited studies explicitly define the operational scope of automated assessment, particularly in distinguishing rule-based objective scoring from artificial intelligence-based essay grading. This distinction is important because objective-based rule validation differs fundamentally from AI-driven natural language processing techniques used in subjective evaluation. Failure to clarify this boundary may lead to ambiguity in interpreting research findings and system capabilities.
By clearly defining the assessment scope, specifically limiting automated scoring to structured objective questions such as multiple-choice and true-false formats, researchers can establish measurable performance indicators and avoid overgeneralization of results. From a system development perspective, educational information systems require structured development methodologies, requirement analysis, iterative testing, and validation processes to ensure reliability and usability. Clear definition of functional requirements, user roles, data structures, and evaluation metrics is essential for producing a stable and maintainable system. In the context of automated assessment, structured development ensures that examination modules, scoring engines, and reporting features operate cohesively. Validation procedures, including functional testing and user acceptance testing, further confirm that the system aligns with educational objectives and operational needs. Therefore, this study aims to design, implement, and evaluate a web-based automated assessment system specifically tailored for school-level objective examinations. The automated scoring mechanism is explicitly limited to structured objective questions, including multiple-choice and true-false formats, using predefined answer validation rules. The system architecture incorporates secure authentication, examination scheduling, centralized data storage, automated scoring engines, and structured reporting dashboards. Beyond system implementation, this research emphasizes comprehensive evaluation covering system functionality testing, scoring accuracy verification, grading efficiency measurement, and usability assessment within a defined school environment.
The primary contribution of this study lies in integrating automated scoring, examination integrity mechanisms, structured reporting features, and quantitative evaluation within a clearly defined school deployment context. Unlike general conceptual models of e-assessment that primarily describe theoretical frameworks, this research provides empirical validation of system reliability and operational efficiency for objective-based examinations. By systematically measuring functional performance and user experience, the study offers evidence-based insights into the practical feasibility of implementing web-based automated assessment systems in school settings. This integrated approach not only strengthens the methodological rigor of the research but also contributes to bridging the gap between technological innovation and measurable educational impact.

RESEARCH METHODOLOGY

This study adopts a system development and evaluation research approach aimed at designing and implementing a web-based automated assessment system for school examinations. The methodology focuses on developing an ICT-based examination platform that integrates structured online test delivery, rule-based automated scoring, and real-time result reporting within a unified architecture. The development process follows a structured System Development Life Cycle (SDLC) framework to ensure systematic requirement analysis, architectural design, implementation, testing, and validation. The use of SDLC is aligned with established development models for educational information systems, which emphasize structured planning, reliability, and performance evaluation in web-based platforms.

1 Research Design

This study adopts a system development and evaluation research approach aimed at designing and implementing a web-based automated assessment system for school examinations.
The development process follows a structured System Development Life Cycle (SDLC) framework to ensure systematic requirement analysis, architectural design, implementation, testing, and validation.

2 Research Workflow

The research was conducted through five sequential stages: requirement analysis, system design, implementation, testing, and evaluation. Figure 1 illustrates the overall research workflow.

Fig. 1. Research workflow consisting of requirement analysis, system design, implementation, testing, and evaluation

Figure 1 illustrates the structured research workflow adopted in this study. The sequential stages (requirement analysis, system design, implementation, testing, and evaluation) demonstrate the systematic application of the SDLC model. The requirement analysis phase ensured that the system addressed real school examination challenges, particularly grading delays and reporting inefficiencies. The system design phase translated functional requirements into a layered architecture, minimizing redundancy and improving scalability. Implementation focused on developing core modules such as authentication, question management, automated scoring, and report generation. Testing validated system reliability before deployment, while the evaluation stage measured functional performance, scoring accuracy, and usability. The structured progression shown in Figure 1 confirms methodological rigor and supports replicability in similar educational ICT environments.
3 System Architecture

The system adopts a three-layer web-based client-server architecture consisting of a presentation layer, an application layer, and a database layer. The presentation layer provides browser-based access for administrators, teachers, and students. The application layer handles scoring logic, examination control, and reporting. The database layer stores question banks, user data, and examination results. The system architecture is shown in Figure 2.

Fig. 2. Client-server architecture including presentation layer, application layer, and database layer.

Figure 2 presents the three-layer web-based system architecture. The separation between presentation layer, application layer, and database layer enhances modularity and maintainability. The presentation layer enables role-based access for administrators, teachers, and students. The application layer performs examination logic, automated scoring, and report generation. The database layer ensures structured storage of questions, user accounts, and examination results. This layered approach improves system scalability and aligns with best practices in web-based educational systems. By isolating business logic from data storage, the architecture reduces processing errors and enhances security, which is critical in online examination systems.

4 Scope of Automated Assessment

In this study, automated assessment refers to rule-based scoring of structured objective questions using predefined answer keys.
Supported question formats:
- Multiple-choice questions
- True-false questions

Scoring is computed automatically upon submission using answer validation rules stored in the database. The score is calculated as:

Score = (Number of Correct Answers / Total Number of Questions) × 100

The system does not support:
- Essay auto-grading
- AI-based semantic evaluation
- Adaptive testing algorithms
- Intelligent tutoring mechanisms

This explicit scope definition ensures clarity regarding the applicability of evaluation results.

5 School Examination Context

The system was developed and implemented within a school-level examination context, specifically targeting secondary education environments where structured periodic assessments are conducted. The examinations include mid-term and final semester tests administered to small-to-medium-scale student populations under controlled conditions. The system supports role-based participation involving administrators, teachers, and students, reflecting typical institutional examination workflows. The implementation scale during evaluation involved a limited sample size within a single school setting, allowing controlled observation of system functionality, grading accuracy, and user interaction. The examination format primarily consisted of objective-type questions delivered digitally within scheduled sessions using secured login credentials and timed access control.

6 Definition and Scope of Automated Assessment

In this study, automated assessment refers to the use of algorithm-based scoring mechanisms that automatically evaluate student responses without manual intervention. The scope of automation is limited to objective-type question formats, including multiple-choice and short structured responses with predefined answer keys stored in the database.
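As a concrete illustration, the rule-based scoring described above can be sketched as a small function; the names (`ANSWER_KEY`, `score_submission`) and the sample key are illustrative assumptions, not part of the actual system:

```python
# Minimal sketch of rule-based objective scoring (illustrative names and data).
# In the real system the answer key would be read from the database.
ANSWER_KEY = {
    "Q1": "B",       # multiple-choice item
    "Q2": "TRUE",    # true-false item
    "Q3": "D",
    "Q4": "FALSE",
}

def score_submission(responses: dict) -> float:
    """Compare responses with the predefined key; Score = correct / total * 100."""
    correct = sum(
        1 for qid, key in ANSWER_KEY.items()
        if responses.get(qid, "").strip().upper() == key
    )
    return correct / len(ANSWER_KEY) * 100

# Three of the four answers match the key, so the computed score is 75.0
print(score_submission({"Q1": "B", "Q2": "TRUE", "Q3": "A", "Q4": "FALSE"}))
```

Because the comparison is a pure lookup against the stored key, the result is deterministic for any given submission, which is the property the scope definition relies on.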
The scoring process is rule-based, where each correct response is assigned a predetermined score value, and cumulative results are calculated instantly upon submission. The system does not currently implement natural language processing or machine learning techniques for subjective or essay-based question evaluation. Therefore, the automated scoring functionality is deterministic and designed to ensure grading consistency, eliminate human error variability, and accelerate result generation within the defined objective assessment scope.

7 Evaluation Metrics and Instruments

System evaluation was conducted using quantitative performance metrics focusing on functional reliability, scoring accuracy, efficiency, and usability. Functional reliability was measured through structured black-box testing across 25 predefined test scenarios covering authentication, examination control, submission handling, scoring processes, and reporting modules. Scoring accuracy was validated by comparing automated grading results with manual evaluation across 100 student response samples. Efficiency was assessed by measuring time differences between manual grading procedures and automated system processing. Usability was evaluated using a structured questionnaire distributed to users, employing a five-point Likert scale to measure ease of use, navigation clarity, interface design, and overall satisfaction. These evaluation instruments provide empirical evidence of system performance across technical and user-centered dimensions.

8 Web-Based Implementation and Examination Integrity Features

The system was implemented as a web-based application using a client-server architecture that enables browser-based access without requiring local software installation. The technology stack includes server-side scripting for business logic processing, relational database management for structured data storage, and front-end interface components designed for responsive interaction.
Deployment was conducted within a controlled school network environment to ensure stable connectivity and secure data transmission. To maintain examination integrity, the system incorporates scheduled access control, countdown timer mechanisms, randomized question ordering, and role-based authentication. These features prevent unauthorized access, reduce opportunities for academic misconduct, and ensure standardized examination conditions across all participants.

RESULT AND DISCUSSION

1 Result

This section presents an expanded and comprehensive analysis of the empirical findings obtained from the implementation and evaluation of the proposed web-based automated assessment system. The evaluation was conducted within a defined school-level environment to ensure contextual relevance and controlled operational conditions. The purpose of this evaluation was not only to verify technical correctness but also to measure the system's overall effectiveness in addressing the practical challenges commonly associated with conventional examination processes. To achieve a holistic assessment, four major evaluation dimensions were examined: functional reliability, scoring accuracy, grading efficiency, and usability performance. These dimensions were selected to represent the core pillars of educational technology validation frameworks, namely operational stability, computational correctness, performance efficiency, and user acceptance.
By integrating these components into a single evaluative structure, this study ensures that system performance is measured comprehensively rather than partially. The results demonstrate that the developed system performs consistently across all tested modules, achieves perfect scoring accuracy for objective-based questions, significantly reduces grading time, and maintains high usability acceptance among users. Each dimension is discussed in detail in the following subsections.

1 Functional Reliability

Functional reliability was verified through the structured black-box testing described above, covering authentication, examination control, submission handling, scoring, and reporting modules. The results are summarized in Table 1.

Table 1.
Functional Testing Results

No | Module Tested      | Scenario Description                | Result
1  | Login Module       | Valid credential authentication     | Success
2  | Role Management    | Role-based access verification      | Success
3  | Question Module    | Question loading and display        | Success
4  | Timer Module       | Countdown timer functionality       | Success
5  | Submission Module  | Answer submission processing        | Success
6  | System Integration | Full examination workflow execution | Success

The login module accurately validated authorized credentials and rejected invalid login attempts. The role management module successfully restricted feature access based on user roles, ensuring that administrative functions remained exclusive to authorized personnel. The question module retrieved examination items from the database and rendered them without formatting inconsistencies. The timer module enforced time constraints precisely, automatically ending examinations upon expiration. The submission module securely stored student responses and triggered the automated scoring process. Most importantly, the system integration test confirmed that all modules operated cohesively during a complete examination cycle. This result indicates that the client-server architecture and centralized database configuration were properly implemented. Systems developed under structured SDLC methodologies generally demonstrate high reliability when requirements are clearly defined and thoroughly tested. The 100% success rate observed in this study confirms architectural robustness and operational stability suitable for institutional deployment.

2 Scoring Accuracy

Scoring accuracy is a critical indicator of automated assessment credibility. To validate scoring correctness, 100 student examination results were randomly selected and manually recalculated by independent evaluators using the same answer keys embedded within the automated scoring algorithm.
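This sample-by-sample validation can be sketched as a simple comparison between the two score sets; the scores and student IDs below are hypothetical placeholders, not the study's actual data:

```python
# Sketch of the manual-vs-automated accuracy check: each automated score is
# compared with an independent manual recalculation. Data are hypothetical.
manual    = {"S01": 80, "S02": 65, "S03": 90}
automated = {"S01": 80, "S02": 65, "S03": 90}

matches = sum(1 for sid in manual if manual[sid] == automated[sid])
accuracy = matches / len(manual) * 100

# Every pair matches, so the reported accuracy is 100%
print(f"{matches}/{len(manual)} samples match -> {accuracy:.0f}% scoring accuracy")
```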
The comparison revealed no discrepancies between manual grading and automated outputs, resulting in 100% scoring accuracy. The summary is presented in Table 2.

Table 2. Scoring Accuracy Comparison (Sample)

Student ID | Manual Score | Automated Score | Match
S01        | —            | —               | Yes
S02        | —            | —               | Yes
S03        | —            | —               | Yes
...        | ...          | ...             | ...
S100       | —            | —               | Yes

Table 2 demonstrates that no discrepancies were observed between manual grading and automated scoring outputs. The rule-based scoring mechanism, which compares student responses to predefined answer keys stored in the system database, functioned precisely as intended. The elimination of grading variability confirms the reliability of automated evaluation for objective-type questions. This finding aligns with previous research emphasizing that automated scoring models are highly reliable when applied to structured objective assessments. Human grading processes, particularly in large-scale examinations, are susceptible to fatigue, calculation mistakes, and inconsistent interpretation. In contrast, automated systems apply uniform validation rules to every submission, ensuring fairness and transparency. The 100% accuracy result also reinforces the system's suitability for school-level examinations where objective question formats such as multiple-choice and true-false are commonly used. By explicitly limiting the scoring mechanism to rule-based objective validation, the system avoids the ambiguity associated with AI-based subjective grading approaches.
Consequently, the scoring results are deterministic, consistent, and fully traceable, strengthening institutional trust in the automated evaluation process.

3 Efficiency Analysis

Efficiency analysis was conducted to compare grading time between manual evaluation and automated processing. For 100 student responses, manual grading required approximately 180 minutes. This duration included answer checking, score calculation, recording results, and verifying accuracy. In contrast, the automated system processed all results in less than five seconds immediately after submission. The grading time comparison is illustrated in Figure 3.

Fig. 3. Comparison of grading time between manual evaluation and automated system processing.

Figure 3 highlights the significant efficiency gain achieved by the implementation of the automated assessment system. The reduction from 180 minutes to under five seconds demonstrates the transformative impact of ICT-based assessment mechanisms. This dramatic time difference confirms that automated scoring substantially reduces administrative workload and accelerates result dissemination. Time efficiency is particularly critical in school contexts where teachers often manage multiple classes and administrative responsibilities simultaneously. By eliminating manual grading delays, educators can allocate more time to instructional planning, student feedback, and curriculum development. These findings are consistent with previous studies reporting substantial workload reductions and administrative optimization in automated assessment implementations. Moreover, the efficiency improvement contributes to broader institutional productivity and digital transformation initiatives. Rapid result generation enables timely academic decision-making, such as identifying learning gaps and planning remedial activities.
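As a back-of-envelope check, the two reported grading times imply a speedup of roughly three orders of magnitude:

```python
# Quantifying the reported efficiency gain:
# about 180 minutes of manual grading vs. under 5 seconds of automated processing.
manual_seconds = 180 * 60      # 10,800 seconds of manual grading
automated_seconds = 5          # upper bound for automated processing

speedup = manual_seconds / automated_seconds
# 10,800 / 5 = 2,160, i.e. automated grading is at least ~2,160x faster
print(f"Automated grading is at least {speedup:.0f}x faster")
```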
The operational efficiency achieved in this study confirms that the system not only ensures accuracy but also enhances institutional performance.

4 Usability Evaluation

Usability evaluation was conducted to assess user acceptance and interaction quality. The evaluation involved measuring ease of use, navigation clarity, interface design, and overall satisfaction using a 5-point Likert scale. The average usability score was 4.3 out of 5, equivalent to 86%. The detailed results are presented in Table 3.

Table 3. Usability Evaluation Result

Evaluation Aspect    | Mean Score
Ease of use          | —
Navigation           | —
Interface Clarity    | —
Overall Satisfaction | —
Average              | 4.3

Table 3 indicates that ease of use received the highest rating, suggesting intuitive system interaction. Navigation and interface clarity also scored above 4.0, demonstrating effective design structure and accessibility. High usability is essential in school environments where digital literacy levels may vary among users. These findings correspond with established usability evaluation frameworks for web-based educational systems. A technically reliable system must also be user-friendly to ensure successful adoption. The 86% usability score indicates strong acceptance while also identifying potential areas for interface refinement. Figure 4 integrates the overall system performance indicators.

Fig. 4. Overall system performance including functional reliability, scoring accuracy, and usability percentage.
Figure 4 demonstrates that functional reliability and scoring accuracy achieved optimal values of 100%, while usability reached 86%. The integration of these metrics confirms that the system satisfies three critical evaluation dimensions: operational stability, grading correctness, and user acceptance. Although usability did not reach the maximum value, the overall rating indicates strong practical feasibility in real educational settings.

2 Discussion

Overall, the findings indicate that the developed web-based automated assessment system effectively addresses several critical limitations associated with conventional examination practices. Traditional paper-based examination systems are often characterized by prolonged grading processes, susceptibility to human error, logistical complexity in distributing and collecting examination materials, and limited transparency in result processing and reporting. These challenges frequently result in delayed feedback for students and increased administrative workload for teachers and school staff. In contrast, the proposed web-based system demonstrates measurable improvements in operational efficiency, scoring precision, and administrative transparency across the entire examination lifecycle. From exam preparation and structured scheduling to controlled execution, automated scoring, and real-time result reporting, the system provides a streamlined and digitally integrated workflow that significantly enhances institutional performance. One of the distinguishing strengths of the developed system lies in its integrated architectural design. Unlike previously reported systems that focus primarily on isolated functionalities such as online test delivery platforms or standalone automated grading modules, the proposed platform combines examination management, role-based authentication, rule-based scoring automation, secure centralized data storage, and structured reporting features within a unified web-based framework.
This holistic integration ensures that all system components operate cohesively rather than independently. By minimizing fragmentation between modules, the system reduces potential synchronization errors, enhances data consistency, and strengthens overall reliability. Such architectural cohesion is particularly important in school-level contexts, where operational simplicity and stability are essential for successful adoption. The empirical validation results further reinforce the system's technical robustness and practical feasibility. Achieving 100% functional reliability across all tested modules confirms that each component operates according to predefined specifications without malfunction. Similarly, the 100% scoring accuracy rate demonstrates that the rule-based automated evaluation mechanism precisely matches manual grading results for objective-type questions. These outcomes are consistent with structured ICT evaluation methodologies that emphasize reliability, correctness, and validation rigor in educational system development. The absence of functional or computational discrepancies indicates that the system architecture, database configuration, and scoring logic were effectively designed and implemented. In addition to reliability and accuracy, the substantial efficiency gains observed during grading analysis highlight tangible operational benefits.
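The scoring-accuracy validation idea, comparing automated output against manual grades sample by sample, can be sketched as follows. The function and the sample lists are hypothetical illustrations under the assumption of paired per-student scores; they do not reproduce the study's actual data:

```python
# Sketch of scoring-accuracy validation: automated scores are compared
# one-to-one with manual grades and an agreement percentage is computed.
# The sample values below are hypothetical, not the study's dataset.

def agreement_rate(automated: list, manual: list) -> float:
    """Percentage of paired samples where automated and manual scores match."""
    if len(automated) != len(manual):
        raise ValueError("score lists must be the same length")
    matches = sum(a == m for a, m in zip(automated, manual))
    return 100.0 * matches / len(automated)

auto_scores = [80, 95, 70, 100]    # hypothetical automated outputs
manual_scores = [80, 95, 70, 100]  # hypothetical manual grades
print(agreement_rate(auto_scores, manual_scores))  # 100.0
```

An agreement rate of 100.0 over all paired samples corresponds to the "no discrepancies" outcome reported for the 100-sample validation.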
The dramatic reduction in grading time, from hours of manual evaluation to near-instantaneous automated processing, illustrates the transformative potential of ICT-based assessment systems. This efficiency improvement allows educators to redirect their efforts toward pedagogical development, curriculum refinement, and individualized student support rather than repetitive administrative tasks. However, it is important to acknowledge the defined scope and limitations of the current implementation. The system is intentionally restricted to objective-type questions, such as multiple-choice and true/false formats, and operates within controlled school environments. It does not incorporate AI-based subjective grading or advanced online proctoring. While this clearly defined operational boundary strengthens internal validity and reliability, broader validation across multiple institutions or larger-scale deployments is recommended to enhance generalizability. In conclusion, the integrated evaluation results demonstrate that the proposed system achieves high reliability, precise scoring accuracy, significant efficiency improvement, and strong usability acceptance. These findings confirm that web-based automated assessment systems, when developed through structured methodologies and implemented within clearly defined operational parameters, can serve as effective and measurable solutions for modern school-level examination management.

CONCLUSION
This study presented the design, implementation, and evaluation of a web-based automated assessment system developed for school-level examinations. The system was built using a structured System Development Life Cycle (SDLC) approach and implemented through a three-layer client-server architecture to ensure modularity and reliability. The evaluation results demonstrate that the system functions reliably, achieving 100% functional validation across 25 structured test scenarios.
Scoring validation involving 100 student response samples confirmed 100% accuracy for objective-type questions, indicating that the automated scoring algorithm performs consistently, without discrepancies compared to manual grading. Efficiency analysis showed a substantial reduction in grading time, from approximately 180 minutes for manual evaluation to less than five seconds using automated processing. Additionally, usability testing yielded an average score of 4.3 out of 5 (86%), indicating positive user acceptance within the tested school environment. The scope of automated scoring in the current implementation is limited to objective question formats, including multiple-choice and short structured responses with predefined answer keys. The system does not yet support automated evaluation of subjective or essay-based responses. Comprehensive reporting in this study refers to real-time score calculation, individual result summaries, downloadable examination reports, and centralized administrative performance monitoring. The system was implemented and tested within a controlled school-level examination context, demonstrating technical feasibility and operational effectiveness for small-to-medium scale educational institutions. However, the findings are limited to the tested environment and sample size. Broader validation involving multi-school deployment, larger participant populations, and diverse examination formats is necessary to strengthen external validity. In conclusion, the proposed web-based automated assessment system provides a reliable, accurate, and efficient solution for managing school examinations through ICT-based automation. While the system does not claim to directly improve learning outcomes, it significantly enhances assessment process efficiency, grading consistency, and administrative transparency.
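The two summary figures reported above, the 86% usability rating and the grading-time reduction, follow directly from the stated values (Likert mean 4.3 of 5; 180 minutes manual versus under five seconds automated). A short arithmetic check, using only those reported numbers:

```python
# Worked check of the reported summary metrics using values from the text.
# A 5-point Likert mean converts to a percentage as mean / 5 * 100.

likert_mean = 4.3
usability_pct = likert_mean / 5 * 100           # 4.3 of 5 -> 86%

manual_seconds = 180 * 60                       # 180 minutes of manual grading
automated_seconds = 5                           # upper bound for automated run
speedup = manual_seconds / automated_seconds    # at least ~2160x faster

print(usability_pct, speedup)
```

The speedup is a lower bound, since five seconds is reported as an upper limit on automated processing time.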
Future work should focus on integrating automated essay evaluation, strengthening security mechanisms, and conducting large-scale empirical validation to further improve system robustness and applicability.

REFERENCES