Numerous systems for automatically grading student SQL have been described in prior work, but these all focus on the DML side of things, and on queries in particular.

\citeauthor{Dietrich.S-1993a-An-educational}'s \emph{RDBI} \cite{Dietrich.S-1993a-An-educational} was a Prolog-based interpreter for relational algebra, tuple and domain relational calculus, and SQL (\texttt{SELECT} only). It focused primarily on queries, and used its own internal DBMS and its own non-SQL data definition language. RDBI did not provide feedback on students' attempts beyond basic syntax checking and displaying query results.

\citeauthor{Kearns.R-1997a-A-teaching}'s \emph{esql} \cite{Kearns.R-1997a-A-teaching} supported students in learning the fundamental concepts underlying SQL. It also used its own internal DBMS, and could parse and execute \texttt{CREATE}, \texttt{DROP}, \texttt{ALTER}, \texttt{DELETE}, \texttt{INSERT}, and \texttt{SELECT} statements, but all of these except \texttt{SELECT} were simply passed through to the DBMS to set up or modify the schema. The system enabled students to better understand the steps in the execution of a query by visualizing the intermediate tables generated by each step. It did not provide feedback on students' attempts beyond basic syntax checking and displaying query results.

\citeauthor{Mitrovic.A-1998a-Learning}'s \emph{SQL-Tutor} \cite{Mitrovic.A-1998a-Learning} was an intelligent teaching system that provided students with a guided discovery learning environment for SQL queries. It supported only the \texttt{SELECT} statement, and used constraint-based modeling \cite{Ohlsson.S-1992a-Constraint-based,Ohlsson.S-2016a-Constraint-based} to provide feedback to students on both syntactic and semantic SQL errors.

\citeauthor{Sadiq.S-2004a-SQLator}'s \emph{SQLator} \cite{Sadiq.S-2004a-SQLator} was a web-based interactive tool for learning SQL. Students were presented with a series of questions in English, and had to write SQL \texttt{SELECT} statements to answer them. SQLator used an ``equivalence engine'' to determine whether an SQL query fulfilled the requirements of the original English question. It supported only the \texttt{SELECT} statement, and provided only basic feedback (correct or incorrect) to students, but it was able to automatically mark about a third of submitted queries as correct, thus improving the speed of grading.

\citeauthor{Prior.J-2004a-Backwash}'s \emph{AsseSQL} \cite{Prior.J-2004a-Backwash} was an online examination environment for evaluating students' ability to formulate SQL queries. Students would write and execute their queries, and the data set produced by a query would be compared against the correct data set; the answer was then flagged as correct or incorrect as appropriate. AsseSQL supported only the \texttt{SELECT} statement.

\citeauthor{Russell.G-2004a-Improving}'s \emph{ActiveSQL}\footnote{\url{https://db.grussell.org/}} \cite{Russell.G-2004a-Improving,Russell.G-2005a-Online} was an online interactive learning environment for SQL that provided immediate feedback to students. ActiveSQL measured the accuracy of a query in a similar way to AsseSQL, but instead of a simple correct/incorrect answer, it computed an accuracy score based on the differences between the query output and the correct answer. It was also able to detect ``hard-coded'' queries that produced the desired result, but would fail if the data set changed \cite{Russell.G-2005a-Online}. ActiveSQL supported only the \texttt{SELECT} statement.
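ActiveSQL's exact scoring function is not reproduced here; the following minimal sketch (Python with SQLite, with all names hypothetical) merely illustrates the general idea of scoring a query by the differences between its result set and the model answer's, rather than returning a bare correct/incorrect verdict:

\begin{verbatim}
import sqlite3

def accuracy_score(db_path: str, student_sql: str, model_sql: str) -> float:
    """Illustrative ActiveSQL-style measure (not their actual algorithm):
    compare the student's result set against the model answer's."""
    with sqlite3.connect(db_path) as conn:
        student_rows = set(conn.execute(student_sql).fetchall())
        model_rows = set(conn.execute(model_sql).fetchall())
    if not student_rows and not model_rows:
        return 1.0  # both empty: trivially identical
    # Jaccard similarity over rows: 1.0 iff the result sets are identical.
    # Sets ignore row order and duplicates; a real grader might weight these.
    return len(student_rows & model_rows) / len(student_rows | model_rows)

# Re-running the same comparison against a second, different data set is one
# way to expose "hard-coded" queries that merely reproduce the first data set.
\end{verbatim}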
\citeauthor{Dekeyser.S-2007a-Computer}'s \emph{SQLify} \cite{Dekeyser.S-2007a-Computer} was another online SQL learning system that incorporated semantic feedback and automatic assessment. SQLify evaluated each query on an eight-level scale that covered query syntax, output schema, and query semantics; instructors could use this information to award an overall grade. Again, SQLify supported only the \texttt{SELECT} statement.

\citeauthor{Brusilovsky.P-2010a-Learning}'s \emph{SQL Exploratorium} \cite{Brusilovsky.P-2010a-Learning} took an interesting approach to generating problems, using parameterized query templates to generate the questions given to students. Again, the SQL Exploratorium supported only the \texttt{SELECT} statement.

\citeauthor{Kleiner.C-2013a-Automated}'s \emph{aSQLg} \cite{Kleiner.C-2013a-Automated} was an automated assessment tool that provided feedback to students, enabling them to improve their learning by making further submissions after incorporating this feedback. aSQLg worked with any back-end DBMS, using the DBMS itself as a syntax checker (see the sketch at the end of this survey), and checked queries for syntax, efficiency (cost), result correctness, and statement style. Again, aSQLg supported only the \texttt{SELECT} statement.

\citeauthor{Kenny.C-2005a-Automated} \cite{Kenny.C-2005a-Automated} described an SQL learning system similar to those already described, which also incorporated an assessment of a student's previous progress. This enabled a more personalized and adaptive approach to student learning, where feedback was tailored according to student progress. Again, this system supported only the \texttt{SELECT} statement.

\citeauthor{Bhangdiya.A-2015a-XDa-TA}'s \emph{XDa-TA}\footnote{\url{http://www.cse.iitb.ac.in/infolab/xdata/}} extended the idea of automated grading of SQL by adding the ability to generate data sets designed to catch common errors. These data sets were automatically derived from a set of correct SQL queries \cite{Bhangdiya.A-2015a-XDa-TA,Chandra.B-2015a-Data}. Later work \cite{Chandra.B-2016a-Partial} added support for awarding partial marks. Again, XDa-TA supported only the \texttt{SELECT} statement.

\citeauthor{Gong.A-2015a-CS-121-Automation}'s ``CS 121 Automation Tool'' (\texttt{cs12x-automate}) \cite{Gong.A-2015a-CS-121-Automation} was a tool designed to semi-automate the grading of SQL assessments, again focusing on SQL DML statements.
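Several of the above systems, aSQLg in particular, delegate syntax checking to the DBMS itself. A minimal sketch of this technique follows, again in Python with SQLite (the function name and approach are illustrative, not aSQLg's code); it assumes a connection that already holds the practice schema, as otherwise references to unknown tables would also surface as errors:

\begin{verbatim}
import sqlite3

def dbms_syntax_check(conn: sqlite3.Connection, sql: str) -> str | None:
    """Delegate syntax checking to the DBMS (aSQLg-style).  EXPLAIN makes
    SQLite compile the statement without executing it.  Returns None if
    the statement compiles, otherwise the DBMS's own error message."""
    try:
        conn.execute("EXPLAIN " + sql)
        return None
    except sqlite3.Error as err:
        return str(err)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Product_code INTEGER PRIMARY KEY)")
print(dbms_syntax_check(conn, "SELECT Product_code FROM Product"))  # None
print(dbms_syntax_check(conn, "SELECT FROM WHERE"))  # near "FROM": syntax error
\end{verbatim}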
Interestingly, the system appears to be extensible and could thus potentially be modified to support grading of SQL DDL statements. However, its code for testing \texttt{CREATE TABLE} statements ``[s]imply executes the \texttt{CREATE TABLE} statements to make sure they have the correct syntax''; it does not check the resulting schema against any form of specification.

On the pure testing side, \emph{SVTe} (Farré et al., 2008) tested the ``correctness'' of a schema, but focused mainly on the consistency of its constraints. Ambler (n.d.)\footnote{\url{http://www.agiledata.org/essays/databaseTesting.html}} discussed testing of database functionality, rather than verification of schema structure.

There appears to be very little work on the automated grading of students' schema definitions, and none that uses unit testing as a framework for doing so. Where unit testing has been applied to databases at all, the focus has been on testing database \emph{applications} and on the effective generation of test cases: generating test databases (fixtures) for OLTP applications (Binnig et al., 2013), generating test inputs for database applications by analyzing the queries they use (Chays \& Shahid, 2008), or doing so using a formal language, ImperDB (Marcozzi et al., 2012).

No one appears to have tried to automatically test whether an extant schema is consistent with its specification. This is arguably closer to a formal methods problem, but prior work in that space has focused on \emph{generating} a conforming schema from a specification (e.g., Vatanawood \& Rivepiboon, 2004; Luković et al., 2003), or on schema \emph{transformation} and evolution (e.g., Bench-Capon et al., 1998), rather than on checking that an already existing schema conforms.

The following sample output illustrates the kind of checking we have in mind, as applied to the structure and \texttt{CHECK} constraints of a table \texttt{Product}:

\begin{verbatim}
NOTE: Checking structure of table Product.
TEST: [[ Product ]] + OK
+++ PASSED: Table Product exists.
TEST: [[ Product.Product_code ]] + OK
...
+++ PASSED: Table Product contains all the expected columns.
TEST: [[ Product.Product_code: data type is NUMBER | INTEGER ]] + OK
...
+++ PASSED: All columns of table Product have data types compatible
    with the specification.
TEST: [[ Product.Product_code precision and scale = 8 (with scale 0) ]] + OK
...
+++ PASSED: All columns of table Product have lengths compatible
    with the specification.
TEST: [[ Product.Product_code nullability should be N ]] + OK
...
+++ PASSED: All columns of table Product have the expected nullability.
TEST: [[ Product PK ]] + OK
+++ PASSED: Primary key of table Product exists.
TEST: [[ Product PK: Product_code ]] + OK
+++ PASSED: Primary key of table Product includes (only) the expected columns.
TEST: [[ Product check constraint PRODUCT_STOCK_INVALID ]] + OK
...
+++ PASSED: All constraints of table Product that should be are
    explicitly named.
NOTE: Testing constraints of table Product.
TEST: [[ Product.Stock_count accepts "0" ]] + OK
TEST: [[ Product.Stock_count accepts "99999" ]] + OK
TEST: [[ Product.Restock_level accepts "0" ]]
- FAILED! Column Product.Restock_level won't accept legal value 0 [-0.5].
  Failed asserting that false is true.
TEST: [[ Product.Restock_level accepts "99999" ]] + OK
TEST: [[ Product.Minimum_level accepts "0" ]]
- FAILED! Column Product.Minimum_level won't accept legal value 0 [-0.5].
  Failed asserting that false is true.
TEST: [[ Product.Minimum_level accepts "653" ]] + OK
TEST: [[ Product.List_price accepts "0" ]] + OK
TEST: [[ Product.List_price accepts "99999.99" ]] + OK
--- FAILED: 2 of 8 legal values tested were rejected by a CHECK constraint.
\end{verbatim}
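To show how such checks can be phrased as ordinary unit tests, the following minimal sketch uses Python's \texttt{unittest} with SQLite (no claim is made about the actual tool's implementation language or DBMS). \texttt{PRODUCT\_SPEC}, the file name \texttt{practice\_schema.db}, and the two-column rendering of \texttt{Product} are all illustrative; a real grader would carry a full specification and insert complete rows of test data.

\begin{verbatim}
import sqlite3
import unittest

# Hypothetical specification fragment for table Product, mirroring the
# structure checks in the sample output above.
PRODUCT_SPEC = {
    "Product_code": {"type": "INTEGER", "notnull": True, "pk": True},
    "Stock_count":  {"type": "INTEGER", "notnull": True, "pk": False},
}

class ProductSchemaTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect("practice_schema.db")  # illustrative path

    def tearDown(self):
        self.conn.close()

    def test_structure(self):
        """Table exists, with expected columns, types, nullability, PK."""
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk).
        cols = {row[1]: row for row in
                self.conn.execute("PRAGMA table_info(Product)")}
        self.assertTrue(cols, "table Product does not exist")
        for name, spec in PRODUCT_SPEC.items():
            self.assertIn(name, cols, f"missing column {name}")
            _, _, ctype, notnull, _, pk = cols[name]
            self.assertEqual(ctype.upper(), spec["type"])
            self.assertEqual(bool(notnull), spec["notnull"])
            self.assertEqual(bool(pk), spec["pk"])

    def test_stock_count_accepts_legal_values(self):
        """A CHECK constraint must not reject legal values (cf. the
        Stock_count tests in the sample output)."""
        for value in (0, 99999):
            try:
                self.conn.execute(
                    "INSERT INTO Product (Product_code, Stock_count) "
                    "VALUES (?, ?)", (value + 1, value))
            except sqlite3.IntegrityError as err:
                self.fail(f"legal value {value} rejected: {err}")
            finally:
                self.conn.rollback()  # leave the fixture unchanged

if __name__ == "__main__":
    unittest.main()
\end{verbatim}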