diff --git a/build/report.pdf b/build/report.pdf index eee6ea5..b4d9c67 100644 Binary files a/build/report.pdf and b/build/report.pdf differ diff --git a/chapter/ch3.tex b/chapter/ch3.tex index 4cc76bc..e148552 100644 --- a/chapter/ch3.tex +++ b/chapter/ch3.tex @@ -15,7 +15,7 @@ The program will allow the users to preview proposals way before they are part o \subsection{Applying a proposal} -The way this project will use the pre-existing knowledge a user has of their own code is to use that code as base for showcasing a proposals features. Using the users own code as base requires the following steps in order to automatically implement the examples that showcase the proposal inside the context of the users own code. +The way this project will use the pre-existing knowledge a user has of their own code is to use that code as a base for showcasing a proposal's features. Using the user's own code as a base requires the following steps to automatically implement the examples that showcase the proposal inside the context of the user's own code. The idea is to identify where the features and additions of a proposal could have been used. This means identifying parts of the user's program that use the pre-existing ECMAScript features the proposal interacts with and tries to improve upon. This will then identify all the different places in the user's program where the proposal can be applied. This step is called \textit{matching} in the following chapters. @@ -23,7 +23,7 @@ Once we have matched all the parts of the program the proposal could be applied The output of the previous step is then a set of code pairs, where one is a part of the user's original code, and the second is the transformed code. The transformed code is ideally a perfect replacement for the original user code, were the proposal part of ECMAScript. These pairs are used as examples, presented together so the user can see their original code side by side with the transformed code.
This allows for a direct comparison and makes it easier for the user to understand the proposal. -The steps outlined in this section require some way of defining matching and transforming of code. This has to be done very precisely and accurately in order to avoid examples that are wrong. Imprecise definition of the proposal might lead to transformed code not being a direct replacement for the code it was based upon. For this we suggest two different methods, a definition written in a custom DSL \DSL and a definition written in a self-hosted way only using ECMAScript as a language as definition language. Read more about this in SECTION HERE. +The steps outlined in this section require some way of defining the matching and transforming of code. This has to be done precisely and accurately to avoid incorrect examples. An imprecise definition of the proposal might lead to transformed code that is not a direct replacement for the code it was based upon. For this we suggest two different methods: a definition written in a custom DSL, \DSL, and a self-hosted definition that uses only ECMAScript as the definition language. Read more about this in SECTION HERE. \section{Applicable proposals} \label{sec:proposals} @@ -297,7 +297,7 @@ The pipe operator is present in many other languages such as F\#~\cite{FPipeOper The "Do Expression"~\cite{Proposal:DoProposal} proposal is meant to bring a style of \textit{expression oriented programming}~\cite{ExpressionOriented} to ECMAScript. Expression oriented programming is a concept taken from functional programming that allows expressions to be combined in a very free manner, resulting in a highly malleable programming experience. -The motivation of the "Do Expression" proposal is to allow for local scoping of a code block that is treated as an expression.
Thus, complex code requiring multiple statements will be confined inside its own scope~\cite[8.2]{ecma262} and the resulting value is returned from the block implicitly as an expression, similarly to how a unnamed functions or arrow functions are currently used. In order to achieve this behavior in the current stable version of ECMAScript, one needs to use immediately invoked unnamed functions~\cite[15.2]{ecma262} and invoke them immediately, or use an arrow function~\cite[15.3]{ecma262}. +The motivation of the "Do Expression" proposal is to allow for local scoping of a code block that is treated as an expression. Thus, complex code requiring multiple statements will be confined inside its own scope~\cite[8.2]{ecma262} and the resulting value is returned from the block implicitly as an expression, similarly to how unnamed functions or arrow functions are currently used. To achieve this behavior in the current stable version of ECMAScript, one needs to use an immediately invoked unnamed function~\cite[15.2]{ecma262} or an arrow function~\cite[15.3]{ecma262}. The code block of a \texttt{do} expression has one major difference from these equivalent functions: it allows for implicit return, where the final statement of the block becomes the resulting value of the entire \texttt{do} expression. The local scoping of this feature allows for a cleaner environment in the parent scope of the \texttt{do} expression: temporary variables and other single-use assignments can be enclosed inside a limited scope within the \texttt{do} block, leaving a cleaner environment in the parent scope where the \texttt{do} block is defined.
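The equivalence described above can be sketched in plain JavaScript. The \texttt{do} expression is shown only as a comment, since the proposal syntax is not valid in current engines; the immediately invoked arrow function is the currently available workaround:

```javascript
// Hypothetical "do expression" (proposal syntax, not executable today):
//   const price = do { const tmp = 2 + 3; tmp * 10; };
// Current equivalent: an immediately invoked arrow function.
const price = (() => {
  const tmp = 2 + 3; // temporary variable confined to this scope
  return tmp * 10;   // explicit return stands in for the implicit final expression
})();
console.log(price); // 50
```

Note that \texttt{tmp} never leaks into the surrounding scope, which is exactly the cleaner-environment property discussed above.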
@@ -381,7 +381,7 @@ Transforming using this imaginary proposal, will result in a returning the expre \section{Searching user code for applicable snippets} -In order to identify snippets of code in the user's code where a proposal is applicable, we need some way to define patterns of code to use as a query. To do this, we have designed and implemented a domain-specific language that allows matching parts of code that is applicable to some proposal, and transforming those parts to use the features of that proposal. +To identify snippets in the user's code where a proposal is applicable, we need some way to define patterns of code to use as a query. To do this, we have designed and implemented a domain-specific language that allows matching parts of code that are applicable to some proposal, and transforming those parts to use the features of that proposal. \subsection{Structure of \DSL} \label{sec:DSLStructure} @@ -399,7 +399,7 @@ proposal Pipeline_Proposal {} \end{lstlisting} \paragraph*{Case definition.} -Each proposal will have one or more definitions of a template for code to identify in the users codebase, and its corresponding transformation definition. These are grouped together in order to have a simple way of identifying the corresponding cases of matching and transformations. This section of the proposal is defined by the keyword \textit{case} and a block that contains its related fields. A proposal definition in \DSL should contain at least one \texttt{case} definition. This allows for matching many different code snippets and showcasing more of the proposal than a single concept the proposal has to offer. +Each proposal will have one or more definitions of a template for code to identify in the user's codebase, and its corresponding transformation definition. These are grouped together to have a simple way of identifying the corresponding cases of matching and transformation.
This section of the proposal is defined by the keyword \textit{case} and a block that contains its related fields. A proposal definition in \DSL should contain at least one \texttt{case} definition. This allows for matching many different code snippets, showcasing more of the proposal than a single concept. \begin{lstlisting} case case_name { @@ -409,7 +409,7 @@ Each proposal will have one or more definitions of a template for code to identi \paragraph*{Template used for matching} -In order to define the template used to match, we have another section defined by the keyword \textit{applicable to}. This section will contain the template defined using JavaScript with specific DSL keywords defined inside the template. This template is used to identify applicable parts of the user's code to a proposal. +To define the template used to match, we have another section defined by the keyword \textit{applicable to}. This section contains the template, written in JavaScript with specific DSL keywords embedded inside it. This template is used to identify the parts of the user's code that are applicable to a proposal. \begin{lstlisting} applicable to { @@ -420,7 +420,7 @@ This \texttt{applicable to} template, will create matches on any \texttt{Variabl \paragraph*{Defining the transformation} -In order to define the transformation that is applied to a specific matched code snippet, the keyword \textit{transform to} is used. This section is similar to the template section, however it uses the specific DSL identifiers defined in applicable to, in order to transfer the context of the matched user code, this allows us to keep parts of the users code important to the original context it was written in. +To define the transformation that is applied to a specific matched code snippet, the keyword \textit{transform to} is used.
This section is similar to the template section; however, it uses the specific DSL identifiers defined in \texttt{applicable to} to transfer the context of the matched user code. This allows us to keep the parts of the user's code that are important to the original context it was written in. \begin{lstlisting} transform to{ @@ -506,7 +506,7 @@ A wildcard section is defined on the right hand side of an assignment statement. When matching sections of the user's code have been found, we need some way of defining how to transform those sections to showcase a proposal. This is done using the \texttt{transform to} template. This template describes the general structure of the newly transformed code, with context from the user's code transferred through wildcards. -A transformation template defines how the matches will be transformed after applicable code has been found. The transformation is a general template of the code once the match is replaced in the original AST. However, without transferring over the context from the match, this would be a template search and replace. Thus, in order to transfer the context from the match, wildcards are defined in this template as well. These wildcards use the same block notation found in the \texttt{applicable to} template, however they do not need to contain the types, as those are not needed in the transformation. The only required field of the wildcard is the identifier defined in \texttt{applicable to}. This is done in order to know which wildcard match we are taking the context from, and where to place it in the transformation template. +A transformation template defines how the matches will be transformed after applicable code has been found. The transformation is a general template of the code once the match is replaced in the original AST. However, without transferring over the context from the match, this would be a template search and replace. Thus, to transfer the context from the match, wildcards are defined in this template as well.
These wildcards use the same block notation found in the \texttt{applicable to} template; however, they do not need to contain the types, as those are not needed in the transformation. The only required field of the wildcard is the identifier defined in \texttt{applicable to}. This is done to know which wildcard match we are taking the context from, and where to place it in the transformation template. @@ -641,7 +641,7 @@ In Listing \ref{def:doExpression}, the specification of "Do Expression" proposal The imaginary proposal "Await to Promise" is created to transform code snippets from using \texttt{await} to using a promise with equivalent functionality. -This proposal was created in order to evaluate the tool, as it is quite difficult to define applicable code in this current template form. This definition is designed to create matches in code using await, and highlight how await could be written using a promise in stead. This actually highlights some of the issues with the current design of \DSL that will be described in Future Work. +This proposal was created to evaluate the tool, as it is quite difficult to define applicable code in this current template form. This definition is designed to create matches in code using \texttt{await}, and highlight how \texttt{await} could be written using a promise instead. This actually highlights some of the issues with the current design of \DSL that will be described in Future Work. \begin{lstlisting}[language={JavaScript}, caption={Definition of Await to Promise evaluation proposal in \DSL}, label={def:awaitToPromise}] proposal awaitToPomise{ diff --git a/chapter/ch4.tex b/chapter/ch4.tex index d8bd3de..ac1fae3 100644 --- a/chapter/ch4.tex +++ b/chapter/ch4.tex @@ -80,7 +80,7 @@ The rule \texttt{AplicableTo}, is designed to hold a single template used for ma The rule \texttt{TransformTo} is created to contain a single template used for transforming a match.
It starts with the keywords \texttt{transform} and \texttt{to}, followed by a block that holds the transformation definition. This transformation definition is declared with the terminal \texttt{STRING}, and is parsed as a string of characters, the same as the template in \texttt{applicable to}. -In order to define exactly what characters/tokens are legal in a specific definition, Langium uses terminals defined using regular expressions, these allow for a very specific character-set to be legal in specific keys of the AST generated by the parser generated by Langium. In the definition of \texttt{Proposal} and \texttt{Pair} the terminal \texttt{ID} is used; this terminal is limited to allow for only words and can only begin with a character of the alphabet or an underscore. In \texttt{Section} the terminal \texttt{STRING} is used, this terminal is meant to allow any valid JavaScript code and the custom DSL language described in \ref{sec:DSL_DEF}. Both these terminals defined allows Langium to determine exactly what characters are legal in each location. +To define exactly what characters/tokens are legal in a specific definition, Langium uses terminals defined using regular expressions; these allow a very specific character set to be legal in specific keys of the AST generated by the Langium-generated parser. In the definition of \texttt{Proposal} and \texttt{Pair} the terminal \texttt{ID} is used; this terminal is limited to single words that can only begin with a letter of the alphabet or an underscore. In \texttt{Section} the terminal \texttt{STRING} is used; this terminal is meant to allow any valid JavaScript code as well as the custom DSL language described in \ref{sec:DSL_DEF}. Both of these terminals allow Langium to determine exactly which characters are legal in each location.
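The two terminals just described can be sanity-checked with plain JavaScript regular expressions (an illustrative check, not part of the tool itself):

```javascript
// The Langium terminals from the grammar, anchored here for whole-string tests.
const ID = /^[_a-zA-Z][\w_]*$/;       // words starting with a letter or underscore
const STRING = /^("[^"]*"|'[^']*')$/; // single- or double-quoted template bodies

console.log(ID.test("_pipeline1"));   // true
console.log(ID.test("1pipeline"));    // false: may not start with a digit
console.log(STRING.test("'let x = <<a:Identifier>>;'")); // true
```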
\begin{lstlisting}[caption={Definition of \DSL in Langium.}, label={def:JSTQLLangium}] grammar Jstql @@ -112,14 +112,14 @@ terminal ID: /[_a-zA-Z][\w_]*/; terminal STRING: /"[^"]*"|'[^']*'/; \end{lstlisting} -With \DSL, we are not implementing a programming language meant to be executed. We are using Langium in order to generate an AST that will be used as a markup language, similar to YAML, JSON or TOML~\cite{TOML}. The main reason for using Langium in such an unconventional way is Langium provides support for Visual Studio Code integration, and it solves the issue of parsing the definition of each proposal manually. However, with this grammar we cannot actually verify the wildcards placed in \texttt{apl\_to\_code} and \texttt{transform\_to\_code} are correctly written. To do this, we have implemented several validation rules. +With \DSL, we are not implementing a programming language meant to be executed. We are using Langium to generate an AST that will be used as a markup language, similar to YAML, JSON or TOML~\cite{TOML}. The main reason for using Langium in such an unconventional way is that Langium provides support for Visual Studio Code integration and solves the issue of having to parse the definition of each proposal manually. However, with this grammar we cannot actually verify that the wildcards placed in \texttt{apl\_to\_code} and \texttt{transform\_to\_code} are correctly written. To do this, we have implemented several validation rules. \subsection*{Langium Validator} A Langium validator allows for further checks of DSL code; a validator allows for the implementation of specific checks on specific parts of the grammar. -\DSL does not allow empty typed wildcard definitions in \texttt{applicable to} blocks, this means we cannot define a wildcard that allows any AST type to match against it. This is not defined within the grammar, as inside the grammar the code is defined as a \texttt{STRING} terminal. This means further checks have to be implemented using code.
In order to do this we have a specific \texttt{Validator} implemented on the \texttt{Case} definition of the grammar. This means every time anything contained within a \texttt{Case} is updated, the language server created with Langium will perform the validation step and report any errors. +\DSL does not allow empty typed wildcard definitions in \texttt{applicable to} blocks; this means we cannot define a wildcard that allows any AST type to match against it. This is not enforced within the grammar, as there the code is defined as a \texttt{STRING} terminal. This means further checks have to be implemented in code. To do this, we have a specific \texttt{Validator} implemented on the \texttt{Case} definition of the grammar. This means that every time anything contained within a \texttt{Case} is updated, the language server created with Langium will perform the validation step and report any errors. The validator uses \texttt{Case} as its entry point, as this allows for checking wildcards in both \texttt{applicable to} and \texttt{transform to}, verifying that every wildcard identifier used in \texttt{transform to} exists in the definition of \texttt{applicable to}. @@ -155,7 +155,7 @@ When interfacing with the Langium parser to get the Langium generated AST, the e \section{Wildcard extraction and parsing} -In order to refer to internal DSL variables defined in \texttt{applicable to} and \texttt{transform to} blocks of the transformation, we need to extract this information from the template definitions and pass that on to the matcher. +To refer to internal DSL variables defined in \texttt{applicable to} and \texttt{transform to} blocks of the transformation, we need to extract this information from the template definitions and pass that on to the matcher.
\subsection*{Why not use Langium for wildcard parsing?} @@ -163,11 +163,11 @@ Langium has support for creating a generator to output an artifact, which is som \subsection*{Extracting wildcards from \DSL} -In order to allow the use of Babel~\cite{Babel}, the wildcards present in the \texttt{applicable to} blocks and \texttt{transform to} blocks have to be parsed and replaced with some valid JavaScript. This is done by using a pre-parser that extracts the information from the wildcards and inserts an \texttt{Identifier} in their place. +To allow the use of Babel~\cite{Babel}, the wildcards present in the \texttt{applicable to} blocks and \texttt{transform to} blocks have to be parsed and replaced with some valid JavaScript. This is done by using a pre-parser that extracts the information from the wildcards and inserts an \texttt{Identifier} in their place. To extract the wildcards from the template, we look at each character in the template. If a start token of a wildcard is discovered, denoted by \texttt{<<}, everything after it until the closing token, denoted by \texttt{>>}, is treated as an internal DSL variable and will be stored by the tool. A variable \texttt{flag} is used (lines 5 and 10 of Listing~\ref{lst:extractWildcard}); when the value of \texttt{flag} is false, we know we are currently not inside a wildcard block, and the character is passed through to the variable \texttt{cleanedJS} (line 196 of Listing~\ref{lst:extractWildcard}). When \texttt{flag} is true, we know we are currently inside a wildcard block, and we collect every character of the wildcard block into \texttt{temp}. Once we hit the end of the wildcard block, having consumed the entirety of the wildcard, the contents of the \texttt{temp} variable are passed to a tokenizer, and the tokens are then parsed by a recursive descent parser (lines 10-21 of Listing~\ref{lst:extractWildcard}).
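The character scan described above can be sketched as follows. This is an illustrative reimplementation rather than the tool's actual \texttt{parseInternal}, and the plain \texttt{\_\_} placeholder prefix is an assumption made here for readability:

```javascript
// Sketch of wildcard pre-parsing: strip << ... >> blocks out of the template,
// record their contents, and insert a placeholder identifier in their place.
function extractWildcards(template) {
  const wildcards = [];
  let cleanedJS = "";
  let temp = "";
  let flag = false; // are we currently inside a wildcard block?
  for (let i = 0; i < template.length; i++) {
    if (!flag && template.startsWith("<<", i)) {
      flag = true;
      i++; // skip the second '<'
    } else if (flag && template.startsWith(">>", i)) {
      flag = false;
      i++; // skip the second '>'
      wildcards.push(temp); // handed to the tokenizer/parser in the real tool
      cleanedJS += "__" + temp.split(":")[0].trim(); // placeholder identifier
      temp = "";
    } else if (flag) {
      temp += template[i]; // collecting the wildcard's contents
    } else {
      cleanedJS += template[i]; // ordinary JavaScript passes straight through
    }
  }
  return { cleanedJS, wildcards };
}

console.log(extractWildcards("let <<x:Identifier>> = 1;"));
// → { cleanedJS: "let __x = 1;", wildcards: ["x:Identifier"] }
```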
-Once the wildcard is parsed, and we know it is safely a valid wildcard, we insert an identifier into the JavaScript template where the wildcard would reside. This allows for easier identifications of wildcards when performing matching/transformation as we can identify whether or not an Identifier in the code is the same as the identifier for a wildcard. This however, does introduce the problem of collisions between the wildcard identifiers inserted and identifiers present in the users code. In order to avoid this, the tool adds \texttt{\_\-\-\_} at the beginning of every identifier inserted in place of a wildcard. This allows for easier identification of if an Identifier is a wildcard, and avoids collisions where a variable in the user code has the same name as a wildcard inserted into the template. This can be seen on line 17 of Listing~\ref{lst:extractWildcard}. +Once the wildcard is parsed, and we know it is a valid wildcard, we insert an identifier into the JavaScript template where the wildcard would reside. This allows for easier identification of wildcards when performing matching/transformation, as we can check whether an \texttt{Identifier} in the code is the same as the identifier for a wildcard. This, however, does introduce the problem of collisions between the inserted wildcard identifiers and identifiers present in the user's code. To avoid this, the tool adds \texttt{\_\-\-\_} at the beginning of every identifier inserted in place of a wildcard. This makes it easier to identify whether an \texttt{Identifier} is a wildcard, and avoids collisions where a variable in the user code has the same name as a wildcard inserted into the template. This can be seen on line 17 of Listing~\ref{lst:extractWildcard}.
\begin{lstlisting}[language={JavaScript}, caption={Extracting wildcard from template.}, label={lst:extractWildcard}] export function parseInternal(code: string): InternalParseResult { @@ -250,9 +250,9 @@ Our recursive descent parser produces an AST, which is later used to determine w \paragraph*{Extracting wildcards from \DSLSH} -The self-hosted version \DSLSH also requires some form of pre-parsing in order to prepare the internal DSL environment. This is relatively minor and only parsing directly with no insertion compared to \DSL. +The self-hosted version \DSLSH also requires some form of pre-parsing to prepare the internal DSL environment. Compared to \DSL, this step is relatively minor: it only parses directly, with no insertion. -In order to use JavaScript as the meta language, we define a \texttt{prelude} on the object used to define the transformation case. This prelude is required to consist of several \texttt{Variable declaration} statements, where the variable names are used as the internal DSL variables and right side expressions are strings that contain the type expression used to determine a match for that specific wildcard. +To use JavaScript as the meta language, we define a \texttt{prelude} on the object used to define the transformation case. This prelude is required to consist of several \texttt{Variable declaration} statements, where the variable names are used as the internal DSL variables and the right-hand side expressions are strings that contain the type expression used to determine a match for that specific wildcard. We use Babel to generate the AST of the \texttt{prelude} definition; this gives us a JavaScript object structure. Since the structure is very strictly defined, we can expect every \texttt{stmt} of \texttt{stmts} to be a variable declaration, and otherwise throw an error for an invalid prelude. Then the string value of each of the variable declarations is passed to the same parser used for \DSL wildcards.
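A prelude of this shape can be illustrated as follows. The case object, its property name, and the type expressions are hypothetical, based only on the description above; the line-based check is a naive stand-in for the Babel-based validation:

```javascript
// Hypothetical self-hosted case definition: each prelude statement declares
// one internal DSL variable whose string value is its type expression.
const caseDefinition = {
  prelude: [
    'let asyncExpr = "Expression";',
    'let restOfBody = "Statement+";',
  ].join("\n"),
};

// Every prelude statement must be a variable declaration; anything else is an
// invalid prelude (the real tool checks this on the Babel AST, not with a regex).
const statements = caseDefinition.prelude
  .split("\n")
  .filter((line) => line.trim().length > 0);
for (const stmt of statements) {
  if (!/^(let|const|var)\s+\w+\s*=\s*".*";$/.test(stmt.trim())) {
    throw new Error("Invalid prelude statement: " + stmt);
  }
}
console.log(statements.length); // 2
```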
@@ -261,14 +261,14 @@ The reason this is preferred is it allows us to avoid having to extract the wild \section{Using Babel to parse} \label{sec:BabelParse} -Allowing the tool to perform transformations of code requires the generation of an Abstract Syntax Tree from the users code, \texttt{applicable to} and \texttt{transform to}. This means parsing JavaScript into an AST, in order to do this we use Babel~\cite{Babel}. +Allowing the tool to perform transformations of code requires the generation of an Abstract Syntax Tree from the user's code, \texttt{applicable to} and \texttt{transform to}. This means parsing JavaScript into an AST; to do this we use Babel~\cite{Babel}. The most important reason for choosing Babel to generate the ASTs used for transformation is the JavaScript community surrounding it. As this tool deals with proposals before they are part of JavaScript, a parser that supports early proposals for JavaScript is required. Babel works closely with TC39 to support experimental syntax~\cite{BabelProposalSupport} through its plugin system, which allows the parsing of code not yet part of the language. \subsection*{Custom Tree Structure} -Performing matching and transformation on each of the sections inside a \texttt{case} definition, they have to be parsed into and AST in order to allow the tool to match and transform accordingly, for this we use Babel~\cite{Babel}. +To perform matching and transformation on each of the sections inside a \texttt{case} definition, they have to be parsed into an AST to allow the tool to match and transform accordingly; for this we use Babel~\cite{Babel}.
However, Babel's AST structure does not suit traversing multiple trees at the same time, which is a requirement for matching and transforming. Therefore we take the AST and transform it into a simple custom tree structure that allows for simple traversal. As can be seen in \figFull[def:TreeStructure], we use a recursive definition of a \texttt{TreeNode} where a node's parent either exists or is null (it is the top of the tree), and a node can have any number of child elements. This definition allows for simple traversal both up and down the tree, which means traversing two trees at the same time can be done in the matcher and transformer sections of the tool. @@ -291,7 +291,7 @@ Placing the AST generated by Babel into this structure means utilizing the libra To place the AST into our tree structure, we use \texttt{@babel/traverse}~\cite{BabelTraverse} to visit each node of the AST in a \textit{depth first} manner. The idea is that we implement a \textit{visitor} for each of the node types in the AST, and when a specific node is encountered, the corresponding visitor of that node is used to visit it. When transferring the AST into our simple tree structure, we simply use the same visitor for every kind of AST node, and place that node into the tree. -Visiting a node using the \texttt{enter()} function means we went from the parent to that child node, and it should be added as a child node of the parent. +Visiting a node using the \texttt{enter()} function means we went from the parent to that child node, and it should be added as a child node of the parent.
The node is automatically added to its parent's list of child nodes by the constructor of \texttt{TreeNode}. Whenever we leave a node, the function \texttt{exit()} is called; this means we are moving back up the tree, and we have to update which node was the \textit{last} to generate the correct tree structure. \begin{lstlisting}[language={JavaScript}] traverse(ast, { diff --git a/chapter/ch5.tex index a945467..61311bd 100644 --- a/chapter/ch5.tex +++ b/chapter/ch5.tex @@ -4,7 +4,7 @@ In this chapter we will discuss how we evaluated \DSL and its related tools. Thi \section{Real Life source code} -In order to perform actual large scale trial of this program, we have collected some github projects containing many or large JavaScript files. Every JS file within the project is then passed through the entire tool, and we will evaluate it based upon the amount of matches discovered, as well as manual checking that the transformation resulted in correct code on the matches. +To perform an actual large-scale trial of this program, we have collected some GitHub projects containing many or large JavaScript files. Every JS file within the project is then passed through the entire tool, and we evaluate it based upon the number of matches discovered, as well as manual checking that the transformation resulted in correct code on the matches. Each case study was evaluated by running this tool on every .js file in the repository, then collecting the number of matches found in total and how many files were successfully searched. Evaluating whether the transformation was correct is done by manually sampling output files and verifying that they pass through Babel Generate~\cite{BabelGenerate} without error.
diff --git a/chapter/future_work.tex index 28add44..cdd2085 100644 --- a/chapter/future_work.tex +++ b/chapter/future_work.tex @@ -2,9 +2,7 @@ \section{Conclusions} -In this thesis, we have developed a way to define transformations of JavaScript based on a proposal definition. This tool is created to facilitate tooling that enables early feedback on syntactic proposals for ECMAScript. - - +In this thesis, we have developed a way to define transformations of JavaScript based on a proposal definition. The idea this thesis set out to explore is facilitating tooling that enables early feedback on syntactic proposals for ECMAScript. The tool created allows for matching and transformation of user code based on a proposal definition, and is meant to be the initial step of gathering user feedback by using a user's familiarity with their own code. Currently we support transformations of "Do expressions" and "Pipeline", and other syntactic proposals are definable in this tool. \section{Future Work} diff --git a/chapter/related_work.tex index 6c95eca..4b650cf 100644 --- a/chapter/related_work.tex +++ b/chapter/related_work.tex @@ -83,7 +83,7 @@ When doing structural search in Jetbrains IntelliJ IDEA, templates are used to d This tool is an interactive experience, where each match is showcased in the find tool, and the developer can decide which matches to apply the replace template to. This allows for error avoidance and a stricter search that is verified by humans. If the developer wants, they can skip verifying each match and simply replace everything. -When comparing this tool to \DSL and its corresponding program, there are some similarities. They are both template based, which means a search uses a template to define query, both templates contain variables/wildcards in order to match against a free section, and the replacing structure is also a template based upon those same variables.
A way of matching the variables/wildcards of structural search and replace also exists, one can define the amount of X node to match against, similar to the \texttt{+} operator used in \DSL. A core difference between \DSL and structural search and replace is the variable type system. When performing a match and transformation in \DSL the types are used extensively to limit the match against the wildcards, while this limitation is not possible in structural search and replace. +When comparing this tool to \DSL and its corresponding program, there are some similarities. They are both template based, which means a search uses a template to define a query, both templates contain variables/wildcards to match against a free section, and the replacing structure is also a template based upon those same variables. A way of constraining the variables/wildcards of structural search and replace also exists: one can define the number of nodes to match against, similar to the \texttt{+} operator used in \DSL. A core difference between \DSL and structural search and replace is the variable type system. When performing a match and transformation in \DSL, the types are used extensively to limit what matches against the wildcards, while this limitation is not possible in structural search and replace. \section{Other JavaScript parsers} @@ -102,7 +102,7 @@ Compared to Babel used in this paper, SWC focuses on speed, as its main selling \subsection*{Acorn} -Acorn~\cite{AcornJS} is parser written in JavaScript to parse JavaScript and it's related languages. Acorn focuses on plugin support in order to support extending and redefinition of how it's internal parser works. Acorn focuses on being a small and fast JavaScript parser, has it's own tree traversal library Acorn Walk. Babel is originally a fork of Acorn, while Babel has since had a full rewrite, Babel is still heavily based on Acorn and Acorn-jsx~\cite{BabelAcornBased}.
+Acorn~\cite{AcornJS} is a parser written in JavaScript for parsing JavaScript and its related languages. Acorn focuses on plugin support, allowing extension and redefinition of how its internal parser works. Acorn aims to be a small and fast JavaScript parser, and has its own tree traversal library, Acorn Walk. Babel was originally a fork of Acorn; while Babel has since had a full rewrite, it is still heavily based on Acorn and Acorn-jsx~\cite{BabelAcornBased}. Acorn suffers from a similar problem to SWC when it was considered for use in this project. It does not have the same wide community as Babel, and does not have the same recommendation from TC39 as Babel does~\cite{TC39RecommendBabel}. Even though it supports plugins and the plugin system is powerful, there are not the same number of pre-made plugins for early-stage proposals as Babel has.