{"id":1806,"date":"2017-01-09T10:33:38","date_gmt":"2017-01-09T15:33:38","guid":{"rendered":"http:\/\/cs4760.csl.mtu.edu\/2017\/?page_id=1806"},"modified":"2017-03-03T10:16:03","modified_gmt":"2017-03-03T15:16:03","slug":"mashup-programming","status":"publish","type":"page","link":"http:\/\/cs4760.csl.mtu.edu\/2017\/lectures\/mashup-programming\/","title":{"rendered":"Mashup Programming"},"content":{"rendered":"<h1><span style=\"font-weight: 400;\">Introduction<\/span><\/h1>\n<h2><span style=\"font-weight: 400;\">Mashup and Use<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Mashup or mashing-up is when a website makes requests to multiple services to provide content requested by a user. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">It is a relatively new technique that enables a website to leverage the services from other sites and data providers. Mashing-up can be used to customize a service, e.g. simplifying the interaction. It can also be used to enhance a service, e.g. adding additional features to a service.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Request Flow<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">The typical request flow is that the user makes a request to the host website, the host website responds to the user&#8217;s request by making multiple requests to remote services, then packages the responses from the remote services into a map, and finally passes the map to the view to respond to the user&#8217;s request. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this request flow there are two types of requests and responses. There is the request made by the user to the host website and the requests made by the host website to remote services. Likewise there are two types of responses: the responses made by the remote services and the response made by the host website to the user. These requests and responses are distinct and illustrated below. 
<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre>Request flow: \u00a0User --------&gt;&gt; Host website -------&gt;&gt; Remote service\r\nResponse flow: User &lt;&lt;-------- Host website &lt;&lt;-------\u00a0Remote service<\/pre>\n<p><span style=\"font-weight: 400;\">For a simple request made by the user, e.g. clicking on a link, the request is encoded in the URL and Grails routes the request to a controller and its action. In a more complex case, e.g. clicking on a submit button, Grails packages the form data into the &#8220;request&#8221; object and sends it to the controller&#8217;s action.<\/span><\/p>\n<p><a href=\"http:\/\/docs.grails.org\/latest\/ref\/Servlet%20API\/request.html\"><span style=\"font-weight: 400;\">http:\/\/docs.grails.org\/latest\/ref\/Servlet%20API\/request.html<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">In Grails, or any MVC framework, requests are routed to and handled by the controller. The controller may need to access the domain or other services to fulfill the request. After accessing the services the controller responds to the request by packaging the data into a map and sending the map to the view. To be consistent with this design pattern, the controller&#8217;s action should make the requests to the remote services and package the data, i.e. the responses from the remote services are packaged into a map for the view. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The request object in Grails is named in reference to the host website, meaning that it is a request made of the website. It is not the object to use for the website to make a request to a remote service. A different object must be used by the controller to make requests to remote services. 
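<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To make the pattern concrete, the controller sketch below packages a remote response into the map returned to the view. The controller name, action, and the stubbed-out remote call are all hypothetical; the stub only marks where the request to the remote service would be made.<\/span><\/p>\n<pre>\/\/ Hypothetical Grails controller sketch (illustrative names).\r\nclass ErosionController {\r\n    def show() {\r\n        \/\/ The action makes the request to the remote service ...\r\n        def remoteData = fetchEstimate(params)\r\n        \/\/ ... and packages the response into the map handed to the view\r\n        [site: params.site, estimate: remoteData]\r\n    }\r\n\r\n    \/\/ Stub standing in for a real remote request\r\n    private Map fetchEstimate(params) {\r\n        [erosion: 0.0]\r\n    }\r\n}<\/pre>\n<p><span style=\"font-weight: 400;\">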
Groovy calls this object HTTPBuilder.<\/span><\/p>\n<p><a href=\"https:\/\/github.com\/jgritman\/httpbuilder\/wiki\"><span style=\"font-weight: 400;\">https:\/\/github.com\/jgritman\/httpbuilder\/wiki<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">In other languages HTTPBuilder is called cURL.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/en.wikipedia.org\/wiki\/CURL\"><span style=\"font-weight: 400;\">https:\/\/en.wikipedia.org\/wiki\/CURL<\/span><\/a><\/li>\n<li><a href=\"http:\/\/php.net\/manual\/en\/book.curl.php\"><span style=\"font-weight: 400;\">http:\/\/php.net\/manual\/en\/book.curl.php<\/span><\/a><\/li>\n<li><a href=\"https:\/\/curl.haxx.se\/\"><span style=\"font-weight: 400;\">https:\/\/curl.haxx.se\/<\/span><\/a><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Curl is a funny name, but it is really cURL, short for &#8220;client URL.&#8221; Groovy has a URL class, but it is not nearly as powerful as curl.<\/span><\/p>\n<p><a href=\"http:\/\/docs.groovy-lang.org\/latest\/html\/groovy-jdk\/java\/net\/URL.html\"><span style=\"font-weight: 400;\">http:\/\/docs.groovy-lang.org\/latest\/html\/groovy-jdk\/java\/net\/URL.html<\/span><\/a><\/p>\n<h2><span style=\"font-weight: 400;\">HTTPBuilder<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Using HTTPBuilder is a two step process. First you make an HTTPBuilder object using the domain name of the site you wish to make the request to, and then using the HTTPBuilder object you make the request, specifying the method and content-type. The third argument to the request is a closure for handling the different responses, e.g. 200 or 404. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">HTTPBuilder can handle two types of request methods, GET and POST. 
We&#8217;ll study the easier of the two methods first.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">GET Method<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">This section references the HTTPBuilder wiki at <\/span><\/p>\n<p><a href=\"https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/GET-Examples\"><span style=\"font-weight: 400;\">https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/GET-Examples<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">The code for the first GET example:<\/span><\/p>\n<pre>\/**\r\n * Grab gets the http-builder jar from the maven site\r\n *\r\n * If you get the error\r\n *\r\n *       error groovyc cannot @Grab without Ivy\r\n *\r\n * then\r\n *      1. Download the binary for Ivy at\r\n *            http:\/\/ant.apache.org\/ivy\/\r\n *      2. Unzip and extract the jar\r\n *      3. Put it in a nearby directory\r\n *      4. Add it as a module to the project by\r\n *             i. File -&gt; Project structure -&gt; Modules -&gt; Dependencies\r\n *             ii. Add by clicking on the \"+\" on the right, select JARs\r\n *             iii. 
Navigate to where you put the Ivy jar\r\n *      Reference\r\n *        https:\/\/intellij-support.jetbrains.com\/hc\/en-us\/community\/posts\/206913575-Installing-Ivy-plugin-\r\n *\/\r\n@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')\r\n\r\nimport groovyx.net.http.HTTPBuilder\r\nimport static groovyx.net.http.Method.GET\r\nimport static groovyx.net.http.ContentType.TEXT\r\n\r\ndef http = new HTTPBuilder(\"http:\/\/example.org\")\r\n\r\n\/* This works *\/\r\nhttp.request(GET, TEXT ){ req -&gt;\r\n    response.success = { resp, reader -&gt;\r\n        println \"success\"\r\n        println \"My response handler got response: ${resp.statusLine}\"\r\n        println \"Response length: ${resp.headers.'Content-Length'}\"\r\n        System.out &lt;&lt; reader\r\n    }\r\n    response.'404' = {println \"Not Found\"}\r\n}<\/pre>\n<p><span style=\"font-weight: 400;\">HTTPBuilder currently is not part of standard Groovy, so you have to get the HTTPBuilder API from the Maven repository. 
When I first tried to use Grab, my program got an error:<\/span><\/p>\n<pre>error groovyc cannot @Grab without Ivy<\/pre>\n<p><span style=\"font-weight: 400;\">I followed the instructions for adding Ivy to the build for the script at <\/span><\/p>\n<p><a href=\"https:\/\/intellij-support.jetbrains.com\/hc\/en-us\/community\/posts\/206913575-Installing-Ivy-plugin-\"><span style=\"font-weight: 400;\">https:\/\/intellij-support.jetbrains.com\/hc\/en-us\/community\/posts\/206913575-Installing-Ivy-plugin-<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">I installed the jar in a directory called &#8220;jars&#8221; in my workspace and then added it to the module by<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">File -&gt; Project structure -&gt; Modules -&gt; Dependencies <\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Add by clicking on the &#8220;+&#8221; on the right, select JARs<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Navigate to where you put the Ivy jar<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Grab then works for all Groovy scripts in the workspace. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">On &#8220;success&#8221;, the script just outputs the text of the HTML code. Run the code from your own development machine. Try other websites. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">HTTPBuilder has convenience methods for both GET and POST request methods. The convenience methods return the default response. For success, the get method returns the HTML as a parsed DOM. Your program can then navigate the DOM and extract the text and attributes from the nodes. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Below is example code using the convenience get method and navigating the DOM.<\/span><\/p>\n<pre>\/**\r\n * Created by Robert Pastel on 1\/8\/2017.\r\n *\/\r\n\/\/ Grab HTTPBuilder component from maven repository\r\n@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')\r\n\r\n\/\/ import of HttpBuilder related stuff\r\nimport groovyx.net.http.HTTPBuilder\r\n\r\ndef http = new HTTPBuilder(\"http:\/\/example.org\")\r\n\r\nhtml = http.get(path : '')\r\nprintln \"html: \"\r\nprintln html\r\n\r\n\/\/ Now try traversing the DOM\r\nprintln \"Extract text from nodes\"\r\nprintln \"H1: \" + html.BODY.DIV.H1\r\nprintln \"Anchor: \" + html.BODY.DIV.P.A\r\nprintln \"\"\r\n\r\n\/\/ Extract attributes\r\nprintln \"Extract attributes from nodes\"\r\nprintln \"href: \" + html.BODY.DIV.P.A.@href\r\nprintln \"\"\r\n\r\n\/\/ Extract the name of a tag\r\nprintln \"Extract names of tags\"\r\nprintln \"html name: \" + html.name()\r\nprintln \"html.BODY.DIV name: \" + html.BODY.DIV.name()\r\nprintln \"\"\r\n\r\n\/\/ Depth first search of nodes\r\nprintln \"Find all paragraph elements\"\r\nhtml.\"**\".findAll {it.name() == \"P\"}.each{\r\n    println \"\"\r\n    println it\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">In the example code, the &#8220;html&#8221; object is the parsed DOM. It is a GPath object, actually a GPathResult object:<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"http:\/\/www.groovy-lang.org\/processing-xml.html#_gpath\">http:\/\/www.groovy-lang.org\/processing-xml.html#_gpath<\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\">Printing &#8220;html&#8221; prints the entire GPath object, but only the content of the tags, not the tags themselves or their attributes. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">You can traverse the DOM by designating the tag sequence down the branch. 
For example:<\/span><\/p>\n<pre>html.BODY.DIV.H1<\/pre>\n<p><span style=\"font-weight: 400;\">will navigate into the html tag, then the body tag, to the first div and then the h1 tag. While:<\/span><\/p>\n<pre>html.BODY.DIV.P.A<\/pre>\n<p><span style=\"font-weight: 400;\">will navigate from html to body to the first div, the first paragraph, and finally the anchor tag. To get the value of a tag attribute, use the &#8220;@&#8221; operator to navigate into the tag. For example \u00a0<\/span><\/p>\n<pre>html.BODY.DIV.P.A.@href<\/pre>\n<p><span style=\"font-weight: 400;\">retrieves the value of the href attribute of the anchor tag. To get the name of the tag, use the name() method. For example<\/span><\/p>\n<pre>html.name()<\/pre>\n<p><span style=\"font-weight: 400;\">returns &#8220;HTML.&#8221; You need the parentheses for the name() method; otherwise GPath will think it is looking for the next tag. This may not seem very useful, since you already know the name of the tag, but you can also make breadth first and depth first searches in the GPath object. Then you will want the name of the tag. Breadth first and depth first searches have shorthand notations, &#8220;*&#8221; for breadth first search and \u00a0&#8220;**&#8221; for depth first. 
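<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You can try the GPath navigation and searches offline, without a web request, by parsing a small document with Groovy&#8217;s XmlSlurper. The snippet below is an illustrative sketch; the sample HTML is made up.<\/span><\/p>\n<pre>\/\/ Parse a small hand-written document, no network needed.\r\n\/\/ XmlSlurper also returns a GPathResult, like HTTPBuilder's get method.\r\ndef html = new XmlSlurper().parseText(\r\n    '&lt;HTML&gt;&lt;BODY&gt;&lt;DIV&gt;&lt;H1&gt;Title&lt;\/H1&gt;&lt;P&gt;&lt;A href=\"\/x\"&gt;link&lt;\/A&gt;&lt;\/P&gt;&lt;\/DIV&gt;&lt;\/BODY&gt;&lt;\/HTML&gt;')\r\n\r\nprintln html.BODY.DIV.H1          \/\/ navigate down the branch: prints Title\r\nprintln html.BODY.DIV.P.A.@href   \/\/ attribute access: prints \/x\r\nprintln html.name()               \/\/ name of the root tag: prints HTML\r\n\r\n\/\/ Depth first search for every paragraph tag\r\nhtml.'**'.findAll { it.name() == 'P' }.each { println it }<\/pre>\n<p><span style=\"font-weight: 400;\">The shorthand searches are documented at 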
<\/span><\/p>\n<p><a href=\"http:\/\/www.groovy-lang.org\/processing-xml.html#_speed_things_up_with_breadthfirst_and_depthfirst\"><span style=\"font-weight: 400;\">http:\/\/www.groovy-lang.org\/processing-xml.html#_speed_things_up_with_breadthfirst_and_depthfirst<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">For example, we can use the depth first search with the findAll method to find all the paragraphs: <\/span><\/p>\n<pre>html.\"**\".findAll {it.name() == \"P\"}.each{...}<\/pre>\n<p><span style=\"font-weight: 400;\">Copy the above code, load another webpage and navigate its DOM.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">POST Method<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Requesting by a POST method requires a post body. Typically this is the form data sent when a user clicks submit. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">HTTPBuilder has a &#8220;post&#8221; convenience method that has a &#8220;body&#8221; argument. The body argument is a map or JSON, which HTTPBuilder will encode. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The example code below uses the &#8220;restmirror.appspot.com&#8221; web site to send a post. 
The &#8220;restmirror&#8221; site just mirrors the post back.<\/span><\/p>\n<pre>\/**\r\n * Created by Robert Pastel on 11\/12\/2016.\r\n *\/\r\n\/\/ Grab HTTPBuilder component from maven repository\r\nimport groovy.json.JsonSlurper\r\n@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')\r\n\r\nimport groovyx.net.http.HTTPBuilder\r\nimport static groovyx.net.http.ContentType.*\r\n\r\ndef http = new HTTPBuilder( 'http:\/\/restmirror.appspot.com\/' )\r\ndef postBody = [name: [first: 'robert', last: 'pastel'], title: 'programmer'] \/\/ will be JSON-encoded\r\n\r\ndef html = http.post(path:'\/', body: postBody, requestContentType: JSON)\r\n\r\nprintln \"*** html ****\"\r\nprintln html\r\nprintln \"\"\r\n\r\n\/\/ Use as a JSON\r\n\/\/ Unfortunately the response is not really json,\r\n\/\/ so we make a JSON\r\ndef jsonSlurper = new JsonSlurper()\r\ndef json = jsonSlurper.parseText(html.toString())\r\nprintln \"json.name.first = \" + json.name.first<\/pre>\n<p><span style=\"font-weight: 400;\">Unfortunately the response from &#8220;restmirror&#8221; is not really a JSON, but a node from a DOM. We have to convert the response to a String and then use JsonSlurper to convert the response to a JSON. <\/span><\/p>\n<h1><span style=\"font-weight: 400;\">Mashing Up an Old Website: Disturbed WEPP<\/span><\/h1>\n<h2><span style=\"font-weight: 400;\">Explanation of Current Website<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">This is an old style mashup example. It demonstrates programmatically submitting a website form and parsing a response that is a web page and text. It requires thorough analysis of the website, what it is doing and returning to the user. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Visit <\/span><\/p>\n<p><a href=\"https:\/\/forest.moscowfsl.wsu.edu\/cgi-bin\/fswepp\/wd\/weppdist.pl\"><span style=\"font-weight: 400;\">https:\/\/forest.moscowfsl.wsu.edu\/cgi-bin\/fswepp\/wd\/weppdist.pl<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">The website is for Forest Service personnel to estimate the erosion at slopes. The basic function of the webpage is a form formatted as a table that the user submits by clicking the &#8220;Run WEPP&#8221; button. When the user clicks the &#8220;Run WEPP&#8221; button, JavaScript collects the parameters entered in the table and sends the parameters to a Perl script which in turn feeds the parameters to a model called WEPP. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In order to make the estimate of erosion, the WEPP model needs:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The number of years for the estimate. See the &#8220;Years to simulate&#8221; field.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The climate model for the region. See the selection field under &#8220;Climate.&#8221; <\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The soil texture. See the selection field under &#8220;Soil Texture&#8221; <\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Parameters for the slope, which is composed of two parts. See table rows &#8220;Upper&#8221; and &#8220;Lower.&#8221; The slope parameters include:<\/span>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The vegetation or treatment. See the selection field under &#8220;Vegetation\/Treatment&#8221;. \u00a0\u00a0<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The gradient of the slope. 
See the &#8220;Gradient&#8221; field.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The length of the slope. See the &#8220;Horizontal Length&#8221; field.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The percent coverage of the slope, which is complicated. Click the &#8220;?&#8221; adjacent to the &#8220;Cover&#8221; field. <\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The percent of rocks on the slope surface. See the &#8220;Rock&#8221; field.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Try running the model for different values. Be sure to try several different values for &#8220;Years to simulate.&#8221; The response to clicking the &#8220;Run WEPP&#8221; button is another webpage with some of the model output in several tables. \u00a0In addition, the results webpage has links at the bottom. The first five just show the input parameters that the user gave to the model. The last link, &#8220;WEPP results&#8221;, shows the complete output from the WEPP model. Click on the link. You&#8217;ll see that it is plain text with many tables. The four tables in the results web page are derived from the tables in the complete output from WEPP.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Client Goals and Basic Implementation<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">In a sense, this website simplifies the use of a complex model developed by scientists. The website enables Forest Service personnel to use a complex model by simplifying the input and output of the script. Our scientist\/client wants citizens to be able to use the website. Our client understands that the current &#8220;Disturbed WEPP&#8221; website is too complex for untrained citizens to use. 
Our client proposes several changes to simplify the website:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Making the slope have only one gradient instead of &#8220;upper&#8221; and &#8220;lower.&#8221;<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The client has a geodatabase, so that it can determine the soil texture, rock coverage and possibly the slope given the latitude and longitude location of the slope.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The result can be only one table, the &#8220;Return period analysis&#8221; table, and a single parameter from the complete results, &#8220;AVERAGE ANNUAL SEDIMENT LEAVING PROFILE&#8221; in tons per hectare (t\/ha).<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">We can implement the client&#8217;s desires by mashing up. We can make our own website with a form that inputs only the parameters we need. Our website can make remote service requests to the client&#8217;s geodatabase and then call the script that the &#8220;Disturbed WEPP&#8221; website calls, with the body that our program constructs. When we get the response, we then parse the webpage, extracting the results that we want, and display them on our website. <\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Analysis of Current Website<\/span><\/h2>\n<h3><span style=\"font-weight: 400;\">Request Analysis<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">We know our goal and basically how to implement it; now it is time to get to work. First we must study how the website works. Run the website by clicking on the &#8220;Run WEPP&#8221; button. On the results web page, view the developer tools. In Chrome, right click the page and select &#8220;Inspect Elements.&#8221; Make sure the &#8220;Network&#8221; tab is showing and displays the list of requests made by the web page. You may have to click &#8220;Network&#8221; and refresh the page. 
At the top of the list of network requests should be &#8220;wd.pl&#8221;, run as a POST method. This is the script that returns the results for the original request of the webpage. Click &#8220;wd.pl&#8221;, it is a link, and you should see an accordion with sections &#8220;General&#8221;, &#8220;Response Headers&#8221;, &#8220;Request Headers&#8221;, and &#8220;Form Data.&#8221; We are interested in the &#8220;Form Data&#8221;, so click the arrow adjacent to &#8220;Form Data&#8221; to see the details. What you see is the body of the post. If it is not well formatted, click &#8220;view parsed.&#8221; If it is well formatted, you can view the original format by clicking &#8220;view source.&#8221; We want to look at the well formatted form data. It is the map (body) of the post request sent to the script that runs the WEPP model. What the keys correspond to in the form input should be obvious. If it is not obvious, then you can play with the form website, inputting values that you can recognize in this list. \u00a0<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">Web Page Structure<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Now let us analyze the results page structure. Right click on the results web page and select &#8220;View page source.&#8221; In the HEAD of the source are several long scripts. They are basically what is run when the user clicks the links at the bottom of the page. We&#8217;ll ignore them for the time being, but will inspect them later. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Scan down to the BODY of the source. It is at the very bottom. Now search for the table of interest, &#8220;Return period analysis.&#8221; Recall that we use HTTPBuilder to make the request, which will give us a GPath to find our table in the DOM. What is the path to the table? 
Working backwards through the tags, you discover that it is <\/span><\/p>\n<pre>html.BODY.FONT.BLOCKQUOTE.CENTER.P.TABLE<\/pre>\n<p><span style=\"font-weight: 400;\">But there are several tables in the web page, so this path may not be unique. Luckily for us, it is a unique path. It is the only table within a paragraph tag, &lt;p&gt; \u2026 &lt;\/p&gt;, after the center tag. If it were not, we would have to search for the CENTER tags and check that the H3 tag had content beginning with &#8220;Return period analysis.&#8221; We could then grab the table within that center tag.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Now go to the very bottom of the BODY where the links at the bottom of the page are defined. Note that the last one is for &#8220;WEPP results&#8221;<\/span><\/p>\n<pre>&lt;a href=\"javascript:void(showextendedoutput())\"&gt;WEPP results&lt;\/a&gt;<\/pre>\n<p><span style=\"font-weight: 400;\">The href defines the JavaScript function to run. The JavaScript &#8220;void&#8221; operator results in the web browser showing the results of the JavaScript on a new page. It is a trick.<\/span><\/p>\n<ul>\n<li><a href=\"http:\/\/stackoverflow.com\/questions\/1291942\/what-does-javascriptvoid0-mean\"><span style=\"font-weight: 400;\">http:\/\/stackoverflow.com\/questions\/1291942\/what-does-javascriptvoid0-mean<\/span><\/a><\/li>\n<li><a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/JavaScript\/Reference\/Operators\/void\"><span style=\"font-weight: 400;\">https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/JavaScript\/Reference\/Operators\/void<\/span><\/a><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Search for the JavaScript function, showextendedoutput; it is near the top of the page. In fact it is the bulk of the source. 
You&#8217;ll see that the function is primarily composed of lines like <\/span><\/p>\n<pre>filewindow.document.writeln(\"...\")<\/pre>\n<p><span style=\"font-weight: 400;\">Each line just writes a line to the window. Although it is a very long script, its structure is basically very simple. Recall that our client wants the value of &#8220;<\/span><span style=\"font-weight: 400;\">AVERAGE ANNUAL SEDIMENT LEAVING PROFILE&#8221; in units of tons per hectare. Search for the section in the JavaScript function. Note that the units are designated &#8220;t\/ha&#8221;. Search the page source for &#8220;t\/ha&#8221;. You&#8217;ll notice that it is the only occurrence in the whole page source. We got lucky again. This will make coding easy, as you will see. <\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Coding the Groovy Script<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Now it is time to code the script that will request results from the Disturbed WEPP website and parse the response. We know everything we need:<\/span><\/p>\n<ul>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\"><strong>Domain:<\/strong> https:\/\/forest.moscowfsl.wsu.edu<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\"><strong>Path:<\/strong> \/cgi-bin\/fswepp\/wd\/wd.pl<\/span><\/li>\n<\/ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\"><strong>Body<\/strong> for the POST from the Form Data<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\"><strong>GPath<\/strong> to our table: html.BODY.FONT.BLOCKQUOTE.CENTER.P.TABLE<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">How to find the tons per hectare value: searching on &#8220;<strong>t\/ha<\/strong>&#8221;<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The rest is the hard and tedious work of coding. 
<\/span><\/p>\n<pre>\/**\r\n * Created by Robert Pastel on 11\/12\/2016.\r\n *\/\r\n\/**\r\n * Grab gets the http-builder jar from the maven site\r\n *\r\n * If you get the error\r\n *\r\n *       error groovyc cannot @Grab without Ivy\r\n *\r\n * then\r\n *      1. Download the binary for Ivy at\r\n *            http:\/\/ant.apache.org\/ivy\/\r\n *      2. Unzip and extract the jar\r\n *      3. Put it in a nearby directory\r\n *      4. Add it as a module to the project by\r\n *             i. File -&gt; Project structure -&gt; Modules -&gt; Dependencies\r\n *             ii. Add by clicking on the \"+\" on the right, select JARs\r\n *             iii. Navigate to where you put the Ivy jar\r\n *      Reference\r\n *        https:\/\/intellij-support.jetbrains.com\/hc\/en-us\/community\/posts\/206913575-Installing-Ivy-plugin-\r\n *\/\r\n@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')\r\n\r\nimport groovyx.net.http.HTTPBuilder\r\nimport static groovyx.net.http.ContentType.*\r\nimport static groovyx.net.http.Method.*\r\n\r\n\/**\r\n * Make the post request\r\n *\r\n * Note that the HTTPBuilder should reference only the site\r\n * and the path specifies the path to the script.\r\n * This avoids 403 (Forbidden)\r\n *\r\n * Documentation for HTTPBuilder is at\r\n * https:\/\/github.com\/jgritman\/httpbuilder\/wiki\r\n *\/\r\ndef http = new HTTPBuilder( 'https:\/\/forest.moscowfsl.wsu.edu' )\r\n\/\/ You can find the post variables and values by submitting a request from the website,\r\n\/\/ then inspect -&gt; Networks -&gt; click wd.pl -&gt; Form Data.\r\n\/\/ You can even copy and paste from the inspector to your script.\r\ndef postBody = [\r\n        me:'' ,\r\n        units:'ft',\r\n        description:'' ,\r\n        climyears:'10',\r\n        Climate:'..\/climates\/al010831',\r\n        achtung:'WEPP run',\r\n        SoilType:'clay',\r\n        UpSlopeType:'OldForest',\r\n        ofe1_top_slope:'0',\r\n        
ofe1_length:'50',\r\n        ofe1_pcover:'100',\r\n        ofe1_rock:'20',\r\n        ofe1_mid_slope:'30',\r\n        LowSlopeType:'OldForest',\r\n        ofe2_top_slope:'30',\r\n        ofe2_length:'50',\r\n        ofe2_pcover:'100',\r\n        ofe2_rock:'20',\r\n        ofe2_bot_slope:'5',\r\n        climate_name:'BIRMINGHAM WB AP AL',\r\n        Units:'m',\r\n        actionw:'Run WEPP'\r\n]\r\n\/\/ Make the post request and get back the GPath for the html.\r\n\/\/ Note the path to the script. It is necessary to split up the URI this way.\r\ndef html = http.post(path: '\/cgi-bin\/fswepp\/wd\/wd.pl', body: postBody)\r\n\r\n\r\n\r\n\/\/ Now get the table of interest results using GPATH\r\n\/\/ See http:\/\/groovy-lang.org\/processing-xml.html#_gpath\r\ndef erodeTable = html.BODY.FONT.BLOCKQUOTE.CENTER.P.TABLE\r\n\/\/ Note that sometimes the GPath hierarchy is broken,\r\n\/\/ but you can always make the depth first searches\r\n\/**\r\n * Map of Maps\r\n *\r\n * We want a map like this:\r\n * analysis[period][variable] -&gt; value\r\n *\r\n * We also want to make a Map from variables to units\r\n *\r\n * Note that this should work for any value of \"years to simulate\".\r\n *\/\r\n\/\/ create the analysis Map\r\ndef analysis = [:]\r\n\r\n\/\/ Gather the keys and make the units map\r\ndef i = 0 \/\/ counts table rows, so we can do something special for the first table row\r\ndef periods = []\r\ndef variables = []\r\ndef units = [:]\r\n\/\/ Note that erodeTable is a GPathResult, so we can search it.\r\nerodeTable.\"**\".findAll{it.name() == \"TR\"}.each{tr -&gt;\r\n    \/\/ The first table row lists the variables with their units\r\n    if ( i == 0){\r\n        for(j = 0; j &lt; tr.TH.size(); j++){\r\n            \/\/ We want to skip the first header\r\n            if(j &gt; 0){\r\n                String variable_unit = tr.TH[j]\r\n                \/\/ Some regular expressions to extract variable names and units\r\n                \/\/ See 
http:\/\/groovy-lang.org\/operators.html#_regular_expression_operators\r\n                \/\/ and http:\/\/www.regular-expressions.info\/\r\n                \/\/ Variable names will have only letters and units are inside parentheses\r\n                def m = variable_unit =~ \/([A-Za-z]+)\\((.+)\\)\/\r\n                if (m){\r\n                    \/\/ Note that the capture groups are Strings\r\n                    variables[j-1] = m[0][1]\r\n                    units.put(variables[j-1], m[0][2])\r\n                }\r\n            }\r\n        }\r\n    }\r\n    \/\/ Table rows greater than 0 contain the periods\r\n    else if(i &gt; 0){\r\n        \/\/ We will want to use the table header as a key to a map,\r\n        \/\/ so we MUST use the toString method so that the hashing works properly.\r\n        \/\/ Java hashes Objects differently from Strings\r\n        periods[i-1] = tr.TH.toString()\r\n    }\r\n    i++\r\n}\r\n\r\n\/\/println periods\r\n\/\/println variables\r\n\/\/println units\r\n\r\n\/\/ Now construct the analysis table from the bottom up\r\ni = 0 \/\/ for tracking the periods and table row\r\nerodeTable.\"**\".findAll{it.name() == \"TR\"}.each{ tr -&gt;\r\n    \/\/ construct the period_variable map\r\n    if (i &gt; 0) { \/\/ skip the first row because it is a header row\r\n        def j = 0; \/\/ for tracking the variables\r\n        def period_variable = [:]\r\n        tr.\"**\".findAll { it.name() == \"TD\" }.each { td -&gt;\r\n            period_variable.put(variables[j], td)\r\n            j++\r\n        }\r\n        analysis.put(periods[i-1], period_variable) \/\/ use i-1 because we skip the first table row\r\n    }\r\n    i++\r\n}\r\n\/\/ now we can access the analysis table like this\r\nString period = \"Average\"\r\nString variable = \"Runoff\"\r\nprintln \"The ${period} ${variable} is ${analysis[period][variable]} ${units[variable]}\"\r\n\r\n\/**\r\n *  Find the average annual sediment leaving profile in units t\/ha\r\n *  It is in 
function showextendedoutput(), which is invoked by the \"WEPP results\" link\r\n *\r\n *  Note that there is only one occurrence of \"t\/ha\" in the entire response.\r\n *\r\n *\/\r\ndef scriptNode = html.HEAD.SCRIPT\r\n\/\/ Extract the SCRIPT\r\nString script = scriptNode.toString() \/\/ Make sure it is a String\r\n\/\/ Create the variables to save captures from matches\r\ndef leavingLine\r\ndef leavingValues = []\r\ndef leavingUnits = [\"t\/ha\", \"ha\"]\r\n\r\n\/\/ Use regular expressions to get the line. Note that quotes delimit the line\r\n\/\/def m = script =~ \/\".+t\\\/ha.+\"\/  \/\/ this works, but the slash must be escaped\r\n\/\/ We can also use Groovy strings, and then the \/ is escaped for us\r\ndef m = script =~ \/\".+${leavingUnits[0]}.+\"\/\r\nif(m){\r\n    \/\/ Found it, so clean up the leavingLine. Note the use of regular expressions in the replaceAll\r\n    leavingLine = m[0]\r\n    leavingLine = leavingLine.replaceAll(\/ +\/, ' ') \/\/ remove extra spaces\r\n    leavingLine = leavingLine.replaceAll(\/\" \/,'') \/\/ remove leading quote\r\n    leavingLine = leavingLine.replaceAll(\/\"\/,'') \/\/ remove remaining quotes\r\n\r\n    \/\/ Now extract the values\r\n    m = leavingLine =~ \/([\\d\\.]+) ${leavingUnits[0]} \\([\\w\\s]+([\\d\\.]+) ${leavingUnits[1]}\/\r\n    if(m){\r\n        leavingValues[0] = m[0][1]\r\n        leavingValues[1] = m[0][2]\r\n    }\r\n}\r\n\/\/ Now we can use our regular expression captures like this\r\n\/\/println leavingLine\r\nprintln \"The average annual sediment leaving is ${leavingValues[0]} ${leavingUnits[0]}\"<\/pre>\n<p>You can also download the code from the resource\/mashup\/ directory.<\/p>\n<p><span style=\"font-weight: 400;\">The first part of the code, creating the HTTPBuilder object and making the POST request, should be familiar. The only trick is to be sure to separate the domain from the path. 
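<\/span><\/p>
<p><span style=\"font-weight: 400;\">As a sanity check, the split between the domain and the path can be reproduced on the JVM with java.net.URI. The following is a minimal Java sketch (Groovy runs on the JVM, so the same classes are available there):<\/span><\/p>

```java
import java.net.URI;

public class UrlSplit {
    // The piece that goes to the HTTPBuilder constructor: scheme + host.
    public static String base(String url) {
        URI u = URI.create(url);
        return u.getScheme() + "://" + u.getHost();
    }

    // The piece that goes in the post() or get() call: the path.
    public static String path(String url) {
        return URI.create(url).getPath();
    }

    public static void main(String[] args) {
        String full = "https://forest.moscowfsl.wsu.edu/cgi-bin/fswepp/wd/wd.pl";
        System.out.println(base(full)); // https://forest.moscowfsl.wsu.edu
        System.out.println(path(full)); // /cgi-bin/fswepp/wd/wd.pl
    }
}
```

<p><span style=\"font-weight: 400;\">Passing base(full) to the constructor and path(full) to the post method reproduces the split the script uses.<\/span><\/p>
<p><span style=\"font-weight: 400;\">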
If you put the full URL in the argument to the HTTPBuilder constructor, i.e.<\/span><\/p>\n<p><a href=\"https:\/\/forest.moscowfsl.wsu.edu\/cgi-bin\/fswepp\/wd\/wd.pl\"><span style=\"font-weight: 400;\">https:\/\/forest.moscowfsl.wsu.edu\/cgi-bin\/fswepp\/wd\/wd.pl<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">You will get a &#8220;forbidden&#8221; response. I believe that the cgi-bin\/ directory is protected so that only requests from the https:\/\/forest.moscowfsl.wsu.edu domain have access. <\/span><\/p>\n<h3><span style=\"font-weight: 400;\">Grabbing and Analyzing the Table<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Then the code grabs the table of interest, which is the erodeTable:<\/span><\/p>\n<pre>def erodeTable = html.BODY.FONT.BLOCKQUOTE.CENTER.P.TABLE<\/pre>\n<p><span style=\"font-weight: 400;\">Now the code constructs a map of the table values so that they can be passed to our own view. The map is the &#8220;analysis&#8221; object. It will be a map of maps. <\/span><\/p>\n<pre>analysis[period][variable] -&gt; value<\/pre>\n<p><span style=\"font-weight: 400;\">For each period (row in the table) there is a map of values with the keys: Precipitation, Runoff, Erosion, Sediment. If you played with the website, in particular tried different years to simulate, you will have discovered that the number of rows and the years for the Return period differ. We&#8217;ll need to parse these years, called periods in the code, and use them as keys to the maps that represent the rows in the table. To do all this, we need to use Regular Expressions. Hopefully you paid attention in your formal methods course.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">Brief Introduction to Regular Expressions<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Groovy has regular expressions built into the language as operators. 
Study the syntax at <\/span><\/p>\n<p><a href=\"http:\/\/groovy-lang.org\/operators.html#_regular_expression_operators\"><span style=\"font-weight: 400;\">http:\/\/groovy-lang.org\/operators.html#_regular_expression_operators<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">I use the find operator <\/span><\/p>\n<p><a href=\"http:\/\/groovy-lang.org\/operators.html#_find_operator\"><span style=\"font-weight: 400;\">http:\/\/groovy-lang.org\/operators.html#_find_operator<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">There are many tutorials on the web for regular expressions, but note that regular expressions come in different flavors depending on the programming language. My favorite reference is<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"http:\/\/www.regular-expressions.info\/\">http:\/\/www.regular-expressions.info\/<\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\">Although it is not the best tutorial, it is the most complete reference I have found. If you go to the reference manual<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"http:\/\/www.regular-expressions.info\/reference.html\">http:\/\/www.regular-expressions.info\/reference.html<\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\">You&#8217;ll notice that you can select the language of your choice to display the tables. We are interested in Java; that is the flavor Groovy uses. You might also need the JavaScript tables. 
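<\/span><\/p>
<p><span style=\"font-weight: 400;\">Because Groovy&#8217;s regex operators delegate to java.util.regex, the Java tables are the ones that apply. The variable\/unit pattern used later in the code can be exercised in plain Java; the header string &#8220;Runoff(mm)&#8221; below is a made-up sample, not taken from a live response:<\/span><\/p>

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariableUnit {
    // The same pattern as the Groovy script: a name made of letters,
    // followed by the units inside literal (escaped) parentheses.
    private static final Pattern P = Pattern.compile("([A-Za-z]+)\\((.+)\\)");

    // Returns {name, units}, or null when the header does not match.
    public static String[] parse(String header) {
        Matcher m = P.matcher(header);
        if (m.find()) {
            return new String[] { m.group(1), m.group(2) };
        }
        return null;
    }

    public static void main(String[] args) {
        String[] r = parse("Runoff(mm)"); // sample header only
        System.out.println(r[0] + " / " + r[1]); // Runoff / mm
    }
}
```

<p><span style=\"font-weight: 400;\">Groovy&#8217;s m[0][1] and m[0][2] correspond to group(1) and group(2) here.<\/span><\/p>
<p><span style=\"font-weight: 400;\">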
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I assume you know the basics of making patterns, but go back to the quick tutorials<\/span><\/p>\n<p><a href=\"http:\/\/www.regular-expressions.info\/tutorial.html\"><span style=\"font-weight: 400;\">http:\/\/www.regular-expressions.info\/tutorial.html<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Study the &#8220;Special Characters&#8221;, &#8220;Character Classes&#8221;, &#8220;Repetition&#8221;, and &#8220;Grouping and Capturing&#8221; tutorials:<\/span><\/p>\n<ul>\n<li><a href=\"http:\/\/www.regular-expressions.info\/characters.html\"><span style=\"font-weight: 400;\">http:\/\/www.regular-expressions.info\/characters.html<\/span><\/a><\/li>\n<li><a href=\"http:\/\/www.regular-expressions.info\/charclass.html\"><span style=\"font-weight: 400;\">http:\/\/www.regular-expressions.info\/charclass.html<\/span><\/a><\/li>\n<li><a href=\"http:\/\/www.regular-expressions.info\/repeat.html\"><span style=\"font-weight: 400;\">http:\/\/www.regular-expressions.info\/repeat.html<\/span><\/a><\/li>\n<li><a href=\"http:\/\/www.regular-expressions.info\/brackets.html\"><span style=\"font-weight: 400;\">http:\/\/www.regular-expressions.info\/brackets.html<\/span><\/a><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">I use these aspects of regular expressions extensively. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the code we first parse the keys for the map we are going to make from the table. Note that the keys are in the first table row and first column, which use the TH tag. We use a double loop to parse the keys for the map, <strong>periods<\/strong> and <strong>variables<\/strong>, but the outer loop is not a for-loop. The looping is done by the &#8220;each&#8221; method on the results of the findAll depth-first search. <\/span><\/p>\n<pre>erodeTable.\"**\".findAll{it.name() == \"TR\"}.each{tr -&gt; ... 
}<\/pre>\n<p><span style=\"font-weight: 400;\">We control the index variable, i, by hand so that we can identify which row the code is parsing. The first row, i == 0, has the variable keys and their units. We need the for-loop to go through this row. Each variable key is made of alphabetic characters, lower and upper case, and the units are anything inside parentheses. We want to capture them both. The pattern is <\/span><\/p>\n<pre>def m = variable_unit =~ \/([A-Za-z]+)\\((.+)\\)\/<\/pre>\n<p><span style=\"font-weight: 400;\">Capture groups are designated by parentheses in the pattern, so to match literal parentheses you have to escape them. The first capture group is ([A-Za-z]+) and the second capture group is inside the escaped parentheses, (.+). The variable &#8220;m&#8221; contains the matches, which form a two-dimensional array, i.e. a matrix. The elements m[0] and m[0][0] contain the entire match. The element m[0][1] contains the first capture group, the variable name for the key, and m[0][2] is the second capture group, the units for the variable. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Getting the years for the period key is easier. It is the content of the TH tag. Note that because the key will be hashed by Java\/Groovy, we must be assured that it is a String. What is returned from the GPath is not a String but an object, so we use the toString method to convert the object to a String. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Now that we have the keys for our map, we can construct the map. This does not require regular expressions. We use two interleaved depth-first searches: first on the TR tags to find the rows, then on the TD tags to find the data cells. The inner depth-first search makes the period_variable map, and the outer depth-first search puts the period_variable map into the analysis map. 
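<\/span><\/p>
<p><span style=\"font-weight: 400;\">The analysis[period][variable] lookup is just nested map access. A small Java sketch with invented values shows the shape of the map of maps (Groovy&#8217;s [:] literal corresponds to a HashMap):<\/span><\/p>

```java
import java.util.HashMap;
import java.util.Map;

public class AnalysisMap {
    // Build a tiny map of maps shaped like the script's analysis object:
    // analysis[period][variable] -> value. The values here are invented.
    public static Map<String, Map<String, String>> build() {
        Map<String, String> periodVariable = new HashMap<>();
        periodVariable.put("Runoff", "12.3");   // sample value only
        periodVariable.put("Erosion", "0.45");  // sample value only

        Map<String, Map<String, String>> analysis = new HashMap<>();
        analysis.put("Average", periodVariable); // period key is a plain String
        return analysis;
    }

    public static void main(String[] args) {
        // The same two-key lookup as analysis[period][variable] in Groovy.
        System.out.println(build().get("Average").get("Runoff")); // 12.3
    }
}
```

<p><span style=\"font-weight: 400;\">Because the period key is a plain String, the lookup hashes correctly; that is why the script calls toString on the GPath result.<\/span><\/p>
<p><span style=\"font-weight: 400;\">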
<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">Grabbing the <\/span><span style=\"font-weight: 400;\">ANNUAL SEDIMENT LEAVING PROFILE<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Because the tons-per-hectare unit, &#8220;t\/ha&#8221;, occurs only once in the page source, it is fairly easy to get this value, but it still takes two matches to get the values. First we grab only the SCRIPT part of the GPath, and then match on the line that contains &#8220;t\/ha&#8221;.<\/span><\/p>\n<pre>def m = script =~ \/\".+${leavingUnits[0]}.+\"\/<\/pre>\n<p><span style=\"font-weight: 400;\">This matching pattern uses Groovy string notation, ${&#8230;}. The value of the array element leavingUnits[0] is &#8220;t\/ha&#8221;. So the match is anything between quotes, &#8220;&#8230;&#8221;, that has the character sequence &#8220;t\/ha&#8221;. Now that we have the line with the &#8220;t\/ha&#8221;, we clean it up by removing unnecessary spaces and the quotes. Then we use capture groups to grab the values we want<\/span><\/p>\n<pre>m = leavingLine =~ \/([\\d\\.]+) ${leavingUnits[0]} \\([\\w\\s]+([\\d\\.]+) ${leavingUnits[1]}\/<\/pre>\n<p><span style=\"font-weight: 400;\">The values contain digits and a decimal point. The period representing the decimal point must be escaped because it is a regular expression special character. Again we use Groovy string notation to make sure that the capture groups are in the correct location. <\/span><\/p>\n<h1><span style=\"font-weight: 400;\">Mashing Up a New Service<\/span><\/h1>\n<p><span style=\"font-weight: 400;\">Our client will need to make an API for the geodatabase, and that will be one of our new services. Because our website will just be retrieving entries from the database, our HTTPBuilder object will probably use a GET method. It might need a query string for the latitude and longitude. 
Study the examples on the GET Examples page.<\/span><\/p>\n<p><a href=\"https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/GET-Examples\"><span style=\"font-weight: 400;\">https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/GET-Examples<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">The query string is specified in the &#8220;query&#8221; parameter of the get method. It is a map of key-value pairs.<\/span><\/p>\n<pre>query : [q:'Groovy']<\/pre>\n<p><span style=\"font-weight: 400;\">In the example above, &#8220;q&#8221; is the key and 'Groovy' is the value. The keys must be recognized by the service. Here is another example, with two query keys: <\/span><\/p>\n<pre>uri.query = [ v:'1.0', q: 'Calvin and Hobbes' ]<\/pre>\n<p><span style=\"font-weight: 400;\">The query parameter will be converted into<\/span><\/p>\n<pre>?v=1.0&amp;q=Calvin+and+Hobbes<\/pre>\n<p><span style=\"font-weight: 400;\">with the values URL-encoded, and appended to the URL.<\/span><\/p>\n<h2>Client Point Query API<\/h2>\n<p>Our client has provided a simple API with one query, point_query. 
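<\/p>
<p>The URL encoding of query values can be reproduced with java.net.URLEncoder. Note that URLEncoder uses the form-encoding convention, turning spaces into &#8220;+&#8221;; some clients encode spaces as %20 instead, but the idea is the same:</p>

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QueryString {
    // Encode one key=value pair the way it appears in a query string.
    public static String pair(String key, String value) {
        try {
            return key + "=" + URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        // The two keys from the HTTPBuilder wiki example.
        System.out.println("?" + pair("v", "1.0") + "&" + pair("q", "Calvin and Hobbes"));
        // ?v=1.0&q=Calvin+and+Hobbes
    }
}
```

<p>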
An example using the API:<\/p>\n<p><a href=\"http:\/\/geodjango.mtri.org\/baer\/hci\/point_query?lat=37&amp;lon=-105\">http:\/\/geodjango.mtri.org\/baer\/hci\/point_query?lat=37&amp;lon=-105<\/a><\/p>\n<p>If you point your browser at the URL or click on the link, the return will be:<\/p>\n<pre>{\"slope\": 47.5440521240234, \"soil_rock_percent\": 5.0, \"soil_texture\": \"loam\"}<\/pre>\n<p>This is a JSON object with three properties:<\/p>\n<ul>\n<li>slope<\/li>\n<li>soil rock percent<\/li>\n<li>soil texture<\/li>\n<\/ul>\n<h3>Coding the Groovy Script<\/h3>\n<pre>\/**\r\n * Created by Robert Pastel on 1\/19\/2017.\r\n *\/\r\n\r\n@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')\r\n\r\n\/\/ import of HttpBuilder related stuff\r\nimport groovyx.net.http.HTTPBuilder\r\n\r\ndef http = new HTTPBuilder(\"http:\/\/geodjango.mtri.org\")\r\n\r\ndef json = http.get( path : '\/baer\/hci\/point_query', query : [lat:37, lon:-105] )\r\n\r\nprintln \"json = \" + json\r\nprintln json.getClass() \/\/ It is a JSON Map\r\nprintln \"keySet = \" + json.keySet() \/\/ with these strings\r\n\r\n\/\/ We can access values like this\r\nprintln \"slope = \" + json.slope\r\nprintln \"soil_rock_percent = \" + json.soil_rock_percent\r\nprintln \"soil_texture = \" + json.soil_texture<\/pre>\n<p>We create the HTTPBuilder object, http, with the domain of the service and then use HTTPBuilder&#8217;s convenient get method to specify the path to &#8220;point_query&#8221; and the query. The JSON, json, is returned.<\/p>\n<p>The &#8220;json&#8221; object in the code does not print like a JSON. 
That is because it has already been converted to a Groovy Map by HTTPBuilder.<\/p>\n<p><a href=\"http:\/\/groovy-lang.org\/groovy-dev-kit.html#Collections-Maps\">http:\/\/groovy-lang.org\/groovy-dev-kit.html#Collections-Maps<\/a><\/p>\n<p>We can access the values for the keys using the &#8220;dot&#8221; notation.<\/p>\n<h2>Restricting Access<\/h2>\n<p><span style=\"font-weight: 400;\">Some services restrict access so that not just anyone can use the service. This is done by providing an API key or token to the developers. The API key is added to the URL, and unless the key matches one in the service&#8217;s list, the service will deny the request. A token is like an API key but is placed in the header of the request. For examples using API keys, see the Weather Underground API:<\/span><\/p>\n<p><a href=\"https:\/\/www.wunderground.com\/weather\/api\/d\/docs\"><span style=\"font-weight: 400;\">https:\/\/www.wunderground.com\/weather\/api\/d\/docs<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">For examples using a token, see the NOAA API:<\/span><\/p>\n<p><a href=\"https:\/\/www.ncdc.noaa.gov\/cdo-web\/webservices\/v2\"><span style=\"font-weight: 400;\">https:\/\/www.ncdc.noaa.gov\/cdo-web\/webservices\/v2<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">To add a header using HTTPBuilder, you can use the full version of the request method. 
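<\/span><\/p>
<p><span style=\"font-weight: 400;\">The difference between the two schemes can be sketched without any network calls. In the Java sketch below, the host, path, key, token, and header name are all placeholders; each real service documents its own:<\/span><\/p>

```java
import java.util.HashMap;
import java.util.Map;

public class AuthSketch {
    // API-key style: the key rides along as a query parameter in the URL.
    // The parameter name "key" is a placeholder; services name it themselves.
    public static String keyedUrl(String base, String path, String apiKey) {
        return base + path + "?key=" + apiKey;
    }

    // Token style: the token rides in a request header instead of the URL.
    public static Map<String, String> tokenHeaders(String token) {
        Map<String, String> headers = new HashMap<>();
        headers.put("token", token); // header name varies by service
        return headers;
    }

    public static void main(String[] args) {
        // Placeholder values; a real service issues the key or token to developers.
        System.out.println(keyedUrl("https://api.example.com", "/forecast", "MY_KEY"));
        System.out.println(tokenHeaders("MY_TOKEN"));
    }
}
```

<p><span style=\"font-weight: 400;\">In HTTPBuilder, the keyed URL corresponds to an extra entry in the query map, while the headers map corresponds to the headers set in the full request method.<\/span><\/p>
<p><span style=\"font-weight: 400;\">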
See the last example on the GET Examples page.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> <a href=\"https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/GET-Examples\">https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/GET-Examples<\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\">It might also be possible to use the convenient get method. Also look at RESTClient for more ways to make requests.<\/span><\/p>\n<p><a href=\"https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/RESTClient\"><span style=\"font-weight: 400;\">https:\/\/github.com\/jgritman\/httpbuilder\/wiki\/RESTClient<\/span><\/a><\/p>\n<h1>Building HTTPBuilder in a Grails App<\/h1>\n<p>&#8220;@Grab&#8221; is a Grape annotation which adds dependencies at run time.<\/p>\n<p><a href=\"http:\/\/docs.groovy-lang.org\/latest\/html\/documentation\/grape.html\">http:\/\/docs.groovy-lang.org\/latest\/html\/documentation\/grape.html<\/a><\/p>\n<p>This works well for quickly writing Groovy scripts without a build script, but it will not work in a Grails app deployed on the Tomcat server. Even adding the Ivy dependencies in build.gradle will result in a 500 Internal Server Error.<\/p>\n<p>We need to add the HTTPBuilder dependencies directly to the project&#8217;s build. There are two ways to do this. We can download HTTPBuilder from the Maven Central repository into the project&#8217;s local repository, or we can configure build.gradle to access the Maven Central repository directly. Both techniques are outlined below.<\/p>\n<h2>Adding HTTPBuilder to the Local Maven Repository<\/h2>\n<p>We can use IntelliJ IDEA to download HTTPBuilder to the local Maven repository and then associate it with the project modules. 
This takes multiple steps:<\/p>\n<ol>\n<li>Select the &#8220;File&#8221; menu and then select &#8220;Project Structure&#8230;&#8221; to open the Project Structure window.<\/li>\n<li>In the Project Structure middle pane, click the green &#8220;+&#8221; and select &#8220;From Maven&#8230;&#8221; to open the Download Library From Maven Repository window.<\/li>\n<li>Search for HTTPBuilder in the Maven repository by typing &#8220;org.codehaus.groovy.modules.http-builder&#8221; in the text box and clicking the search icon to the right of the text box.<\/li>\n<li>Wait while IntelliJ IDEA searches for all the versions of HTTPBuilder.<\/li>\n<li>After the search is complete, select the highest version, currently &#8220;org.codehaus.groovy.modules.http-builder:http-builder:0.7.1&#8221;, and click &#8220;OK&#8221;; the Choose Modules window opens.<\/li>\n<li>In the Choose Modules window, select all modules of the project and click OK.<\/li>\n<\/ol>\n<p>HTTPBuilder is now downloaded into your local repository and associated with the project&#8217;s modules. Now add the compile dependency to the dependencies section in build.gradle.<\/p>\n<pre>dependencies {\r\n    compile \"org.codehaus.groovy.modules.http-builder:http-builder:0.7.1\"\r\n    ...\r\n}<\/pre>\n<p>You can now import groovyx.net.http.HTTPBuilder in your controller classes and use HTTPBuilder.<\/p>\n<h2>Configuring build.gradle to Access Maven Central<\/h2>\n<p>Configuring the build to access Maven Central directly only requires adding the &#8220;mavenCentral()&#8221; function to the repositories section in build.gradle.<\/p>\n<pre>repositories {\r\n    mavenLocal()\r\n    maven { url \"https:\/\/repo.grails.org\/grails\/core\" }\r\n    mavenCentral()\r\n}<\/pre>\n<p>You do not want to add it to the repositories section in the &#8220;buildscript&#8221; section. 
It is also best to add &#8220;mavenCentral()&#8221; at the bottom of the list because the list determines the order in which Gradle searches the repositories. We want Gradle to search Maven Central last.<\/p>\n<p>Add the compile dependency to the dependencies section in build.gradle.<\/p>\n<pre>dependencies {\r\n    compile \"org.codehaus.groovy.modules.http-builder:http-builder:0.7.1\"\r\n    ...\r\n}<\/pre>\n<p>You can now import groovyx.net.http.HTTPBuilder in your controller classes and use HTTPBuilder.<\/p>\n<p>It may appear that configuring build.gradle to access Maven Central is easier, but you have to know the jar version. Also, the process of using IntelliJ IDEA does not take that long.<\/p>\n<p>If you want to search Maven Central for the jar without using IntelliJ IDEA, you can use the Maven Search website.<\/p>\n<p><a href=\"https:\/\/search.maven.org\/\">https:\/\/search.maven.org\/<\/a><\/p>\n<p>From the Maven Search website, you can download the pom.xml, jar, or source.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Mashup and Use Mashup or mashing-up is when a website makes requests to multiple services to provide content requested by a user. It is a relatively new technique that enables a website to leverage the services from other sites and data providers. Mashing-up can be used to customize a service, e.g. simplifying the interaction. 
[&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"parent":112,"menu_order":19,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1806","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/pages\/1806","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/comments?post=1806"}],"version-history":[{"count":7,"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/pages\/1806\/revisions"}],"predecessor-version":[{"id":1888,"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/pages\/1806\/revisions\/1888"}],"up":[{"embeddable":true,"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/pages\/112"}],"wp:attachment":[{"href":"http:\/\/cs4760.csl.mtu.edu\/2017\/wp-json\/wp\/v2\/media?parent=1806"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}