{"id":276,"date":"2016-05-07T21:44:58","date_gmt":"2016-05-07T21:44:58","guid":{"rendered":"http:\/\/blogs.softwareclue.com\/?p=276"},"modified":"2016-05-07T21:44:58","modified_gmt":"2016-05-07T21:44:58","slug":"practical-guide-to-implementing-neural-networks-in-python-using-theano","status":"publish","type":"post","link":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano","title":{"rendered":"Practical Guide to implementing Neural Networks in Python (using Theano)"},"content":{"rendered":"<p>By <span class=\"entry-author\"> Aarshay Jain<\/span><\/p>\n<p>Source: <a href=\"http:\/\/www.analyticsvidhya.com\/blog\/2016\/04\/neural-networks-python-theano\/\" target=\"_blank\">http:\/\/www.analyticsvidhya.com\/blog\/2016\/04\/neural-networks-python-theano\/<\/a><\/p>\n<h2>Introduction<\/h2>\n<p>In my last article, I discussed the <a href=\"http:\/\/www.analyticsvidhya.com\/blog\/2016\/03\/introduction-deep-learning-fundamentals-neural-networks\/\" target=\"_blank\">fundamentals of deep learning<\/a>, where I explained the basic working of an artificial neural network. If you\u2019ve been following this series, today we\u2019ll become familiar with the practical process of implementing a neural network in Python (using the Theano package).<\/p>\n<p>Various other packages, such as Caffe, Torch and TensorFlow, can also do this job. But Theano is no less capable and executes all these tasks satisfactorily, and it has multiple benefits which further enhance the coding experience in Python.<\/p>\n<p>In this article, I\u2019ll provide a comprehensive practical guide to implementing Neural Networks using Theano. If you are here just for the Python code, feel free to skip sections and learn at your own pace. 
And, if you are new to Theano, I suggest you follow the article sequentially to gain a complete understanding.<\/p>\n<p><em>Note:<\/em><\/p>\n<ol>\n<li><em>This article is best suited for users with knowledge of neural networks &amp; deep learning.<\/em><\/li>\n<li><em>If you don\u2019t know python, <a href=\"http:\/\/www.analyticsvidhya.com\/blog\/2016\/01\/complete-tutorial-learn-data-science-python-scratch-2\/\" target=\"_blank\">start here<\/a>.<\/em><\/li>\n<li><em>If you don\u2019t know deep learning, <a href=\"http:\/\/www.analyticsvidhya.com\/blog\/2016\/03\/introduction-deep-learning-fundamentals-neural-networks\/\" target=\"_blank\">start here<\/a>.<\/em><\/li>\n<\/ol>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-24683 aligncenter\" src=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=500%2C280\" sizes=\"(max-width: 500px) 100vw, 500px\" srcset=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?w=500 500w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=300%2C168 300w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=257%2C144 257w\" alt=\"Practical guide to implement neural network in python using theano\" width=\"500\" height=\"280\" \/><\/p>\n<p>&nbsp;<\/p>\n<h2>Table of Contents<\/h2>\n<ol>\n<li>Theano Overview<\/li>\n<li>Implementing Simple Expressions<\/li>\n<li>Theano Variable Types<\/li>\n<li>Theano Functions<\/li>\n<li>Modeling a Single Neuron<\/li>\n<li>Modeling a Two-Layer Network<\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<h2>1. 
Theano Overview<\/h2>\n<p>In short, we can define Theano as:<\/p>\n<ul>\n<li>A programming language which runs on top of Python, with its own data structures that are tightly integrated with numpy<\/li>\n<li>A linear algebra compiler which generates optimized C code at the backend<\/li>\n<li>A python package allowing faster implementation of mathematical expressions<\/li>\n<\/ul>\n<p>Theano was developed at the University of Montreal in 2008. It is used for defining and evaluating mathematical expressions in general.<\/p>\n<p>Theano has several features which optimize the processing time of expressions. For instance, it modifies the symbolic expressions we define before converting them to C code:<\/p>\n<ul>\n<li>It makes expressions faster, for instance it will change { (x+y) + (x+y) } to { 2*(x+y) }<\/li>\n<li>It makes expressions more numerically stable, for instance it will change { exp(a) \/ exp(a).sum(axis=1) } to { softmax(a) }<\/li>\n<\/ul>\n<p>Below are some powerful advantages of using Theano:<\/p>\n<ol>\n<li>It generates C code for different mathematical expressions.<\/li>\n<li>The implementations are much faster than Python's default implementations.<\/li>\n<li>Owing to these fast implementations, it works well for high-dimensional problems.<\/li>\n<li>It allows GPU execution, which is blazingly fast, especially for problems like deep learning.<\/li>\n<\/ol>\n<p>Let's now focus on Theano (with examples) and try to understand it as a programming language.<\/p>\n<p>&nbsp;<\/p>\n<h2>2. Implementing Simple Expressions<\/h2>\n<p>Let's start by implementing a simple mathematical expression, say a multiplication, in Theano and see how the system works. 
In later sections, we will take a deep dive into individual components. The general structure of a Theano code works in 3 steps:<\/p>\n<ol>\n<li>Define variables\/objects<\/li>\n<li>Define a mathematical expression in the form of a function<\/li>\n<li>Evaluate the expression by passing values<\/li>\n<\/ol>\n<p>Let's look at the following code, which simply multiplies 2 numbers:<\/p>\n<h4>Step 0: Import libraries<\/h4>\n<pre>import numpy as np\r\nimport theano.tensor as T\r\nfrom theano import function<\/pre>\n<p>Here, we have imported 2 key components of theano: tensor and function.<\/p>\n<h4>Step 1: Define variables<\/h4>\n<pre>a = T.dscalar('a')\r\nb = T.dscalar('b')<\/pre>\n<p>Here 2 variables are defined. Note that we have used the Theano tensor object type here. The arguments passed to the dscalar function are just names for the tensors, which are useful while debugging; the code will work even without them.<\/p>\n<h4>Step 2: Define the expression<\/h4>\n<pre>c = a*b\r\nf = function([a,b],c)<\/pre>\n<p>Here we have defined a function f which has 2 arguments:<\/p>\n<ol>\n<li>Inputs [a,b]: these are the inputs to the system<\/li>\n<li>Output c: this has been previously defined<\/li>\n<\/ol>\n<h4>Step 3: Evaluate the expression<\/h4>\n<pre>f(1.5,3)<\/pre>\n<p><a href=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1.-output-1.png\" rel=\"attachment wp-att-24610\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-24610 \" src=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1.-output-1.png?resize=138%2C29\" alt=\"1. output 1\" width=\"138\" height=\"29\" \/><\/a><\/p>\n<p>Now we simply call the function with the 2 inputs and get their product as the output. In short, we saw how we can define mathematical expressions in Theano and evaluate them. 
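The define-then-evaluate pattern above can be mimicked in plain Python with closures. This is a toy sketch for intuition only; the names below merely imitate Theano's API, while the real library builds and optimizes a computation graph instead:

```python
# Toy mimic of Theano's deferred evaluation: expressions are built as
# functions of an environment, then "compiled" into a callable.

def dscalar(name):
    # A symbolic variable simply looks its value up by name.
    sym = lambda env: env[name]
    sym.name = name
    return sym

def function(inputs, output):
    # Bind positional arguments to the symbolic inputs, then evaluate.
    def compiled(*values):
        env = {inp.name: v for inp, v in zip(inputs, values)}
        return output(env)
    return compiled

a = dscalar('a')
b = dscalar('b')
c = lambda env: a(env) * b(env)   # the expression c = a*b

f = function([a, b], c)
print(f(1.5, 3))  # 4.5, matching the Theano example
```

Nothing is computed until `f` is called with concrete values; that separation between defining an expression and evaluating it is the core idea.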
Before we go into complex functions, let's understand some inherent properties of Theano which will be useful in building neural networks.<\/p>\n<p>&nbsp;<\/p>\n<h2>3. Theano Variable Types<\/h2>\n<p>Variables are key building blocks of any programming language. In Theano, objects are defined as tensors. A tensor can be understood as a generalized form of a vector of order t, where different orders are analogous to different types:<\/p>\n<ul>\n<li>t = 0: scalar<\/li>\n<li>t = 1: vector<\/li>\n<li>t = 2: matrix<\/li>\n<li>and so on.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=f5liqUk0ZTw\" target=\"_blank\" rel=\"nofollow\">Watch this<\/a> interesting video to get a deeper intuition for vectors and tensors.<\/p>\n<p>These variables can be defined similarly to our definition of \u2018dscalar\u2019 in the above code. The various keywords for defining variables are:<\/p>\n<ul>\n<li><strong>byte:<\/strong> bscalar, bvector, bmatrix, brow, bcol, btensor3, btensor4<\/li>\n<li><strong>16-bit integers:<\/strong> wscalar, wvector, wmatrix, wrow, wcol, wtensor3, wtensor4<\/li>\n<li><strong>32-bit integers:<\/strong> iscalar, ivector, imatrix, irow, icol, itensor3, itensor4<\/li>\n<li><strong>64-bit integers:<\/strong> lscalar, lvector, lmatrix, lrow, lcol, ltensor3, ltensor4<\/li>\n<li><strong>float:<\/strong> fscalar, fvector, fmatrix, frow, fcol, ftensor3, ftensor4<\/li>\n<li><strong>double:<\/strong> dscalar, dvector, dmatrix, drow, dcol, dtensor3, dtensor4<\/li>\n<li><strong>complex:<\/strong> cscalar, cvector, cmatrix, crow, ccol, ctensor3, ctensor4<\/li>\n<\/ul>\n<p>Now you understand that we can define variables with different memory allocations and dimensions. But this is not an exhaustive list; we can define dimensions higher than 4 using the generic TensorType class. 
You\u2019ll find more details <a href=\"http:\/\/deeplearning.net\/software\/theano\/library\/tensor\/basic.html#libdoc-tensor-creation\" target=\"_blank\">here<\/a>.<\/p>\n<p>Please note that variables of these types are just symbols. They don\u2019t hold a value and are passed into functions as symbols; they only take values when a function is called. But we often need variables with a persistent state, which we need not pass into every function call. For this, Theano provides shared variables. These hold an actual value and are not of the symbolic types discussed above. They can be defined from numpy data types or simple constants.<\/p>\n<p>Let's take an example. Suppose we initialize a shared variable to 0 and use a function which:<\/p>\n<ul>\n<li>takes an input<\/li>\n<li>adds the input to the shared variable<\/li>\n<li>returns the square of the shared variable<\/li>\n<\/ul>\n<p>This can be done as:<\/p>\n<pre>from theano import shared\r\nx = T.iscalar('x')\r\nsh = shared(0)\r\nf = function([x], sh**2, updates=[(sh,sh+x)])<\/pre>\n<p>Note that here the function has an additional argument called updates. 
It has to be a list of lists or tuples, each containing 2 elements of the form (shared_variable, updated_value). The output for 3 subsequent runs is:<\/p>\n<p><a href=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/2.-shared.png\" rel=\"attachment wp-att-24611\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-24611 aligncenter\" src=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/2.-shared.png?resize=625%2C435\" sizes=\"(max-width: 625px) 100vw, 625px\" srcset=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/2.-shared.png?w=880 880w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/2.-shared.png?resize=300%2C209 300w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/2.-shared.png?resize=768%2C534 768w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/2.-shared.png?resize=850%2C591 850w\" alt=\"2. shared\" width=\"625\" height=\"435\" \/><\/a><\/p>\n<p>You can see that each run returns the square of the present value, i.e. the value before updating. After the run, the value of the shared variable gets updated. Also, note that shared variables have 2 methods, \u201cget_value()\u201d and \u201cset_value()\u201d, which are used to read and modify their value.<\/p>\n<p>&nbsp;<\/p>\n<h2>4. Theano Functions<\/h2>\n<p>So far, we have seen the basic structure of a function and how it handles shared variables. Let's move forward and discuss a couple more things we can do with functions:<\/p>\n<h4>Return Multiple Values<\/h4>\n<p>We can return multiple values from a function. 
This can be done easily, as shown in the following example:<\/p>\n<pre>a = T.dscalar('a')\r\nf = function([a],[a**2, a**3])\r\nf(3)<\/pre>\n<p><a href=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/3.-multiple-output.png\" rel=\"attachment wp-att-24612\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-24612\" src=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/3.-multiple-output.png?resize=272%2C32\" sizes=\"(max-width: 272px) 100vw, 272px\" srcset=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/3.-multiple-output.png?w=442 442w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/3.-multiple-output.png?resize=300%2C35 300w\" alt=\"3. multiple output\" width=\"272\" height=\"32\" \/><\/a><\/p>\n<p>We can see that the output is an array with the square and cube of the number passed into the function.<\/p>\n<h4>Computing Gradients<\/h4>\n<p>Gradient computation is one of the most important parts of training a deep learning model, and it can be done easily in Theano. Let's define a function as the cube of a variable and determine its gradient.<\/p>\n<pre>x = T.dscalar('x')\r\ny = x**3\r\nqy = T.grad(y,x)\r\nf = function([x],qy)\r\nf(4)<\/pre>\n<p>This returns 48, which is 3x<sup>2<\/sup> evaluated at x=4. 
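We can sanity-check that 48 is correct without Theano, using a central finite difference in plain Python (this check is my addition, not part of the original article; h is an arbitrary small step):

```python
# Numerically approximate d(x**3)/dx at x = 4 with a central difference.
def numeric_grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

g = numeric_grad(lambda x: x**3, 4.0)
print(round(g, 4))  # 48.0, agreeing with T.grad
```

Unlike this numerical approximation, T.grad differentiates the expression symbolically, so its result is exact.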
Let's see how Theano has implemented this derivative, using the pretty-print feature as follows:<\/p>\n<pre>from theano import pp  #pretty-print\r\nprint(pp(qy))<\/pre>\n<p><a href=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png\" rel=\"attachment wp-att-24613\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-24613\" src=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png?resize=855%2C37\" sizes=\"(max-width: 855px) 100vw, 855px\" srcset=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png?w=2000 2000w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png?resize=300%2C13 300w, http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png?resize=768%2C33 768w, http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png?resize=1024%2C44 1024w, http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/4.-pp.png?resize=850%2C37 850w\" alt=\"4. pp\" width=\"750\" height=\"32\" \/><\/a><\/p>\n<p>In short, it can be explained as: <strong>fill(x<sup>3<\/sup>,1)*3*x<sup>3-1<\/sup><\/strong>. You can see that this is exactly the <strong>derivative of x<sup>3<\/sup><\/strong>. Note that fill(x<sup>3<\/sup>,1) simply means making a matrix of the same shape as x<sup>3<\/sup> and filling it with 1. 
This is used to handle high-dimensional inputs and can be ignored in this case.<\/p>\n<p>We can use Theano to compute Jacobian and Hessian matrices as well, which you can learn about <a href=\"http:\/\/deeplearning.net\/software\/theano\/tutorial\/gradients.html\" target=\"_blank\">here<\/a>.<\/p>\n<p>There are various other aspects of Theano, like conditional and looping constructs. You can go into further detail using the following resources:<\/p>\n<ol>\n<li><a href=\"http:\/\/deeplearning.net\/software\/theano\/tutorial\/conditions.html\" target=\"_blank\">Theano Conditional Constructs<\/a><\/li>\n<li><a href=\"http:\/\/deeplearning.net\/software\/theano\/tutorial\/loop.html\" target=\"_blank\">Theano Looping Statements<\/a><\/li>\n<li><a href=\"http:\/\/deeplearning.net\/software\/theano\/tutorial\/shape_info.html\" target=\"_blank\">Handling Shape Information<\/a><\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<h2>5. Modeling a Single Neuron<\/h2>\n<p>Let's start by modeling a single neuron.<\/p>\n<p>Note that I will take examples from my previous article on neural networks here. If you wish to understand the details of how these work, please read <a href=\"http:\/\/www.analyticsvidhya.com\/blog\/2016\/03\/introduction-deep-learning-fundamentals-neural-networks\/\" target=\"_blank\">this article<\/a>. 
To model a neuron, let's adopt a 2-stage process:<\/p>\n<ol>\n<li>Implement the Feed Forward Pass\n<ul>\n<li>take inputs and determine the output<\/li>\n<li>use fixed weights for this case<\/li>\n<\/ul>\n<\/li>\n<li>Implement Backward Propagation\n<ul>\n<li>calculate the error and gradients<\/li>\n<li>update the weights using the gradients<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>Let's implement an AND gate for this purpose.<\/p>\n<p>&nbsp;<\/p>\n<h3>Feed Forward Pass<\/h3>\n<p>An AND gate can be implemented as:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-24092 aligncenter\" src=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/2.jpg?resize=324%2C271\" sizes=\"(max-width: 324px) 100vw, 324px\" srcset=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/2.jpg?w=324 324w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/2.jpg?resize=300%2C251 300w\" alt=\"2\" width=\"324\" height=\"271\" \/><\/p>\n<p>Now we will define a feed forward network which takes inputs and uses the weights shown to determine the output. First, we will define a neuron which computes the output a.<\/p>\n<pre>import theano\r\nimport theano.tensor as T\r\nfrom theano.ifelse import ifelse\r\nimport numpy as np\r\n\r\n#Define variables:\r\nx = T.vector('x')\r\nw = T.vector('w')\r\nb = T.scalar('b')\r\n\r\n#Define mathematical expression:\r\nz = T.dot(x,w)+b\r\na = ifelse(T.lt(z,0),0,1)\r\n\r\nneuron = theano.function([x,w,b],a)<\/pre>\n<p>I have simply used the steps we saw above. If you are not sure how this expression works, please refer to the neural networks article referenced above. 
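As a cross-check, the same computation can be sketched in plain numpy, independent of Theano (`neuron_np` is my own stand-in name, not from the article):

```python
import numpy as np

def neuron_np(x, w, b):
    # Step activation: fire (1) when the weighted sum reaches the threshold.
    z = np.dot(x, w) + b
    return 0 if z < 0 else 1

w, b = np.array([1, 1]), -1.5
outs = [neuron_np(np.array(x), w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outs)  # [0, 0, 0, 1] -- the AND truth table
```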
Now let's test all the values in the truth table and see if the AND function has been implemented as desired.<\/p>\n<pre>#Define inputs and weights\r\ninputs = [\r\n    [0, 0],\r\n    [0, 1],\r\n    [1, 0],\r\n    [1, 1]\r\n]\r\nweights = [ 1, 1]\r\nbias = -1.5\r\n\r\n#Iterate through all inputs and find outputs:\r\nfor i in range(len(inputs)):\r\n    t = inputs[i]\r\n    out = neuron(t,weights,bias)\r\n    print('The output for x1=%d | x2=%d is %d' % (t[0],t[1],out))<\/pre>\n<p><a href=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/5.-single-neuron.png\" rel=\"attachment wp-att-24615\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-24615 size-full\" src=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/5.-single-neuron.png?resize=540%2C146\" sizes=\"(max-width: 540px) 100vw, 540px\" srcset=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/5.-single-neuron.png?w=540 540w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/5.-single-neuron.png?resize=300%2C81 300w\" alt=\"5. single neuron\" width=\"540\" height=\"146\" \/><\/a><\/p>\n<p>Note that in this case we had to provide the weights while calling the function. However, we will need to update them while training, so it is better to define them as shared variables. The following code implements <em>w<\/em> as a shared variable. 
Try this out and you\u2019ll get the same output.<\/p>\n<pre>import theano\r\nimport theano.tensor as T\r\nfrom theano.ifelse import ifelse\r\nimport numpy as np\r\n\r\n#Define variables:\r\nx = T.vector('x')\r\nw = theano.shared(np.array([1,1]))\r\nb = theano.shared(-1.5)\r\n\r\n#Define mathematical expression:\r\nz = T.dot(x,w)+b\r\na = ifelse(T.lt(z,0),0,1)\r\n\r\nneuron = theano.function([x],a)\r\n\r\n#Define inputs and weights\r\ninputs = [\r\n    [0, 0],\r\n    [0, 1],\r\n    [1, 0],\r\n    [1, 1]\r\n]\r\n\r\n#Iterate through all inputs and find outputs:\r\nfor i in range(len(inputs)):\r\n    t = inputs[i]\r\n    out = neuron(t)\r\n    print('The output for x1=%d | x2=%d is %d' % (t[0],t[1],out))<\/pre>\n<p>Now the feedforward step is complete.<\/p>\n<p>&nbsp;<\/p>\n<h3>Backward Propagation<\/h3>\n<p>Now we have to modify the above code and perform the following additional steps:<\/p>\n<ol>\n<li>Determine the cost or error based on the true output<\/li>\n<li>Determine the gradients of the nodes<\/li>\n<li>Update the weights using these gradients<\/li>\n<\/ol>\n<p>Let's initialize the network as follows:<\/p>\n<pre>#Gradient\r\nimport theano\r\nimport theano.tensor as T\r\nfrom theano.ifelse import ifelse\r\nimport numpy as np\r\nfrom random import random\r\n\r\n#Define variables:\r\nx = T.matrix('x')\r\nw = theano.shared(np.array([random(),random()]))\r\nb = theano.shared(1.)\r\nlearning_rate = 0.01\r\n\r\n#Define mathematical expression:\r\nz = T.dot(x,w)+b\r\na = 1\/(1+T.exp(-z))<\/pre>\n<p>You will notice a change here compared to the above program: I have defined x as a matrix and not a vector. This is more of a vectorized approach, where we determine all the outputs together and find the total cost, which is required for determining the gradients.<\/p>\n<p>You should also keep in mind that I am using full-batch gradient descent here, i.e. 
we will use all training observations to update the weights.<\/p>\n<p>Let's determine the cost as follows:<\/p>\n<pre>a_hat = T.vector('a_hat') #Actual output\r\ncost = -(a_hat*T.log(a) + (1-a_hat)*T.log(1-a)).sum()<\/pre>\n<p>In this code, we have defined a_hat as the actual observations. Then we determine the cost using a simple logistic cost function, since this is a classification problem. Now let's compute the gradients and define a means to update the weights.<\/p>\n<pre>dw,db = T.grad(cost,[w,b])\r\n\r\ntrain = theano.function(\r\n    inputs = [x,a_hat],\r\n    outputs = [a,cost],\r\n    updates = [\r\n        [w, w-learning_rate*dw],\r\n        [b, b-learning_rate*db]\r\n    ]\r\n)<\/pre>\n<p>Here, we first compute the gradients of the cost w.r.t. the input weights and the bias unit. Then, the train function does the weight update job. This is an elegant but tricky approach, where the weights are defined as shared variables and the updates argument of the function is used to update them every time a set of values is passed through the model.<\/p>\n<pre>#Define inputs and outputs\r\ninputs = [\r\n    [0, 0],\r\n    [0, 1],\r\n    [1, 0],\r\n    [1, 1]\r\n]\r\noutputs = [0,0,0,1]\r\n\r\n#Iterate and train, recording the cost:\r\ncost = []\r\nfor iteration in range(30000):\r\n    pred, cost_iter = train(inputs, outputs)\r\n    cost.append(cost_iter)\r\n    \r\n#Print the outputs:\r\nprint('The outputs of the NN are:')\r\nfor i in range(len(inputs)):\r\n    print('The output for x1=%d | x2=%d is %.2f' % (inputs[i][0],inputs[i][1],pred[i]))\r\n    \r\n#Plot the flow of cost:\r\nprint('\\nThe flow of cost during the model run is as follows:')\r\nimport matplotlib.pyplot as plt\r\n%matplotlib inline\r\nplt.plot(cost)<\/pre>\n<p><a href=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/6.-output-single-update.png\" rel=\"attachment wp-att-24627\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-24627 aligncenter\" 
src=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/6.-output-single-update.png?resize=749%2C711\" sizes=\"(max-width: 749px) 100vw, 749px\" srcset=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/6.-output-single-update.png?w=880 880w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/6.-output-single-update.png?resize=300%2C285 300w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/6.-output-single-update.png?resize=768%2C730 768w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/6.-output-single-update.png?resize=850%2C808 850w\" alt=\"6. output single update\" width=\"749\" height=\"712\" \/><\/a><\/p>\n<p>Here we have simply defined the inputs and outputs, and trained the model. While training, we also recorded the cost, and its plot shows that the cost decreased towards zero and finally saturated at a low value. The output of the network also closely matched the desired output. Hence, we have successfully implemented and trained a single neuron.<\/p>\n<p>&nbsp;<\/p>\n<h2>6. Modeling a Two-Layer Neural Network<\/h2>\n<p>I hope you have understood the last section. If not, please re-read it before proceeding to this section. Along with teaching you Theano, this will enhance your understanding of neural networks as a whole.<\/p>\n<p>Let's consolidate our understanding by working through a 2-layer example. To keep things simple, I'll take the XNOR example from my previous article. 
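As a quick refresher, XNOR outputs 1 exactly when its two binary inputs agree, which can be generated in a couple of lines of Python:

```python
# XNOR(x1, x2) = NOT (x1 XOR x2): true when both inputs are equal.
xnor = lambda x1, x2: int(not (x1 ^ x2))
table = [xnor(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(table)  # [1, 0, 0, 1]
```

Unlike AND, this function is not linearly separable, which is why a single neuron is not enough and we need a hidden layer.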
If you wish to explore the nitty-gritty of how it works, I recommend reading the <a href=\"http:\/\/www.analyticsvidhya.com\/blog\/2016\/03\/introduction-deep-learning-fundamentals-neural-networks\/\" target=\"_blank\">previous\u00a0article<\/a>.<\/p>\n<p>The XNOR function can be implemented as:<\/p>\n<p><a href=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/6.jpg\" rel=\"attachment wp-att-24098\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-24098 aligncenter\" src=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/6.jpg?resize=499%2C288\" sizes=\"(max-width: 499px) 100vw, 499px\" srcset=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/6.jpg?w=499 499w, http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/6.jpg?resize=300%2C173 300w\" alt=\"6\" width=\"499\" height=\"288\" \/><\/a><\/p>\n<p>As a reminder, the truth table of XNOR function is:<\/p>\n<p><a href=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/8.-tt-xnor-case-1.png\" rel=\"attachment wp-att-24026\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-24026 aligncenter\" src=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/8.-tt-xnor-case-1.png?resize=980%2C214\" sizes=\"(max-width: 980px) 100vw, 980px\" srcset=\"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/8.-tt-xnor-case-1.png?w=980 980w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/8.-tt-xnor-case-1.png?resize=300%2C66 300w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/8.-tt-xnor-case-1.png?resize=768%2C168 768w, http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/03\/8.-tt-xnor-case-1.png?resize=850%2C186 850w\" alt=\"8. 
tt xnor case 1\" width=\"750\" height=\"164\" \/><\/a><\/p>\n<p>Now we will implement both the feed-forward and backward passes in one go.<\/p>\n<h3>Step 1: Define variables<\/h3>\n<pre>import theano\r\nimport theano.tensor as T\r\nfrom theano.ifelse import ifelse\r\nimport numpy as np\r\nfrom random import random\r\n\r\n#Define variables:\r\nx = T.matrix('x')\r\nw1 = theano.shared(np.array([random(),random()]))\r\nw2 = theano.shared(np.array([random(),random()]))\r\nw3 = theano.shared(np.array([random(),random()]))\r\nb1 = theano.shared(1.)\r\nb2 = theano.shared(1.)\r\nlearning_rate = 0.01<\/pre>\n<p>In this step we have defined all the required variables, as in the previous case. Note that now we have a weight vector for each of the 3 neurons and 2 bias units corresponding to the 2 layers.<\/p>\n<p>&nbsp;<\/p>\n<h3>Step 2: Define the mathematical expression<\/h3>\n<pre>a1 = 1\/(1+T.exp(-T.dot(x,w1)-b1))\r\na2 = 1\/(1+T.exp(-T.dot(x,w2)-b1))\r\nx2 = T.stack([a1,a2],axis=1)\r\na3 = 1\/(1+T.exp(-T.dot(x2,w3)-b2))<\/pre>\n<p>Here we have simply defined mathematical expressions for each neuron in sequence. Note that an additional step was required here, where x2 is determined. This is needed because we want the outputs of a1 and a2 to be combined into a matrix whose dot product can be taken with the weight vector w3.<\/p>\n<p>Let's explore this a bit further. Both a1 and a2 return a vector with 4 elements. So if we simply take the array [a1, a2], we'll obtain something like [ [a11,a12,a13,a14], [a21,a22,a23,a24] ]. However, we want this to be [ [a11,a21], [a12,a22], [a13,a23], [a14,a24] ]. 
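The same rearrangement can be seen in numpy, where np.stack with axis=1 behaves analogously (the values below are stand-ins for the neuron outputs):

```python
import numpy as np

a1 = np.array([11, 12, 13, 14])  # stand-ins for the 4 outputs of neuron 1
a2 = np.array([21, 22, 23, 24])  # stand-ins for the 4 outputs of neuron 2

x2 = np.stack([a1, a2], axis=1)  # pair up the i-th outputs of both neurons
print(x2.tolist())  # [[11, 21], [12, 22], [13, 23], [14, 24]]
```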
The stacking function of Theano does this job for us.<\/p>\n<p>&nbsp;<\/p>\n<h3>Step 3: Define the gradients and update rule<\/h3>\n<pre>a_hat = T.vector('a_hat') #Actual output\r\ncost = -(a_hat*T.log(a3) + (1-a_hat)*T.log(1-a3)).sum()\r\ndw1,dw2,dw3,db1,db2 = T.grad(cost,[w1,w2,w3,b1,b2])\r\n\r\ntrain = theano.function(\r\n    inputs = [x,a_hat],\r\n    outputs = [a3,cost],\r\n    updates = [\r\n        [w1, w1-learning_rate*dw1],\r\n        [w2, w2-learning_rate*dw2],\r\n        [w3, w3-learning_rate*dw3],\r\n        [b1, b1-learning_rate*db1],\r\n        [b2, b2-learning_rate*db2]\r\n    ]\r\n)<\/pre>\n<p>This is very similar to the previous case. The key difference is that we now have to determine the gradients of 3 weight vectors and 2 bias units, and update them accordingly.<\/p>\n<p>&nbsp;<\/p>\n<h3>Step 4: Train the model<\/h3>\n<pre>inputs = [\r\n    [0, 0],\r\n    [0, 1],\r\n    [1, 0],\r\n    [1, 1]\r\n]\r\noutputs = [1,0,0,1]\r\n\r\n#Iterate and train, recording the cost:\r\ncost = []\r\nfor iteration in range(30000):\r\n    pred, cost_iter = train(inputs, outputs)\r\n    cost.append(cost_iter)\r\n    \r\n#Print the outputs:\r\nprint('The outputs of the NN are:')\r\nfor i in range(len(inputs)):\r\n    print('The output for x1=%d | x2=%d is %.2f' % (inputs[i][0],inputs[i][1],pred[i]))\r\n    \r\n#Plot the flow of cost:\r\nprint('\\nThe flow of cost during the model run is as follows:')\r\nimport matplotlib.pyplot as plt\r\n%matplotlib inline\r\nplt.plot(cost)<\/pre>\n<p><a href=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/7.-2-layer-op.png\" rel=\"attachment wp-att-24628\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24628\" src=\"http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/7.-2-layer-op.png?resize=912%2C834\" sizes=\"(max-width: 912px) 100vw, 912px\" srcset=\"http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/7.-2-layer-op.png?w=912 912w, 
http:\/\/i0.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/7.-2-layer-op.png?resize=300%2C274 300w, http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/7.-2-layer-op.png?resize=768%2C702 768w, http:\/\/i2.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/7.-2-layer-op.png?resize=850%2C777 850w\" alt=\"7. 2 layer op\" width=\"750\" height=\"686\" \/><\/a><\/p>\n<p>We can see that our network has successfully learned the XNOR function, and the cost of the model has reduced to a reasonably low value. With this, we have successfully implemented a 2-layer network.<\/p>\n<p>&nbsp;<\/p>\n<h2>End Notes<\/h2>\n<p>In this article, we covered the basics of the Theano package in Python and how it acts as a programming language. We also implemented some basic neural networks using Theano. I am sure that implementing Neural Networks in Theano will enhance your understanding of NNs as a whole.<\/p>\n<p>If you have been able to follow to this point, you really deserve a pat on the back. Theano is not a traditional plug-and-play system like most of sklearn's ML models. But the beauty of neural networks lies in their flexibility, and an approach like this allows you a high degree of customization in your models. Some high-level wrappers of Theano, like Keras and Lasagne, exist as well, which you can check out. 
But I believe knowing the core of Theano will help you in using them.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Aarshay Jain Source: http:\/\/www.analyticsvidhya.com\/ &hellip; <a href=\"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano\" class=\"more-link\">\u7ee7\u7eed\u9605\u8bfb<span class=\"screen-reader-text\">\u201cPractical Guide to implementing Neural Networks in Python (using Theano)\u201d<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","categories":[3],"tags":[56,78]}
Theano)\"}]},{\"@type\":\"WebSite\",\"@id\":\"http:\/\/blog.softwareclues.com\/#website\",\"url\":\"http:\/\/blog.softwareclues.com\/\",\"name\":\"\u8f6f\u4ef6\u542f\u793a\u5f55\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"http:\/\/blog.softwareclues.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Person\",\"@id\":\"http:\/\/blog.softwareclues.com\/#\/schema\/person\/4c47e4e97a658930b6c0e90f4a4eda82\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"http:\/\/blog.softwareclues.com\/#\/schema\/person\/image\/\",\"url\":\"http:\/\/2.gravatar.com\/avatar\/e4fb391d9f5bb29583ed9579324a5e17?s=96&d=mystery&r=g\",\"contentUrl\":\"http:\/\/2.gravatar.com\/avatar\/e4fb391d9f5bb29583ed9579324a5e17?s=96&d=mystery&r=g\",\"caption\":\"Editorial Team\"},\"url\":\"http:\/\/blog.softwareclues.com\/zh\/author\/admin\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Practical Guide to implementing Neural Networks in Python (using Theano) - \u8f6f\u4ef6\u542f\u793a\u5f55","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano","og_locale":"zh_CN","og_type":"article","og_title":"Practical Guide to implementing Neural Networks in Python (using Theano) - \u8f6f\u4ef6\u542f\u793a\u5f55","og_url":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano","og_site_name":"\u8f6f\u4ef6\u542f\u793a\u5f55","article_published_time":"2016-05-07T21:44:58+00:00","og_image":[{"url":"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=500%2C280"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_misc":{"\u4f5c\u8005":"Editorial Team","\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"15 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano","url":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano","name":"Practical Guide to implementing Neural Networks in Python (using Theano) - 
\u8f6f\u4ef6\u542f\u793a\u5f55","isPartOf":{"@id":"http:\/\/blog.softwareclues.com\/#website"},"primaryImageOfPage":{"@id":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano#primaryimage"},"image":{"@id":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano#primaryimage"},"thumbnailUrl":"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=500%2C280","datePublished":"2016-05-07T21:44:58+00:00","dateModified":"2016-05-07T21:44:58+00:00","author":{"@id":"http:\/\/blog.softwareclues.com\/#\/schema\/person\/4c47e4e97a658930b6c0e90f4a4eda82"},"breadcrumb":{"@id":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano#breadcrumb"},"inLanguage":"zh-Hans","potentialAction":[{"@type":"ReadAction","target":["http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano"]}]},{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano#primaryimage","url":"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=500%2C280","contentUrl":"http:\/\/i1.wp.com\/www.analyticsvidhya.com\/wp-content\/uploads\/2016\/04\/1-1.jpg?resize=500%2C280"},{"@type":"BreadcrumbList","@id":"http:\/\/blog.softwareclues.com\/zh\/practical-guide-to-implementing-neural-networks-in-python-using-theano#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"http:\/\/blog.softwareclues.com\/"},{"@type":"ListItem","position":2,"name":"Practical Guide to implementing Neural Networks in Python (using 
Theano)"}]},{"@type":"WebSite","@id":"http:\/\/blog.softwareclues.com\/#website","url":"http:\/\/blog.softwareclues.com\/","name":"\u8f6f\u4ef6\u542f\u793a\u5f55","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"http:\/\/blog.softwareclues.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"zh-Hans"},{"@type":"Person","@id":"http:\/\/blog.softwareclues.com\/#\/schema\/person\/4c47e4e97a658930b6c0e90f4a4eda82","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"http:\/\/blog.softwareclues.com\/#\/schema\/person\/image\/","url":"http:\/\/2.gravatar.com\/avatar\/e4fb391d9f5bb29583ed9579324a5e17?s=96&d=mystery&r=g","contentUrl":"http:\/\/2.gravatar.com\/avatar\/e4fb391d9f5bb29583ed9579324a5e17?s=96&d=mystery&r=g","caption":"Editorial Team"},"url":"http:\/\/blog.softwareclues.com\/zh\/author\/admin"}]}},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/paLJfj-4s","jetpack-related-posts":[],"_links":{"self":[{"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/posts\/276"}],"collection":[{"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/comments?post=276"}],"version-history":[{"count":2,"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/posts\/276\/revisions"}],"predecessor-version":[{"id":278,"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/posts\/276\/revisions\/278"}],"wp:attachment":[{"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/media?parent=276"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/blog.softwareclues.com
\/zh\/wp-json\/wp\/v2\/categories?post=276"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/blog.softwareclues.com\/zh\/wp-json\/wp\/v2\/tags?post=276"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}