AI
= Applications =
* No one knows when a person's time will come: US-developed AI predicts end of life with 90% accuracy
* The US FDA approves its first AI medical device, which automatically detects diabetic retinopathy in real time
* Aging at home: technology lends a big hand
* Pathology research has a new helper: Google combines an AR microscope with deep learning to detect cancer cells in real time
* This New App Is Like Shazam for Your Nature Photos. Seek App.
* Draw This camera prints crappy drawings of the things you photograph (DIY) with Google's quickdraw.
* What Are Machine Learning Algorithms? Here's How They Work
* Google's open-source AI tool is three years old and is used in many places you would not expect, Nov 2018

= TensorFlow =
* https://www.tensorflow.org/
* https://tensorflow.rstudio.com/
* R interface to Keras. I followed the installation instructions but got an "illegal operand" error; the solution is to use an older version of TensorFlow (see here): library(keras); install_keras(tensorflow = "1.5") (Ubuntu 16.04, Phenom(tm) II X6 1055T)
* https://rviews.rstudio.com/2018/04/03/r-and-tensorflow-presentations/, Slides
* https://hub.docker.com/r/andrie/tensorflowr/ (outdated)
* Deep Learning on Biowulf
* Raspberry Pi
* Books
** Deep Learning with R by François Chollet with J. J. Allaire, 2018. ISBN-10: 161729554X (available on safaribooksonline)
** [https://www.manning.com/books/deep-learning-with-python Deep Learning with Python] by François Chollet, 2017 (available on safaribooksonline)
** [https://github.com/janishar/mit-deep-learning-book-pdf Deep Learning] by Ian Goodfellow, Yoshua Bengio and Aaron Courville
* Deep Learning Glossary
** http://www.wildml.com/deep-learning-glossary/
** [https://www.quora.com/What-is-an-epoch-in-deep-learning What is an epoch in deep learning?]
== Keras ==
* Derivative of a tensor operation: the gradient
** Define loss_value = f(W) = dot(W, x)
** W1 = W0 - step * gradient(f)(W0)
* Stochastic gradient descent
* Tensor operations:
** relu(x) = max(0, x)
** Each neural layer from our first network example transforms its input data: '''output = relu(dot(W, input) + b)''', where W and b are the ''weights'' or ''trainable parameters'' of the layer. The sketch below writes this transform out in base R.
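A minimal base-R sketch of that layer transform; the relu() and dense_forward() helpers and the shapes (a batch of 2 samples, 784 inputs, 32 units) are invented here for illustration: <syntaxhighlight lang='rsplus'>
# relu and a dense layer's forward pass, written out in base R
relu <- function(x) pmax(x, 0)  # element-wise max(0, x)

dense_forward <- function(input, W, b) {
  z <- sweep(input %*% W, 2, b, "+")  # dot(input, W), then add bias b to each row
  relu(z)                             # output = relu(dot(W, input) + b)
}

set.seed(1)
input <- matrix(rnorm(2 * 784), nrow = 2)     # batch of 2 samples, 784 features each
W <- matrix(rnorm(784 * 32) * 0.01, 784, 32)  # weights: 784 inputs -> 32 units
b <- rep(0, 32)                               # one bias per unit
dim(dense_forward(input, W, b))               # 2 x 32
</syntaxhighlight>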
Training process (a toy base-R version follows this list):
# Draw a batch of training samples x and corresponding targets y.
# Run the network on x (a step called the ''forward pass'') to obtain predictions y_pred.
# Compute the loss of the network on the batch, a measure of the mismatch between y_pred and y.
# Update all weights of the network in a way that slightly reduces the loss on this batch.
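The following toy loop runs these four steps for the simplest possible model, a single weight w with y_pred = x * w and a squared-error loss; the data, initial weight, and learning rate are invented for illustration. The weight update is the rule W1 = W0 - step * gradient(f)(W0) from above: <syntaxhighlight lang='rsplus'>
# Toy training loop: fit y = x * w by mini-batch gradient descent
set.seed(1)
x_all <- runif(1000)
y_all <- 3 * x_all + rnorm(1000, sd = 0.1)  # "true" weight is 3
w <- 0          # initial weight
step <- 0.1     # learning rate

for (i in 1:200) {
  idx <- sample(length(x_all), 128)      # 1. draw a batch of x and y
  x <- x_all[idx]; y <- y_all[idx]
  y_pred <- x * w                        # 2. forward pass
  loss <- mean((y_pred - y)^2)           # 3. loss on the batch (MSE)
  grad <- mean(2 * (y_pred - y) * x)     # gradient of the loss w.r.t. w
  w <- w - step * grad                   # 4. update: w1 = w0 - step * gradient
}
w  # close to 3
</syntaxhighlight>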
Keras workflow (to use Keras, you also need to install one of its backends: TensorFlow, CNTK, or Theano); the four steps are combined into one sketch after this list:
# Define your training data: input tensors and target tensors.
# Define a network of layers (or model). Two ways to define a model:
## using the '''keras_model_sequential()''' function (only for linear stacks of layers, which is the most common network architecture by far) or <syntaxhighlight lang='rsplus'>
model <- keras_model_sequential() %>%
  layer_dense(units = 32, input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")
</syntaxhighlight>
## the functional API (for directed acyclic graphs of layers, which let you build completely arbitrary architectures) <syntaxhighlight lang='rsplus'>
input_tensor <- layer_input(shape = c(784))
output_tensor <- input_tensor %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")
model <- keras_model(inputs = input_tensor, outputs = output_tensor)
</syntaxhighlight>
# Configure the learning process by choosing a loss function, an optimizer, and some metrics to monitor. <syntaxhighlight lang='rsplus'>
model %>% compile(
  optimizer = optimizer_rmsprop(lr = 0.0001),
  loss = "mse",
  metrics = c("accuracy")
)
</syntaxhighlight>
# Iterate on your training data by calling the fit() method of your model. <syntaxhighlight lang='rsplus'>
model %>% fit(input_tensor, target_tensor, batch_size = 128, epochs = 10)
</syntaxhighlight>
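Putting the four steps together: a minimal end-to-end sketch. The MNIST digits data bundled with keras stands in for the unspecified training data (my choice, not from the text), and the loss is swapped to categorical_crossentropy, which suits a 10-class softmax output better than mse: <syntaxhighlight lang='rsplus'>
library(keras)

# 1. Training data: flatten 28x28 images to 784-long vectors, one-hot the labels
mnist <- dataset_mnist()
x_train <- array_reshape(mnist$train$x, c(60000, 784)) / 255
y_train <- to_categorical(mnist$train$y, 10)

# 2. Model: the sequential network defined above
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")

# 3. Compile: loss, optimizer, and metrics
model %>% compile(
  optimizer = optimizer_rmsprop(lr = 0.0001),
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

# 4. Fit
model %>% fit(x_train, y_train, batch_size = 128, epochs = 10)
</syntaxhighlight>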
The following examples can be found at [https://github.com/jjallaire/deep-learning-with-r-notebooks R Markdown Notebooks for "Deep Learning with R"].
== Some examples ==
* [https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.4-classifying-movie-reviews.nb.html Binary classification (movie reviews)]
* [https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.5-classifying-newswires.nb.html Multiclass classification (newswires)]
* [https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.6-predicting-house-prices.nb.html Regression (house prices)]
= PyTorch =
[https://longhowlam.wordpress.com/2018/12/17/an-r-shiny-app-to-recognize-flower-species/ An R Shiny app to recognize flower species]