AI

Applications

TensorFlow

Keras

  • Derivative of a tensor operation: the gradient
    • Define loss_value = f(W) = dot(W, x)
    • W1 = W0 - step * gradient(f)(W0)
  • Stochastic gradient descent
  • Tensor operations:
    • relu(x) = max(0, x)
    • Each neural layer from our first network example transforms its input data: output = relu(dot(W, input) + b), where W and b are the weights, or trainable parameters, of the layer (see the sketch below).
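
A minimal sketch of these operations in base R (not Keras); the concrete values of W, b, x, and step are made up for illustration:

relu <- function(x) pmax(x, 0)             # zero out negative values
sigmoid <- function(x) 1 / (1 + exp(-x))   # squash any value into (0, 1)
sigmoid(c(-2, 0, 2))                       # ~0.12 0.50 0.88

W <- c(0.5, -0.3, 0.8)                     # layer weights (illustrative)
b <- 0.1                                   # bias
x <- c(1, 2, 3)                            # one input sample

output <- relu(sum(W * x) + b)             # output = relu(dot(W, input) + b)

# One gradient-descent step on loss_value = f(W) = dot(W, x):
# the gradient of dot(W, x) with respect to W is x itself, so
step <- 0.01
W1 <- W - step * x                         # W1 = W0 - step * gradient(f)(W0)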

Training process

  1. Draw a batch of training samples x and corresponding targets y.
  2. Run the network on x (a step called the forward pass) to obtain predictions y_pred.
  3. Compute the loss of the network on the batch (a measure of the mismatch between y_pred and y).
  4. Update all weights of the network in a way that slightly reduces the loss on this batch. A bare-bones version of this loop is sketched below.
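
As a hand-rolled illustration of these four steps, here is mini-batch gradient descent for a plain linear model in base R; the data, batch size, and learning rate are invented for the example:

set.seed(1)
X <- matrix(rnorm(200), ncol = 2)            # 100 samples, 2 features (made up)
y <- X %*% c(3, -2) + rnorm(100, sd = 0.1)   # targets from a known linear rule
W <- c(0, 0)                                 # weights to be learned
lr <- 0.1                                    # learning rate (illustrative)

for (iteration in 1:200) {
  idx <- sample(nrow(X), 32)                 # 1. draw a batch of samples and targets
  x_batch <- X[idx, ]
  y_batch <- y[idx]
  y_pred <- x_batch %*% W                    # 2. forward pass
  loss <- mean((y_pred - y_batch)^2)         # 3. loss on the batch (mean squared error)
  grad <- 2 * t(x_batch) %*% (y_pred - y_batch) / length(y_batch)
  W <- W - lr * as.vector(grad)              # 4. update weights to reduce the loss
}
W                                            # close to the true c(3, -2)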

Keras (to use Keras, you first need to install a backend: TensorFlow, CNTK, or Theano):

  1. Define your training data: input tensors and target tensors.
  2. Define a network of layers (or model). Two ways to define a model:
    1. using the keras_model_sequential() function (only for linear stacks of layers, which is the most common network architecture by far) or
      model <- keras_model_sequential() %>%
        layer_dense(units = 32, input_shape = c(784)) %>%
        layer_dense(units = 10, activation = "softmax")
    2. the functional API (for directed acyclic graphs of layers, which let you build completely arbitrary architectures)
      input_tensor <- layer_input(shape = c(784))
      
      output_tensor <- input_tensor %>%
        layer_dense(units = 32, activation = "relu") %>%
        layer_dense(units = 10, activation = "softmax")
      
      model <- keras_model(inputs = input_tensor, outputs = output_tensor)
  3. Compile the learning process by choosing a loss function, an optimizer, and some metrics to monitor.
    model %>% compile(
      optimizer = optimizer_rmsprop(lr = 0.0001),
      loss = "mse",
      metrics = c("accuracy")
    )
  4. Iterate on your training data by calling the fit() method of your model.
    model %>% fit(input_tensor, target_tensor, batch_size = 128, epochs = 10)

The following examples can be found in the R Markdown Notebooks for "Deep Learning with R".

Some examples

  • Binary data (https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.4-classifying-movie-reviews.nb.html).
    • The final layer uses a sigmoid activation so as to output a probability (a score between 0 and 1 indicating how likely the sample is to have the target “1”).
    • A relu (rectified linear unit) is a function meant to zero out negative values, while a sigmoid “squashes” arbitrary values into the [0, 1] interval, thus outputting something that can be interpreted as a probability.
library(keras)
imdb <- dataset_imdb(num_words = 10000)
c(c(train_data, train_labels), c(test_data, test_labels)) %<-% imdb

# Preparing the data
vectorize_sequences <- function(sequences, dimension = 10000) {
  # Multi-hot encode the integer word indices into a binary matrix
  results <- matrix(0, nrow = length(sequences), ncol = dimension)
  for (i in 1:length(sequences))
    results[i, sequences[[i]]] <- 1
  results
}
x_train <- vectorize_sequences(train_data)
x_test <- vectorize_sequences(test_data)
y_train <- as.numeric(train_labels)
y_test <- as.numeric(test_labels)

# Build the network
## Two intermediate layers with 16 hidden units each
## The final layer will output the scalar prediction
model <- keras_model_sequential() %>% 
  layer_dense(units = 16, activation = "relu", input_shape = c(10000)) %>% 
  layer_dense(units = 16, activation = "relu") %>% 
  layer_dense(units = 1, activation = "sigmoid")
model %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)
model %>% fit(x_train, y_train, epochs = 4, batch_size = 512)
## Error in py_call_impl(callable, dots$args, dots$keywords) : MemoryError: 

# Validation
results <- model %>% evaluate(x_test, y_test)

# Prediction on new data
model %>% predict(x_test[1:10,])

  • Multi class data (https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.5-classifying-newswires.nb.html).
  • Regression data (https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/3.6-predicting-house-prices.nb.html).

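Following the pattern of the linked notebooks, only the final layer and the compile() call need to change for the multi-class and regression cases; the hidden-layer sizes and input shapes below are illustrative:

# Multi-class classification: one softmax probability per class
model_multi <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(10000)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 46, activation = "softmax")
model_multi %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

# Regression: a single unit with no activation, plus a regression loss/metric
model_reg <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(13)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 1)
model_reg %>% compile(
  optimizer = "rmsprop",
  loss = "mse",
  metrics = c("mae")
)
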
PyTorch

An R Shiny app to recognize flower species