
CS 486/686 Winter 2023 Assignment 4

2 Neural Networks (65 marks)

In this part of the assignment, you will implement a feedforward neural network from scratch. Additionally, you will implement multiple activation functions, loss functions, and performance metrics. Lastly, you will train a neural network model to perform both a classification and a regression task.

2.1 Bank Note Forgery - A Classification Problem

The classification problem we will examine is the prediction of whether or not a bank note is forged. The labelled dataset included in the assignment was downloaded from the UCI Machine Learning Repository. The target y ∈ {0, 1} is a binary variable, where 0 and 1 refer to fake and real respectively. The features are all real-valued. They are listed below:

• Variance of the transformed image of the bank note
• Skewness of the transformed image of the bank note
• Curtosis of the transformed image of the bank note
• Entropy of the image

2.2 Red Wine Quality - A Regression Problem

The task is to predict the quality of red wine from northern Portugal, given some physical characteristics of the wine. The target y ∈ [0, 10] is a continuous variable, where 10 is the best possible wine, according to human tasters. Again, this dataset was downloaded from the UCI Machine Learning Repository. The features are all real-valued. They are listed below:

• Fixed acidity
• Volatile acidity
• Citric acid
• Residual sugar
• Chlorides
• Free sulfur dioxide
• Total sulfur dioxide
• Density
• pH
• Sulphates
• Alcohol

2.3 Training a Neural Network

In Lecture 15, you learned how to train a neural network using the backpropagation algorithm. In this assignment, you will apply the forward and backward pass to the entire dataset simultaneously (i.e. batch gradient descent, where one batch is the entire dataset). As a result, your forward and backward passes will manipulate tensors, where the first dimension is the number of examples in the training set, n. When updating an individual weight W^(l)_{i,j}, you will need to find the sum of the partial derivatives ∂E/∂W^(l)_{i,j} across all examples in the training set to apply the update. Algorithm 1 gives the training algorithm in terms of functions that you will implement in this assignment. Further details can be found in the documentation for each function in the provided source code.
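Written out, the batch update for a single weight then takes the form below. This is a reconstruction from the description above (the original equation was garbled in extraction); here η is the learning rate from Algorithm 1 and E_k denotes the error on the k-th training example:

$$W^{(l)}_{i,j} \leftarrow W^{(l)}_{i,j} - \eta \sum_{k=1}^{n} \frac{\partial E_k}{\partial W^{(l)}_{i,j}}$$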

Algorithm 1 Gradient descent with backpropagation

Require: η > 0                                      ▷ Learning rate
Require: n_epochs ∈ ℕ⁺                              ▷ Number of epochs
Require: X ∈ ℝ^(n×f)                                ▷ Training examples with n examples and f features
Require: y ∈ ℝ^n                                    ▷ Targets for training examples

Initialize weight matrices W^(ℓ) randomly for each layer.       ▷ Initialize net
for i ∈ {1, 2, ..., n_epochs} do                                ▷ Conduct n_epochs epochs
    A_vals, Z_vals ← net.forward_pass(X)                        ▷ Forward pass
    ŷ ← Z_vals[-1]                                              ▷ Predictions
    L ← L(ŷ, y)
    Compute ∂L(ŷ, y)/∂ŷ                                         ▷ Derivative of error with respect to predictions
    deltas ← backward_pass(A_vals, ∂L(ŷ, y)/∂ŷ)                 ▷ Backward pass: ∂L/∂W^(ℓ)_{i,j} for each weight
    update_gradients()                                          ▷ W^(ℓ)_{i,j} ← W^(ℓ)_{i,j} − η Σ_n ∂L/∂W^(ℓ)_{i,j}
end for
return trained weight matrices W^(ℓ)
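As a rough Python illustration of Algorithm 1, the training loop might look as follows. This is a minimal sketch only: the handout names forward_pass(), backward_pass(), and update_weights() as the methods you implement, but the exact signatures, return values, and the loss-object interface (value()/derivative()) used here are assumptions; the documentation in the provided neural_net.py is authoritative.

```python
def train(net, X, y, loss_fn, eta=0.01, n_epochs=1000):
    """Batch gradient descent with backpropagation (sketch of Algorithm 1).

    Assumes `net` exposes forward_pass/backward_pass/update_weights roughly
    as described in the handout; the real signatures may differ.
    """
    losses = []
    for _ in range(n_epochs):
        # Forward pass over the entire dataset at once (one batch = all n examples).
        A_vals, Z_vals = net.forward_pass(X)
        y_hat = Z_vals[-1]                       # predictions from the output layer

        losses.append(loss_fn.value(y_hat, y))   # track training loss per epoch

        # Derivative of the loss with respect to the predictions.
        dL_dyhat = loss_fn.derivative(y_hat, y)

        # Backward pass yields the per-layer deltas; the update step applies
        # W <- W - eta * (gradient summed over all n examples).
        deltas = net.backward_pass(A_vals, dL_dyhat)
        net.update_weights(deltas, eta)
    return losses
```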

2.4 Activation and Loss Functions


You will implement the following activation functions and their derivatives:

• Sigmoid:

$$g(x) = \frac{1}{1 + e^{-kx}}$$

• ReLU:

$$g(x) = \max(0, x)$$
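For orientation, here is a minimal sketch of the two activations as classes, assuming NumPy array inputs and the value()/derivative() method names from the marking scheme below. The base Activation class and any constructor arguments in the real operations.py may differ; treat this as an illustration, not the provided interface.

```python
import numpy as np

class Sigmoid:
    """Sketch of g(x) = 1 / (1 + e^(-kx)); extends Activation in the handout."""
    def __init__(self, k=1.0):   # slope parameter k, taken from the formula above
        self.k = k

    def value(self, x):
        return 1.0 / (1.0 + np.exp(-self.k * x))

    def derivative(self, x):
        g = self.value(x)
        return self.k * g * (1.0 - g)   # g'(x) = k * g(x) * (1 - g(x))

class ReLU:
    """Sketch of g(x) = max(0, x)."""
    def value(self, x):
        return np.maximum(0.0, x)

    def derivative(self, x):
        return (x > 0).astype(float)    # subgradient convention: 0 at x = 0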

You will implement the following loss functions and their derivatives:

• Cross entropy loss (for binary classification). Compute the average over all the examples. Note that log() refers to the natural logarithm.

$$\mathcal{L}(\hat{y}, y) = -\frac{1}{n}\sum_{i=1}^{n}\Big(y_i\log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\Big)$$

• Mean squared error loss (for regression):

$$\mathcal{L}(\hat{y}, y) = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}_i - y_i\big)^2$$
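And a corresponding sketch of the two losses, again assuming NumPy arrays and the value()/derivative() names from the marking scheme; the Loss base class and exact broadcasting conventions in operations.py are assumptions here.

```python
import numpy as np

class CrossEntropy:
    """Average binary cross entropy; log is the natural logarithm."""
    def value(self, y_hat, y):
        return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

    def derivative(self, y_hat, y):
        # Elementwise dL/dy_hat, including the 1/n factor from the average.
        n = y.shape[0]
        return (-(y / y_hat) + (1 - y) / (1 - y_hat)) / n

class MeanSquaredError:
    """Average squared error for regression."""
    def value(self, y_hat, y):
        return np.mean((y_hat - y) ** 2)

    def derivative(self, y_hat, y):
        n = y.shape[0]
        return 2.0 * (y_hat - y) / n
```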

2.5 Implementation

We have provided three Python files. Please read the detailed comments in the provided files carefully. Note that some functions have already been implemented for you.

1. neural_net.py: Contains an implementation of a NeuralNetwork class. You must implement the forward_pass(), backward_pass(), and update_weights() methods in the NeuralNetwork class. Do not change the function signatures. Do not change anything else in this file!

2. operations.py: Contains multiple classes for multiple activation functions, loss functions, and functions for performance metrics. The activation functions extend a base Activation class and the loss functions extend a base Loss class. You must implement all the blank functions as indicated in this file. Do not change the function signatures. Do not change anything else in this file!

3. train_experiment.py: Provides a demonstration of how to define a NeuralNetwork object and train it on one of the provided datasets. Feel free to change this file as you desire.

Please complete the following tasks.

(a) Implement the empty functions in neural_net.py and operations.py. Zip and submit these two files on Marmoset.

Please do not invoke any numpy random operations in neural_net.py and operations.py. This may interfere with the automatic grading.


Marking Scheme: (52 marks)

Unit tests for neural_net.py:

• NeuralNetwork.forward_pass(): (1 public test + 1 secret test) * 6 marks = 12 marks
• NeuralNetwork.backward_pass(): (1 public test + 1 secret test) * 6 marks = 12 marks
• NeuralNetwork.update_weights(): (1 public test + 1 secret test) * 6 marks = 12 marks

Unit tests for operations.py:

• Sigmoid.value(): (1 public test + 1 secret test) * 1 mark = 2 marks
• Sigmoid.derivative(): (1 public test + 1 secret test) * 1 mark = 2 marks
• ReLU.value(): (1 public test + 1 secret test) * 1 mark = 2 marks
• ReLU.derivative(): (1 public test + 1 secret test) * 1 mark = 2 marks
• CrossEntropy.value(): (1 public test + 1 secret test) * 1 mark = 2 marks
• CrossEntropy.derivative(): (1 public test + 1 secret test) * 1 mark = 2 marks
• MeanSquaredError.value(): (1 public test + 1 secret test) * 1 mark = 2 marks
• MeanSquaredError.derivative(): (1 public test + 1 secret test) * 1 mark = 2 marks

Once you have implemented the functions, you can train the neural networks on the two provided datasets. The bank note forgery dataset is in data/banknote_authentication.csv and the wine quality dataset is in data/wine_quality.csv. In train_experiment.py, we have provided some code to instantiate a neural network and train on an entire dataset. Implement the required functions and then complete the next activities.

(b) Execute k-fold cross validation for the banknote forgery dataset with k = 5. Use the sigmoid activation function for your output layer. Report the number of layers, the number of neurons in each layer, and the activation functions you used for your hidden layers. Train for 1000 epochs in each trial and use η = 0.01.

To perform cross validation, randomly split the data into 5 folds. For each fold, train the model on the remaining data and determine the trained model's accuracy on the validation set after training is complete. You can use NeuralNetwork.evaluate() to determine the accuracy on the validation set (i.e. fold).

Produce a plot where the x-axis is the epoch number and the y-axis is the average training loss across all experiments for the current epoch. Report the average and standard deviation of the accuracy on the validation set over each experiment.

For example, for your first fold, 80% of the examples should be in the training set and 20% of the examples should be in the validation set (i.e. fold 1). You will require the loss obtained after executing the forward pass for each of the 1000 epochs. After the model has trained, use the trained model to calculate the accuracy on the validation set. This is one experiment. You will need to run this experiment 5 times in total, plotting the average loss at epoch i for each epoch. You will report the average and standard deviation of the accuracy achieved on the validation set during each experiment. A sketch of this experiment loop is given below.
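One way to structure this experiment is sketched below, reusing the train() sketch from Section 2.3. The fold-splitting uses ordinary NumPy randomness, which is allowed here since the no-randomness rule applies only to neural_net.py and operations.py; build_net and the evaluate() signature are assumptions for illustration, so adapt them to the provided train_experiment.py.

```python
import numpy as np

def k_fold_cv(X, y, build_net, loss_fn, k=5, eta=0.01, n_epochs=1000, seed=0):
    """5-fold cross validation (sketch). Returns the per-epoch training loss
    averaged over folds, plus the mean and std of the validation metric."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)

    loss_curves, val_scores = [], []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])

        net = build_net()   # fresh, randomly initialized network per fold
        loss_curves.append(train(net, X[train_idx], y[train_idx],
                                 loss_fn, eta, n_epochs))

        # evaluate() is assumed to return accuracy here (MAE in part (c)).
        val_scores.append(net.evaluate(X[val_idx], y[val_idx]))

    avg_loss_per_epoch = np.mean(loss_curves, axis=0)   # y-values for the plot
    return avg_loss_per_epoch, np.mean(val_scores), np.std(val_scores)
```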

(c) Execute k-fold cross validation for the wine quality dataset with k = 5. Use the Identity activation function for your output layer. Report the number of layers, the number of neurons in each layer, and the activation functions you used for your hidden layers. Train for 1000 epochs in each trial and use η = 0.001.

To perform cross validation, randomly split the data into 5 folds. For each fold, train the model on the remaining data and determine the trained model’s mean absolute error on the fold. You can use NeuralNetwork.evaluate() to determine the mean absolute error on the validation set (i.e. fold).

Produce a plot where the x-axis is the epoch number and the y-axis is the average training loss across all experiments for the current epoch. Report the average and standard deviation of the mean absolute error on the validation set over each experiment.
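The only substantive change from part (b) is the validation metric. If NeuralNetwork.evaluate() does not already report it for regression, mean absolute error is simply:

```python
import numpy as np

def mean_absolute_error(y_hat, y):
    # Average absolute deviation between predictions and targets.
    return np.mean(np.abs(y_hat - y))
```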

Marking Scheme for part (b): (6 marks)

• (4 marks) Reasonably correct plot.
• (2 marks) Reasonable accuracy (average and standard deviation).

Marking Scheme for part (c): (7 marks)

• (5 marks) Reasonably correct plot.
• (2 marks) Reasonable mean absolute error (average and standard deviation).
