Question Paper Name: 5324 Neural Networks and Deep Learning 30th June 2019 Shift 1
Subject Name: Neural Networks and Deep Learning
Creation Date: 2019-06-30 13:01:46
--- Content provided by FirstRanker.com ---
Duration: 180
Total Marks: 100
Display Marks: Yes
Neural Networks and Deep Learning
Group Number: 1
Group Id: 489994230
Group Maximum Duration: 0
Group Minimum Duration: 120
Revisit allowed for view?: No
Revisit allowed for edit?: No
Break time: 0
Group Marks: 100
Neural Networks and Deep Learning
Section Id: 489994286
Section Number: 1
Section type: Online
Mandatory or Optional: Mandatory
Number of Questions: 29
Number of Questions to be attempted: 29
Section Marks: 100
Display Number Panel: Yes
Group All Questions: No
Sub-Section Number: 1
Sub-Section Id: 489994312
Question Shuffling Allowed: Yes
Question Number: 1 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
What is the secret of the success of deep learning?
- The vanishing gradient problem no longer exists
- The number of weights in the network has grown to the order of billions
- Brute computing power using GPUs has grown manifold
- Machine intelligence has almost become equivalent to human intelligence
Options:
1.1
2.2
3.3
4.4
Question Number: 2 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
When is a cell said to fire?
- If the potential of the cell body reaches a steady threshold value
- If there is an impulse reaction
- During the upbeat of the heart
- None of the above
Options:
1.1
2.2
3.3
4.4
Question Number: 3 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Who developed the first learning machine in which connection strengths could be adapted automatically?
- McCulloch-Pitts
- Marvin Minsky
- Hopfield
- Rosenblatt
Options:
1.1
2.2
3.3
4.4
Question Number: 4 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Who proposed the first perceptron model in 1958?
- McCulloch-Pitts
- Marvin Minsky
- Hopfield
- Rosenblatt
Options:
1.1
2.2
3.3
4.4
Question Number: 5 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
What does the sign of a weight indicate?
- Excitatory input
- Inhibitory input
- It can be either excitatory or inhibitory
- None of the above
Options:
1.1
2.2
3.3
4.4
Question Number: 6 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
A competitive learning net is used for:
- Pattern grouping
- Pattern storage
- Pattern grouping or storage
- None of the above
Options:
1.1
2.2
3.3
4.4
Question Number: 7 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Which loss function out of the following is best for image classification (one class out of many) using deep neural network?
- Categorical cross entropy
- Mean squared error loss
- Binary cross entropy
- Hinge loss
Options:
1.1
2.2
3.3
4.4
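Categorical cross-entropy for single-label, one-hot-encoded targets can be illustrated with a minimal numpy sketch (all function and variable names below are mine, not from the paper; the clipping constant is a common numerical-stability convention):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy for one-hot targets, averaged over the batch."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# One sample, three classes: the true class is index 1.
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.1, 0.8, 0.1]])
loss = categorical_cross_entropy(y_true, y_pred)  # -log(0.8) ~ 0.223
```

Only the predicted probability of the true class contributes, which is why this loss pairs naturally with a softmax output layer for one-class-out-of-many image classification.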
Question Number: 8 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
The problem that a CNN with 50 layers performs worse than a CNN with 30 layers on both training and test data can be effectively resolved by using
- Regularization
- Validation set
- ResNet architecture
- Softmax activation
Options:
1.1
2.2
3.3
4.4
Question Number: 9 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
In the Delta Rule for mean squared error minimization, weights are adjusted in proportion to
- the changes in the output vector
- the difference between desired output and actual output
- the difference between input and output of the multi-layered perceptron
- None of the above
Options:
1.1
2.2
3.3
4.4
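The Delta Rule update for a linear unit, Δw = η(d − y)x, scales the weight change by the difference between the desired output d and the actual output y. A minimal numpy sketch (names, learning rate, and the toy target are illustrative, not from the paper):

```python
import numpy as np

def delta_rule_step(w, x, d, eta=0.1):
    """One Delta Rule update: w <- w + eta * (d - y) * x."""
    y = np.dot(w, x)  # linear unit output
    return w + eta * (d - y) * x

w = np.zeros(2)
x = np.array([1.0, 2.0])
for _ in range(200):
    w = delta_rule_step(w, x, d=3.0)
y = np.dot(w, x)  # converges toward the desired output 3.0
```

Repeated application drives the error (d − y) toward zero, which is the gradient-descent view of mean-squared-error minimization on a single linear unit.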
Question Number: 10 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
What is a "hidden layer" of a neural network?
- The last layer of neurons, which is hidden from the inputs.
- One of the middle layers, which aren't directly connected to either inputs or outputs.
- A group of neurons with zero weights, which have effectively hidden themselves from the rest of the network.
- A layer of neurons that's in control of turning inputs on or off, which affects how training is accomplished.
Options:
1.1
2.2
3.3
4.4
Question Number: 11 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Given variables: (i) the present input; (ii) the previous cell state; (iii) the previous output; (iv) the previous hidden state; the response of a traditional LSTM depends on:
- (i), (ii), (iii)
- (i), (ii), (iii), (iv)
- (i), (ii), (iv)
- (i), (iii), (iv)
Options:
1.1
2.2
3.3
4.4
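In a traditional LSTM the previous output and the previous hidden state are the same vector, and one step reads the present input, the previous cell state, and the previous hidden state. A single step can be sketched as follows (a minimal numpy illustration; all names and shapes are mine, and bias terms are omitted for brevity):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_cell(x, h_prev, c_prev, params):
    """One LSTM step over the concatenated input [x, h_prev]."""
    Wf, Wi, Wo, Wc = params
    z = np.concatenate([x, h_prev])
    f = sigmoid(Wf @ z)                   # forget gate
    i = sigmoid(Wi @ z)                   # input gate
    o = sigmoid(Wo @ z)                   # output gate
    c = f * c_prev + i * np.tanh(Wc @ z)  # new cell state
    h = o * np.tanh(c)                    # new hidden state = output
    return h, c

rng = np.random.default_rng(1)
n = 2
params = tuple(rng.standard_normal((n, 2 * n)) * 0.1 for _ in range(4))
h, c = lstm_cell(rng.standard_normal(n), np.zeros(n), np.zeros(n), params)
```

Note that `h` serves as both the output and the hidden state carried to the next step, which is why (iii) and (iv) in the question coincide for a traditional LSTM.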
Question Number: 12 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Which of the following is correct regarding Gated Recurrent Units (GRUs)?
- It combines the forget and input gates into a single "update gate."
- It merges the cell state and hidden state
- It takes the previous hidden state multiplied by reset gate as an input.
- All of these
Options:
1.1
2.2
3.3
4.4
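The GRU properties listed in this question (a single update gate, a merged cell/hidden state, and a reset-gated previous hidden state) can be sketched in one step (a minimal numpy illustration; all names and shapes are mine, and bias terms are omitted):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h_prev, params):
    """One GRU step: update gate z, reset gate r, one merged state h."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h_prev)       # single "update gate"
    r = sigmoid(Wr @ x + Ur @ h_prev)       # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate uses r * h_prev
    return (1.0 - z) * h_prev + z * h_cand  # no separate cell state

rng = np.random.default_rng(0)
n = 3
params = tuple(rng.standard_normal((n, n)) * 0.1 for _ in range(6))
h = gru_cell(rng.standard_normal(n), np.zeros(n), params)
```

All three listed behaviors appear in the sketch, which is why "All of these" describes the GRU.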
Question Number: 13 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Please mark the correct weight update law for an RBM. Here, W stands for the weights of the RBM, b for the hidden-unit bias vector, c for the visible-layer bias vector, η = 1 is the learning rate, and x̃(t) denotes the reconstructed (negative-phase) sample.
- W = W + η(h(x(t))x(t)ᵀ − h(x̃(t))x̃(t)ᵀ); b = b + η(h(x(t)) − h(x̃(t))); c = c + η(x(t) − x̃(t))
- W = W − η(h(x(t))x(t)ᵀ − h(x̃(t))x̃(t)ᵀ); b = b + η(h(x(t)) − h(x̃(t))); c = c − η(x(t) − x̃(t))
- W = W + η(h(x(t))x(t)ᵀ − h(x̃(t))x̃(t)ᵀ); b = b − η(h(x(t)) − h(x̃(t))); c = c + η(x(t) − x̃(t))
- W = W − η(h(x(t))x(t)ᵀ − h(x̃(t))x̃(t)ᵀ); b = b − η(h(x(t)) − h(x̃(t))); c = c − η(x(t) − x̃(t))
Options:
1.1
2.2
3.3
4.4
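A contrastive-divergence (CD-1) style update of this form can be sketched as follows (a minimal numpy illustration assuming binary units and a mean-field reconstruction; all names are mine, not from the paper):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_update(W, b, c, x, rng, eta=1.0):
    """One CD-1 step for a binary RBM.
    W: hidden-by-visible weights, b: hidden bias, c: visible bias."""
    h_prob = sigmoid(W @ x + b)                     # h(x), positive phase
    h_sample = (rng.random(h_prob.shape) < h_prob) * 1.0
    x_recon = sigmoid(W.T @ h_sample + c)           # reconstruction x~
    h_recon = sigmoid(W @ x_recon + b)              # h(x~), negative phase
    W = W + eta * (np.outer(h_prob, x) - np.outer(h_recon, x_recon))
    b = b + eta * (h_prob - h_recon)
    c = c + eta * (x - x_recon)
    return W, b, c

W = np.zeros((2, 3)); b = np.zeros(2); c = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])
W, b, c = cd1_update(W, b, c, x, np.random.default_rng(0))
```

Each parameter moves toward the positive-phase statistics of the data and away from the negative-phase statistics of the reconstruction, all with a plus sign on the learning rate.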
Question Number: 14 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
Which deep network was the first to be used for image classification in the ImageNet challenge?
- VGG
- GoogLeNet
- ZFNet
- AlexNet
Options:
1.1
2.2
3.3
4.4
Question Number: 15 Question Type: MCQ Option Shuffling: No Display Question Number: Yes Single Line Question Option: No Option Orientation: Vertical
Correct Marks: 1 Wrong Marks: 0
In the following figure, please mark the correct expression for the error back-propagated to the 1st layer.
Source: ARPIT Last 10 Years 2011-2021 Previous Question Papers