H13-311_V3.0 Practice Test



Speech recognition refers to the recognition of audio data as text data.

A. TRUE
B. FALSE

Answer: A. TRUE


A 32*32 input image is convolved with a 5*5 kernel at a stride of 1. The size of the output image is:

A. 28*23
B. 28*28
C. 29*29
D. 23*23

Answer: B. 28*28
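
With no padding, the output side length is (input - kernel) / stride + 1 = (32 - 5) / 1 + 1 = 28. A minimal sketch verifying this with PyTorch (the framework and the single-channel shape are illustrative choices, not part of the question):

```python
import torch
import torch.nn as nn

# No padding, 5*5 kernel, stride 1: output side = (32 - 5) // 1 + 1 = 28
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=5, stride=1)
x = torch.randn(1, 1, 32, 32)   # one single-channel 32*32 image
y = conv(x)
print(y.shape)                  # torch.Size([1, 1, 28, 28])
```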



Faced with the challenge of efficient distributed training for ultra-large-scale models, how does MindSpore handle it?

A. Automatic parallel
B. Serial
C. Manual parallel

Answer: A. Automatic parallel
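
A minimal sketch of enabling MindSpore's automatic parallel mode, assuming the MindSpore 1.x context API; the device count and device target are illustrative assumptions, not part of the question:

```python
from mindspore import context
from mindspore.context import ParallelMode

# Illustrative assumption: 8 Ascend devices, graph mode.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# Automatic parallel: MindSpore searches for a parallelization strategy
# (operator-level model parallelism combined with data parallelism)
# instead of requiring the user to shard the model manually.
context.set_auto_parallel_context(
    parallel_mode=ParallelMode.AUTO_PARALLEL,
    device_num=8,
)
```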



On-Device Execution means that the entire graph is offloaded to and executed on the device, so the computing power of the Ascend chip is fully utilized; this greatly reduces interaction overhead and thereby increases accelerator utilization. Which of the following descriptions of On-Device execution is wrong?

A. MindSpore achieves decentralized autonomous AllReduce through adaptive graph optimization driven by gradient data; gradient aggregation stays in step, and computation and communication are fully pipelined.

B. Challenges of model execution under the chip's super computing power: the memory-wall problem, high interaction overhead, and difficulty in feeding data. When execution happens partly on the host and partly on the device, the interaction overhead can even far exceed the execution overhead, resulting in low accelerator utilization.

C. Through chip-oriented deep graph optimization, MindSpore reduces synchronization waits and maximizes the parallelism of data, computation, and communication; training performance is on par with the host-side graph scheduling method.

D. The challenge of distributed gradient aggregation under the chip's super computing power: with ResNet50 at 20 ms per iteration, central control brings synchronization overhead and frequent synchronization brings communication overhead. Traditional methods need three synchronizations to complete AllReduce, whereas the data-driven method performs AllReduce autonomously with no control overhead.

Answer: C. Through chip-oriented deep graph optimization, MindSpore reduces synchronization waits and maximizes the parallelism of data, computation, and communication; training performance is on par with the host-side graph scheduling method.



Which of the following features does PyTorch not have?

A. Built-in Keras
B. Support for dynamic graphs
C. Automatic differentiation
D. GPU acceleration

Answer: A. Built-in Keras
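
A minimal sketch of the three features PyTorch does offer in this question (dynamic graphs, automatic differentiation, GPU acceleration); the tensor shapes are arbitrary illustration:

```python
import torch

# GPU acceleration when a CUDA device is available
device = "cuda" if torch.cuda.is_available() else "cpu"

# Dynamic graph: the computation graph is built on the fly as operations run
x = torch.randn(4, 3, device=device, requires_grad=True)
w = torch.randn(3, 2, device=device, requires_grad=True)
loss = (x @ w).sum()

# Automatic differentiation
loss.backward()
print(w.grad.shape)   # torch.Size([3, 2])
```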



