AI Graduate School Major Interview Prep Material (approx. 150 questions)
Introduction
This material covers the "AI Graduate School Major Interview Prep Material (approx. 150 questions)".
Table of Contents
ML/DL Theory
Linear Algebra
Statistics
Body
The detailed table of contents is as follows.
AI, ML, DL
When should machine learning/deep learning be used?
Why has deep learning attracted so much attention?
CPU VS GPU
Bias/Variance
Inductive Bias
The curse of dimensionality
Why split data into Train, Valid, and Test sets?
k-fold cross validation
Role of weight and bias
chain rule
Partial derivative
derivative, gradient, Jacobian, Hessian
Overfitting
Underfitting
Regularization
Optimizer
dropout
Supervised learning VS Unsupervised learning
Semi-Supervised Learning
Self-supervised learning
Parametric VS Non-Parametric
Parameter VS Hyperparameter
Backpropagation
Gradient Descent
Batch Gradient Descent
Stochastic Gradient Descent
momentum
Mini-Batch SGD
Vanishing/Exploding Gradient
Data Normalization
Batch Normalization
Data augmentation
Principal Component Analysis (PCA)
Loss function
KL-divergence
Entropy
Cross-Entropy
Activation function
sigmoid
Softmax
Why softmax uses the exponential function
Why logarithms are so important in machine learning
ReLU
Leaky ReLU
Weight Initialization
Confusion Matrix
Accuracy
Recall(Sensitivity)
Precision
Recall-Precision trade-off
F1-score
Classification VS Regression
Binary classification VS Multiclass
Linear regression
Logistic regression
kNN
Clustering, K-means
Decision Tree
Support Vector Machine
Ensemble
bagging, boosting, stacking
Bootstrapping
CNN
RNN
GRU
LSTM
GAN
Seq2seq
Attention
Transformer
Vision Transformer (ViT)
ELMo(Embedding from Language Model)
BERT(BiDirectional Encoder Representations from Transformers)
GPT VS BERT
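Several of the ML/DL topics above (Softmax, why softmax uses the exponential, why logarithms matter, Cross-Entropy) fit in one minimal NumPy sketch; the specific logits and target below are illustrative assumptions, not part of the original material.

```python
import numpy as np

def softmax(z):
    # Subtracting the max keeps exp() from overflowing; the output is
    # unchanged because softmax is invariant to adding a constant to
    # every logit.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(p_true, p_pred, eps=1e-12):
    # The log turns a product of probabilities into a sum and heavily
    # penalizes confident wrong predictions.
    return -np.sum(p_true * np.log(p_pred + eps))

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)                 # non-negative, sums to 1
target = np.array([1.0, 0.0, 0.0])      # one-hot true class
loss = cross_entropy(target, probs)     # equals -log(predicted prob of true class)
```

With a one-hot target the cross-entropy reduces to the negative log-probability assigned to the true class, which is why it pairs naturally with softmax outputs.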
Vector VS Matrix
Norm
Transpose Matrix
Identity Matrix
Inverse Matrix
Similar Matrix
Gauss-Jordan Elimination
LU decomposition
How do you compute an inverse matrix?
Vector Space, Subspace
Unit Vector VS Basis Vector
Linear Combination
Span
Linearly dependent
Linearly Independent
Basis
Dimension
Inner Product(Dot product)
Cross product
Linear Transformation
Null space(Kernel)
Column Space
Determinant
Rank
Matrix multiplication
Orthogonal, Orthonormal
Four Fundamental Subspaces
Eigen vector, Eigen Value
Least Square
Gram-Schmidt Orthogonalization
QR Decomposition
EVD (Eigenvalue Decomposition, spectral decomposition)
diagonalizable
SVD (Singular Value Decomposition)
Covariance matrix
Principal Component Analysis (PCA)
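The last few linear-algebra topics (Covariance matrix, EVD, SVD, PCA) connect in one standard identity: the right singular vectors of centered data are the eigenvectors of its covariance matrix, and the squared singular values divided by n-1 are its eigenvalues. A small NumPy sketch on synthetic data (the random data here is an illustrative assumption) checks this numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 features
Xc = X - X.mean(axis=0)                # PCA starts by centering the data
n = len(Xc)

# Route 1: eigendecomposition (EVD) of the sample covariance matrix
C = Xc.T @ Xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order

# Route 2: SVD of the centered data itself
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # S in descending order

# The two routes agree: eigenvalues of C equal S**2 / (n - 1),
# and the rows of Vt are the principal directions (up to sign).
Z = Xc @ Vt.T                          # data projected onto principal axes
```

In practice the SVD route is preferred: it avoids forming the covariance matrix explicitly and is numerically more stable.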
Probability VS Likelihood
Trial, Event, Sample Space
Expected Value
Random Variable
Binomial Distribution
Normal Distribution (Gaussian Distribution)
Bernoulli Trial
Independent VS Dependent
uncorrelated VS independence
Probability Density Function (PDF)
Conditional Probability (conditional distribution, conditional pdf)
Bayes' Theorem
Prior Probability, Posterior Probability
Marginal Probability
Joint Probability Distribution
Difference between Mean and Median
Standard deviation VS Variance
Covariance VS Correlation
Z-score (standard score)
Law of Large Numbers
Central Limit Theorem
Difference between the Law of Large Numbers and the Central Limit Theorem
Maximum Likelihood Estimation (MLE)
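The closing statistics topics (Bernoulli trial, likelihood, Law of Large Numbers, MLE) can be tied together in one sketch: for Bernoulli data, maximizing the log-likelihood recovers the sample mean, which by the Law of Large Numbers approaches the true parameter. The true p = 0.7 and the grid search below are illustrative assumptions; the closed-form MLE is simply the sample mean.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.binomial(n=1, p=0.7, size=10_000)   # 10,000 Bernoulli(0.7) trials

def log_likelihood(p, x):
    # Sum of log Bernoulli pmfs: x*log(p) + (1-x)*log(1-p)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Grid search over candidate p. Because the log-likelihood is concave
# with its maximum at the sample mean, the grid argmax lands on the
# grid point nearest the sample mean (the closed-form MLE).
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_likelihood(p, data) for p in grid])]
```

Taking the log before maximizing is what makes this tractable: the product of 10,000 probabilities underflows to zero in floating point, while the sum of their logs does not.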