    [밑시딥2] RNN

August 15, 2024 · Less than 1 minute read

Thinking about what "Recurrent" means in RNN (Recurrent Neural Network), we can roughly infer that the model has a looping structure: the hidden state produced at one time step is fed back in as input to the next.
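
To make the loop concrete, below is a minimal NumPy sketch of a single RNN cell applied repeatedly over a sequence, in the spirit of the book's implementations (the names `Wx`, `Wh`, `b` and the sizes are assumptions for illustration, not the book's exact code). The same weights are reused at every step, and the hidden state `h` loops back in as the next step's input; that reuse is the "recurrence."

```python
import numpy as np

# Hypothetical sizes for illustration only.
D, H, T = 3, 4, 5  # input dim, hidden dim, sequence length

rng = np.random.default_rng(0)
Wx = rng.standard_normal((D, H)) * 0.1  # input-to-hidden weights
Wh = rng.standard_normal((H, H)) * 0.1  # hidden-to-hidden weights (the recurrent part)
b = np.zeros(H)                         # bias

def rnn_step(x, h_prev):
    """One RNN time step: h_t = tanh(h_{t-1} @ Wh + x_t @ Wx + b)."""
    return np.tanh(h_prev @ Wh + x @ Wx + b)

xs = rng.standard_normal((T, D))  # a toy input sequence
h = np.zeros(H)                   # initial hidden state
for t in range(T):
    h = rnn_step(xs[t], h)  # h from step t feeds back into step t+1
```

Note that only one set of weights (`Wx`, `Wh`, `b`) exists no matter how long the sequence is; unrolling this loop over time gives the familiar chain-of-cells picture of an RNN.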



Tags: NLP, 밑시딥

Category: NLP

Updated: August 15, 2024
