Optimization
It is always important to choose what kind of optimization algorithm to use for training a deep learning model. Depending on the optimization algorithm we use, the mod...
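As a rough illustration of what such an update rule looks like in practice, here is a minimal sketch of SGD with momentum on a toy one-dimensional quadratic; the helper sgd_momentum_step and its hyper-parameters are my own example, not taken from any particular post.

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: accumulate a decaying gradient history, then move."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    g = 2 * (w - 3)
    w, v = sgd_momentum_step(w, g, v)
print(w)  # close to the minimizer 3.0
```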
“BING: Binarized Normed Gradients for Objectness Estimation at 300fps” is an objectness classifier using binarized normed gradients and a linear classifier, w...
“Learning Deep Features for Discriminative Localization” proposed a method to enable the convolutional neural network to have localization ability despite be...
“Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-supervised Object and Action Localization” proposed a weakly-supervised framework to improve ob...
“CBAM: Convolutional Block Attention Module” proposes a simple and effective attention module for CNNs which can be seen as a descendant of Squeeze and Excitatio...
Learn the basics of the Recurrent Neural Network (RNN), its details, and example models of RNN.
“Super-Convergence: very fast training of neural networks using large learning rates” suggests a different learning rate policy called ‘one cycle policy’ whi...
“Network In Network” is one of the most important studies related to convolutional neural networks because of the concepts of 1 by 1 convolution and global average p...
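For readers who want to see the two ideas in code, below is a minimal sketch (assuming TensorFlow/Keras is available) of a tiny model that uses 1 by 1 convolutions and global average pooling; it only illustrates the concepts and is not the paper's actual architecture.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.Conv2D(32, 1, activation="relu"),   # 1x1 convolution: mixes channels per pixel
    layers.Conv2D(10, 1, activation="relu"),   # project down to one feature map per class
    layers.GlobalAveragePooling2D(),           # average each map to a single class score
    layers.Softmax(),
])
model.summary()
```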
Learn the basics of the Convolutional Neural Network (CNN), its details, and example models of CNN.
Capsule Network is a new type of neural network proposed by Geoffrey Hinton and his team and presented at NIPS 2017. As Geoffrey Hinton is one of the Godfathers of Dee...
Sharing my answer code for the MinMaxDivision problem from Codility lesson 14.
Sharing my answer code for the NumberSolitaire problem from Codility lesson 17.
Sharing my answer code for the TieRopes problem from Codility lesson 16.
Sharing my answer code for the MinAbsSumOfTwo problem from Codility lesson 15.
Sharing my answer code for the MaxProfit problem from Codility lesson 9.
Sharing my answer code for the EquiLeader problem from Codility lesson 8.
Sharing my answer code for the MaxProductOfThree problem from Codility lesson 6.
Sharing my answer code for the PassingCars problem from Codility lesson 5.
Sharing my answer code for the MissingInteger problem from Codility lesson 4.
Sharing my answer code for the FrogJmp problem from Codility lesson 3.
Sharing my answer code for the CyclicRotation problem from Codility lesson 2.
Sharing my answer code for the BinaryGap problem from Codility lesson 1.
An autoencoder is an artificial neural network used for unsupervised learning of efficient codings. The aim of an autoencoder is to learn a representation (enco...
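As a minimal sketch of the idea (assuming TensorFlow/Keras is installed; the layer sizes are arbitrary examples), the model below compresses a 784-dimensional input into a 32-dimensional code and is trained to reconstruct its own input.

```python
from tensorflow import keras
from tensorflow.keras import layers

autoencoder = keras.Sequential([
    keras.Input(shape=(784,)),               # e.g. a flattened 28x28 image
    layers.Dense(32, activation="relu"),     # encoder: 784 -> 32 dimensional code
    layers.Dense(784, activation="sigmoid"), # decoder: reconstruct the 784 inputs
])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x, x, ...)  # note: the targets are the inputs themselves
```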
Dropout technique
Generative Adversarial Networks (GAN) is a framework for estimating generative models via an adversarial process by training two models simultaneously. A gen...
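To make the "two models" point concrete, here is a tiny NumPy sketch (my own illustration, not code from the original paper) of the opposing losses; d_real and d_fake stand for the discriminator's probability outputs on real and generated samples.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # the discriminator wants D(x) -> 1 on real data and D(G(z)) -> 0 on fakes
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # the generator wants the opposite: push D(G(z)) toward 1 (non-saturating form)
    return -np.mean(np.log(d_fake))

print(discriminator_loss(np.array([0.9]), np.array([0.1])))  # small: D is winning
print(generator_loss(np.array([0.1])))                       # large: G is being caught
```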
“Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning” is an advanced version of the famous vision model ‘Inception’ from Google. It...
Learn about probability, which is one of the basics of artificial intelligence and deep learning.
Regularization is an important technique for preventing the overfitting problem while training a learning model.
“MS-RMAC: Multiscale Regional Maximum Activation of Convolutions for Image Retrieval” improves the current Maximum Activation of Convolutions (MAC) feature for i...
The paper “Neural Machine Translation By Jointly Learning To Align And Translate”, introduced in 2015, is one of the most famous deep learning papers related to na...
The paper “Attention is all you need” from Google proposes a novel neural network architecture based on a self-attention mechanism that is believed to be particul...
Sharing my answer codes for the Programmers Level 5 problem: Set Align.
Sharing my answer codes for the Programmers Level 5 problem: change124.
Sharing my answer codes for the Programmers Level 4 problem: expressions.
Sharing my answer codes for the Programmers Level 4 problem: findLargestSquare.
Sharing my answer codes for the Programmers Level 3 problem: nlcm.
Sharing my answer codes for the Programmers Level 2 problem: productMatrix.
Faster R-CNN is an object detection network proposed in 2015 that achieved state-of-the-art accuracy on several object detection competitions.
eXplainable Artificial Intelligence (XAI) refers to artificial intelligence models that are able to explain their decisions and actions to human users.
“Re-ID done right: towards good practices for person re-identification” proposes a different approach to using deep networks for the person re-identification task. I...
“Deep image retrieval: learning global representations for image search” proposes an approach for instance-level image retrieval. It was presented in the ECC...
“Squeeze-and-Excitation Networks” suggests a simple and powerful layer block to improve general convolutional neural networks. It was presented in the conferenc...
The TensorFlow team launched the new customizable TensorBoard API on Sept 11, 2017. As the previous TensorBoard API did not include reusable APIs, it was difficu...
Theano will not be maintained after the 1.0 release, as announced by the MILA group.
“U-Net: Convolutional Networks for Biomedical Image Segmentation” is a famous segmentation model not only for biomedical tasks but also for general segmentat...
‘YOLO9000: Better, Faster, Stronger’ proposed an improved version of YOLO, which was presented at the IEEE Conference on Computer Vision and Pattern Recognition i...
‘You Only Look Once: Unified, Real-Time Object Detection’ (YOLO) proposed an object detection model which was presented at the IEEE Conference on Computer Vision...
“Gradient Acceleration in Activation Functions” argues that dropout is not a regularizer but an optimization technique, and proposes a better way to obtain t...
Let’s talk about activation functions in artificial neural networks and some questions related to them.
Learn about Binary Search which is a simple and very useful algorithm whereby many linear algorithms can be optimized to run in logarithmic time.
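A minimal sketch of the algorithm itself (my own example values): each comparison halves the remaining range, which is where the logarithmic running time comes from.

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if it is absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```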
Learn about Dynamic Programming, which is a famous and important technique for solving problems.
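To show the core idea, here is a minimal sketch of memoization on the Fibonacci numbers (a standard textbook example, not tied to any specific post): each subproblem is solved once and reused, turning an exponential recursion into a linear one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """nth Fibonacci number; the cache stores each subproblem so it is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```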
Learn about merge-sort, quick-sort, other sorting algorithms and their running time.
Sharing my answer codes for HackerRank: Fibonacci Modified.
Sharing my answer codes for HackerRank: Bear and Steady Gene.
Sharing my answer codes for HackerRank: Yet Another Minimax Problem.
Approximate inference methods make it possible to learn realistic models from big data by trading off computation time for accuracy, when exact learning and ...
Sharing my answer codes for HackerRank: Array Manipulation.
“Dual Attention Network for Scene Segmentation” improves scene segmentation task performance by attaching a self-attention mechanism. It is still on arXiv and t...
“Harmonious Attention Network for Person Re-Identification” suggests a joint learning of soft pixel attention and hard regional attention for person re-ident...
This post will be about artificial intelligence related terms, including linear algebra, probability distributions, machine learning, and deep learning.
‘Batch Normalization’ is a basic idea of a neural network model which recorded state-of-the-art results (4.82% top-5 test error) in the ImageNet competition...
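For reference, here is a minimal NumPy sketch of what a batch-normalization layer computes at training time (my own illustration; gamma and beta stand for the learnable scale and shift).

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=0)                    # per-feature mean over the mini-batch
    var = x.var(axis=0)                      # per-feature variance over the mini-batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta              # learnable scale and shift

x = np.random.randn(8, 4) * 3.0 + 2.0        # a batch of 8 samples with 4 features
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))         # roughly 0 and 1 for each feature
```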
Learn about tree, tree traversal, binary heap, trie and their running time.
Sharing my answer codes for HackerRank: Is This a Binary Search Tree.
“Extreme clicking for efficient object annotation” proposes a better way to annotate object bounding boxes with four clicks on the object. It is a further re...
“Training object class detectors with click supervision” proposes an efficient way of annotating bounding boxes for object class detectors. It was presented in ...
There have been a lot of attempts to combine the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN) for image-based sequence recognition...
“Improved deep metric learning with multi-class N-pair loss objective” proposes a way to handle the slow convergence problem of contrastive loss and triplet ...
When we solve a machine learning problem, we have to optimize a certain objective function. One such case is the convex optimization problem, which is a pro...
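As a small worked example of a convex objective from machine learning (my own sketch; the data and learning rate are arbitrary), plain gradient descent on a least-squares problem converges to the unique global minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w, lr = np.zeros(3), 0.01
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad

print(w)  # close to [1.0, -2.0, 0.5] because the objective is convex
```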
When we train a deep learning model, we need to set a loss function for minimizing the error. The loss function indicates how much each variable contributes ...
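As a minimal illustration of two loss functions that are commonly minimized (plain NumPy, my own example values): mean squared error for regression and cross-entropy for classification.

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(one_hot, probs, eps=1e-12):
    # one_hot is the true class as a one-hot vector, probs is a predicted distribution
    return -np.sum(one_hot * np.log(probs + eps))

print(mse(np.array([1.0, 2.0]), np.array([1.1, 1.9])))                # about 0.01: predictions are close
print(cross_entropy(np.array([0, 1, 0]), np.array([0.1, 0.8, 0.1])))  # about 0.22: high probability on the true class
```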
Learn about graph, graph representations, graph traversals and their running time.
Sharing my answer codes for HackerRank: Game of Two Stacks.
Sharing my answer codes for HackerRank: Recursive Digit Sum.
Learn about stack, queue, deque, their implementations and running times.
Learn about hash table, hash function, hash code, its implementation and running time.
Sharing my answer codes for HackerRank: Find the Running Median.
TensorFlow is a machine learning framework that Google created and used to design, build, and train deep learning models. It supports complex and heavy numer...
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. This article is about summary and ...
“DeepLab: Deep Labelling for Semantic Image Segmentation” is a state-of-the-art deep learning model from Google for the semantic image segmentation task, where t...
The Feed-Forward Neural Network (FFNN) is the simplest and most basic artificial neural network, which we should know first before talking about other complicated networ...
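For concreteness, here is a minimal NumPy sketch of a feed-forward pass with one hidden layer (the sizes and random weights are arbitrary examples): data flows in one direction only, each layer being a matrix multiply, a bias, and a nonlinearity.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # input (4 features) -> hidden (8 units)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # hidden -> output (2 scores)

x = rng.normal(size=(1, 4))   # one input sample
h = relu(x @ W1 + b1)         # hidden activations
out = h @ W2 + b2             # network output
print(out.shape)              # (1, 2)
```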
Sharing my answer codes for HackerRank: Short Palindrome.
Sharing my answer codes for HackerRank: Lily’s Homework.
This post is a summary and paper skimming on regularization and optimization. So, this post will keep being updated over time.
This post is a summary and paper skimming on image retrieval related research. So, this post will keep being updated over time.
Sharing my answer code for 2. Add Two Numbers on LeetCode.
“Mining Objects: Fully Unsupervised Object Discovery and Localization From a Single Image” focuses on performing unsupervised object discovery and localization...
Research on several vision techniques such as pixel difference and optical flow.
This post is a summary and paper skimming on rotation invariance and equivariance related research. So, this post will keep being updated over time.
This post is a summary and paper skimming on detection and segmentation related research. So, this post will keep being updated over time.
Scale-Invariant Feature Transform (SIFT) is an old algorithm presented in 2004 by D. Lowe, University of British Columbia. However, it is one of the most famous...
Today, I am going to introduce an interesting project, which is ‘Multi-Speaker Tacotron in TensorFlow’. It is a speech synthesis deep learning model to generate...