Volume 33, Issue 15 e5583
SPECIAL ISSUE PAPER

Deep hashing based on triplet labels and quantitative regularization term with exponential convergence

Zhuotong Liu

School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
Chen Li (Corresponding Author)

School of Software Engineering, Xi'an Jiaotong University, Xi'an, China

Correspondence: Chen Li, School of Software Engineering, Xi'an Jiaotong University, Xi'an 710049, China. Email: [email protected]
Lihua Tian

School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
First published: 01 December 2019
Citations: 1

Summary

Deep-network-based hashing has achieved outstanding performance in data storage and retrieval in recent years and is therefore widely applied to large-scale image retrieval. However, most previous approaches have overlooked the significant effect that quantization error has on the hash model during learning. Furthermore, a saturated loss function can cause images with large differences to be mapped to similar hash codes, and the underuse of classification information during training further degrades the retrieval performance of the learned hash codes. In this paper, we propose a novel quantitative regularization term with an exponential convergence rate that minimizes the impact of quantization error on the model and accelerates the convergence of the network. To resolve the saturation problem during training, we propose a new sigmoid function whose slope parameter changes automatically with the number of iterations. To exploit classification information fully, triplet labels and image labels are used in parallel within the same framework by integrating the image labels into the output layer. Experimental results show that our algorithm outperforms several state-of-the-art hashing methods on two standard datasets.
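The abstract names three ingredients: a quantization regularizer with exponentially growing pressure toward binary codes, a sigmoid whose slope grows with the iteration count, and a triplet-label loss. The sketch below is only an illustration of how such components could look; the hyperparameter names (`base_slope`, `growth`, `margin`) and the specific penalty `exp((|u| - 1)^2) - 1` are assumptions chosen to share the stated properties, not the authors' exact formulation.

```python
import numpy as np

def adaptive_sigmoid(x, iteration, base_slope=1.0, growth=1e-4):
    """Sigmoid with an iteration-dependent slope (illustrative schedule).

    A slope that grows during training sharpens the activation over time,
    which counters the saturation issue described in the abstract."""
    beta = base_slope + growth * iteration
    return 1.0 / (1.0 + np.exp(-beta * x))

def quantization_penalty(u):
    """Illustrative quantization regularizer.

    exp((|u| - 1)^2) - 1 is zero exactly when every continuous code lies
    on a binary target in {-1, +1}, and its gradient grows exponentially
    with the distance from those targets."""
    return np.mean(np.exp((np.abs(u) - 1.0) ** 2) - 1.0)

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard margin-based triplet loss on continuous hash codes."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

In a full training loop, the triplet loss and the quantization penalty would typically be combined as a weighted sum, with the adaptive sigmoid applied to the network's output layer.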
