Building a Face Recognition Platform with Spring Boot + SeetaFace6

cnblogs 2024-10-08 14:39:00

Preface

Recently several of our projects have needed face recognition. Our previous approach was to integrate the Baidu Cloud API, but some upcoming projects will be deployed and used on internal networks. Considering integration complexity, fees, and other factors, we decided to build our own service from open-source components to guarantee stability and reliability.

Project address: https://gitee.com/code2roc/fastface

Design

After researching and comparing several options, we settled on SeetaFace6 + Spring Boot, which integrates seamlessly into our applications.

SeetaFace6 is the latest commercial-grade open-source release from SeetaTech (中科视拓). It covers the core face recognition capabilities — face detection, landmark localization, and face recognition — and adds liveness detection, quality assessment, and age/gender estimation.

Official site: https://github.com/SeetaFace6Open/index

The SDK we integrate against is tracy100's wrapper. It supports JDK 8 through JDK 14 on both Windows and Linux, so there is no deployment work to worry about: just use the JAR and implement your business logic. It also exposes the native objects as Spring beans, so they work out of the box.

Official site: https://github.com/tracy100/seetaface6SDK

The system only needs to provide the basic functions: face registration, face comparison, and face search.

Implementation

Importing the JAR

```xml
<dependency>
    <groupId>com.seeta.sdk</groupId>
    <artifactId>seeta-sdk-platform</artifactId>
    <scope>system</scope>
    <version>1.2.1</version>
    <systemPath>${project.basedir}/lib/seetaface.jar</systemPath>
</dependency>
```
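One caveat worth noting: by default, Maven does not package system-scoped dependencies into the Spring Boot executable JAR. If your build uses the spring-boot-maven-plugin, the local JAR can be pulled in with the `includeSystemScope` option (a sketch; adjust to your own build):

```xml
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <!-- include the system-scoped seetaface.jar in the executable jar -->
        <includeSystemScope>true</includeSystemScope>
    </configuration>
</plugin>
```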

Registering the beans

FaceDetectorProxy is the face detection bean; it detects whether an image contains faces.

FaceRecognizerProxy is the face comparison bean; it computes the similarity between two faces.

FaceLandmarkerProxy is the landmark bean; it locates facial keypoints, with both 5-point and 68-point models available.

```java
@Configuration
public class FaceConfig {

    @Value("${face.modelPath}")
    private String modelPath;

    @Bean
    public FaceDetectorProxy faceDetector() throws FileNotFoundException {
        SeetaConfSetting detectorPoolSetting = new SeetaConfSetting(
                new SeetaModelSetting(0, new String[]{modelPath + File.separator + "face_detector.csta"}, SeetaDevice.SEETA_DEVICE_CPU));
        return new FaceDetectorProxy(detectorPoolSetting);
    }

    @Bean
    public FaceRecognizerProxy faceRecognizer() throws FileNotFoundException {
        SeetaConfSetting recognizerPoolSetting = new SeetaConfSetting(
                new SeetaModelSetting(0, new String[]{modelPath + File.separator + "face_recognizer.csta"}, SeetaDevice.SEETA_DEVICE_CPU));
        return new FaceRecognizerProxy(recognizerPoolSetting);
    }

    @Bean
    public FaceLandmarkerProxy faceLandmarker() throws FileNotFoundException {
        SeetaConfSetting landmarkerPoolSetting = new SeetaConfSetting(
                new SeetaModelSetting(0, new String[]{modelPath + File.separator + "face_landmarker_pts5.csta"}, SeetaDevice.SEETA_DEVICE_CPU));
        return new FaceLandmarkerProxy(landmarkerPoolSetting);
    }
}
```

Before using these beans, the native library must be loaded locally, specifying CPU or GPU mode:

```java
LoadNativeCore.LOAD_NATIVE(SeetaDevice.SEETA_DEVICE_CPU);
```
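The native load should happen exactly once per JVM, even when several beans trigger initialization concurrently. A minimal sketch of a load-once guard — the actual `LoadNativeCore.LOAD_NATIVE(...)` call is replaced by a `Runnable` placeholder so the idiom can be shown without the SDK on the classpath:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: run a native-library load action at most once per process.
// In real code the Runnable would wrap LoadNativeCore.LOAD_NATIVE(...).
public class NativeLoader {
    private static final AtomicBoolean LOADED = new AtomicBoolean(false);

    /** Runs the given loader action at most once; later calls are no-ops. */
    public static boolean loadOnce(Runnable loader) {
        if (LOADED.compareAndSet(false, true)) {
            loader.run(); // e.g. LoadNativeCore.LOAD_NATIVE(SeetaDevice.SEETA_DEVICE_CPU)
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        loadOnce(calls::incrementAndGet);
        loadOnce(calls::incrementAndGet); // ignored, already loaded
        System.out.println("loads=" + calls.get()); // prints loads=1
    }
}
```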

Face detection

```java
public FaceEnum.CheckImageFaceStatus getFace(BufferedImage image) throws Exception {
    SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
    SeetaRect[] detects = faceDetectorProxy.detect(imageData);
    if (detects.length == 0) {
        return FaceEnum.CheckImageFaceStatus.NoFace;
    } else if (detects.length == 1) {
        return FaceEnum.CheckImageFaceStatus.OneFace;
    } else {
        return FaceEnum.CheckImageFaceStatus.MoreFace;
    }
}
```

Face comparison

```java
public FaceEnum.CompareImageFaceStatus compareFace(BufferedImage source, BufferedImage compare) throws Exception {
    float[] sourceFeature = extract(source);
    float[] compareFeature = extract(compare);
    if (sourceFeature != null && compareFeature != null) {
        float calculateSimilarity = faceRecognizerProxy.calculateSimilarity(sourceFeature, compareFeature);
        System.out.printf("similarity: %f%n", calculateSimilarity);
        if (calculateSimilarity >= CHECK_SIM) {
            return FaceEnum.CompareImageFaceStatus.Same;
        } else {
            return FaceEnum.CompareImageFaceStatus.Different;
        }
    } else {
        return FaceEnum.CompareImageFaceStatus.LostFace;
    }
}
```
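`calculateSimilarity` is provided by the SDK; conceptually, comparing two feature vectors usually amounts to a cosine similarity followed by a threshold check (an assumption about the approach, not the SDK's documented implementation). A self-contained sketch of that logic, with a hypothetical `isSamePerson` helper standing in for the `CHECK_SIM` comparison above:

```java
// Sketch: cosine similarity between two feature vectors plus a
// threshold check. Hypothetical helper, not the SDK's actual
// calculateSimilarity implementation.
public class SimilaritySketch {
    static float cosineSimilarity(float[] a, float[] b) {
        float dot = 0f, normA = 0f, normB = 0f;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (float) (Math.sqrt(normA) * Math.sqrt(normB));
    }

    static boolean isSamePerson(float[] a, float[] b, float threshold) {
        return cosineSimilarity(a, b) >= threshold;
    }

    public static void main(String[] args) {
        float[] f1 = {1f, 0f, 1f};
        float[] f2 = {1f, 0f, 1f};
        float[] f3 = {0f, 1f, 0f};
        System.out.println(isSamePerson(f1, f2, 0.62f)); // identical vectors -> true
        System.out.println(isSamePerson(f1, f3, 0.62f)); // orthogonal vectors -> false
    }
}
```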

Face landmarks

```java
private float[] extract(BufferedImage image) throws Exception {
    SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
    SeetaRect[] detects = faceDetectorProxy.detect(imageData);
    if (detects.length > 0) {
        SeetaPointF[] pointFS = faceLandmarkerProxy.mark(imageData, detects[0]);
        return faceRecognizerProxy.extract(imageData, pointFS);
    }
    return null;
}
```

Face database

  • Register

```java
public long registFace(BufferedImage image) throws Exception {
    long result = -1;
    SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
    SeetaRect[] detects = faceDetectorProxy.detect(imageData);
    if (detects.length > 0) {
        SeetaPointF[] pointFS = faceLandmarkerProxy.mark(imageData, detects[0]);
        result = faceDatabase.Register(imageData, pointFS);
        faceDatabase.Save(dataBasePath);
    }
    return result;
}
```

  • Query

```java
public long queryFace(BufferedImage image) throws Exception {
    long result = -1;
    SeetaImageData imageData = SeetafaceUtil.toSeetaImageData(image);
    SeetaRect[] detects = faceDetectorProxy.detect(imageData);
    if (detects.length > 0) {
        SeetaPointF[] pointFS = faceLandmarkerProxy.mark(imageData, detects[0]);
        long[] index = new long[1];
        float[] sim = new float[1];
        result = faceDatabase.QueryTop(imageData, pointFS, 1, index, sim);
        if (result > 0) {
            float similarity = sim[0];
            if (similarity >= CHECK_SIM) {
                result = index[0];
            } else {
                result = -1;
            }
        }
    }
    return result;
}
```
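`QueryTop` delegates the search to the SDK's built-in database; the underlying idea is a scan over stored feature vectors that keeps the best similarity, then applies the threshold. A self-contained sketch of that search against a hypothetical in-memory store (not SeetaFace's actual `FaceDatabase`):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: top-1 search over an in-memory list of feature vectors,
// mirroring the QueryTop + CHECK_SIM logic above. Hypothetical store,
// not SeetaFace's FaceDatabase implementation.
public class FaceIndexSketch {
    private final List<float[]> features = new ArrayList<>();

    /** Registers a feature vector and returns its index. */
    public long register(float[] feature) {
        features.add(feature);
        return features.size() - 1;
    }

    /** Returns the index of the most similar entry above the threshold, or -1. */
    public long queryTop1(float[] probe, float threshold) {
        long bestIndex = -1;
        float bestSim = -1f;
        for (int i = 0; i < features.size(); i++) {
            float sim = cosine(features.get(i), probe);
            if (sim > bestSim) {
                bestSim = sim;
                bestIndex = i;
            }
        }
        return bestSim >= threshold ? bestIndex : -1;
    }

    private static float cosine(float[] a, float[] b) {
        float dot = 0f, na = 0f, nb = 0f;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (float) (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        FaceIndexSketch db = new FaceIndexSketch();
        db.register(new float[]{1f, 0f});
        db.register(new float[]{0f, 1f});
        System.out.println(db.queryTop1(new float[]{0.1f, 0.9f}, 0.62f)); // prints 1
    }
}
```

A real deployment would replace the linear scan with an approximate nearest-neighbor index once the database grows large, but for the small galleries typical of internal-network deployments a scan is usually sufficient.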

  • Delete

```java
public long deleteFace(long index) throws Exception {
    long result = faceDatabase.Delete(index);
    faceDatabase.Save(dataBasePath);
    return result;
}
```

Extensions

We also integrated face-api.js to implement simple liveness checks (open your mouth, shake your head). Accuracy is not very high, so treat this as an optional reference.

Official site: https://github.com/justadudewhohacks/face-api.js

Loading the models

```javascript
Promise.all([
    faceapi.loadFaceDetectionModel('models'),
    faceapi.loadFaceLandmarkModel('models')
]).then(startAnalysis);

function startAnalysis() {
    console.log('Models loaded!');
    var canvas1 = faceapi.createCanvasFromMedia(document.getElementById('showImg'));
    faceapi.detectSingleFace(canvas1).then((detection) => {
        if (detection) {
            faceapi.detectFaceLandmarks(canvas1).then((landmarks) => {
                console.log('Warm-up call succeeded!');
            });
        }
    });
}
```

Opening the camera

```html
<video id="video" muted playsinline></video>
```

```javascript
function AnalysisFaceOnline() {
    var videoElement = document.getElementById('video');
    // Check whether the browser supports the getUserMedia API
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices.getUserMedia({ video: { facingMode: "user" } }) // request the video stream
            .then(function(stream) {
                videoElement.srcObject = stream; // attach the stream to the <video> element
                videoElement.play();
            })
            .catch(function(err) {
                console.error("Camera access error:", err);
            });
    } else {
        console.error("Your browser does not support the getUserMedia API");
    }
}
```

Capturing frames and computing landmarks

```javascript
function videoCaptureInit() {
    video.addEventListener('play', function() {
        function captureFrame() {
            if (!video.paused && !video.ended) {
                // Size the canvas for the captured frame
                canvas.width = 200;
                canvas.height = 300;
                // Draw the current video frame onto the canvas
                context.drawImage(video, 0, 0, canvas.width, canvas.height);
                // The canvas content could also be exported as a data URL
                // and sent to the server for further processing:
                // outputImage.src = canvas.toDataURL('image/png');
                faceapi.detectSingleFace(canvas).then((detection) => {
                    if (detection) {
                        faceapi.detectFaceLandmarks(canvas).then((landmarks) => {
                            // landmark-based liveness checks go here
                        });
                    } else {
                        console.log("no face");
                    }
                });
                // Schedule the next capture (every 100 ms)
                setTimeout(captureFrame, 100);
            }
        }
        captureFrame(); // start capturing frames
    });
}
```


