Face Attendance System in Python

Rohit Raj
3 min read · May 16, 2024

--

In this tutorial, I will show how to build a face attendance system in Python using open-source models.

I will use the YOLOv8 library to detect a person in the image. I will then use the DeepFace library in Python to create embeddings for the face of the person in each frame of the video. These will be compared with the face embeddings of the company's employees. Detection times will be saved in a SQLite database.

Building a face attendance system in Python involves the following steps:

1. First, import all the necessary libraries
import os
import sqlite3

import cv2
import numpy as np
from PIL import Image
from ultralytics import YOLO
from deepface import DeepFace
from deepface.modules import verification
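The YOLOv8 detection model used in step 7 also needs to be loaded. The original post does not show this line, so the weight file used here (yolov8n.pt) is an assumption; any YOLOv8 detection checkpoint will work.

# load a pretrained YOLOv8 detection model (yolov8n.pt is assumed; use any YOLOv8 checkpoint)
model = YOLO('yolov8n.pt')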

2. I used the OpenCV library to read the video file


video_path = r'path to video file'

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS) #fps of video

I extract the frames per second (FPS) of the video since I will check for a person once every second. For example, with a 30 FPS video, every 30th frame is analyzed.

3. Create a SQLite database to store the detections


# Create/Connect to the Database
conn = sqlite3.connect('mydatabase.db')
cursor = conn.cursor()

# Create a Table
cursor.execute('''
CREATE TABLE IF NOT EXISTS attendance (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    time INTEGER
)
''')
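As a quick check (not part of the original code), you can confirm that the table was created and inspect any rows already stored:

# list the tables in the database and any attendance rows already stored
print(cursor.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
print(cursor.execute('SELECT * FROM attendance').fetchall())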

4. Save images of all employees in a folder, naming each file after the employee it shows; see the example layout below.
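For example, a folder with two employees might look like this (the file names here are hypothetical):

employees/
    alice.jpg
    bob.jpg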

5. Save the embeddings of the employees in a list, along with their names

# save target embeddings
model_name = 'Facenet512'

testfolder = r'path to folder containing employee images'
emlist = []
for file in os.listdir(testfolder):
    if file.endswith(('.jpg', '.jpeg')):
        embedding = DeepFace.represent(os.path.join(testfolder, file), model_name=model_name)[0]['embedding']
        emlist.append([file.split('.')[0], embedding])
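A small sanity check (not in the original post) to confirm the embeddings were loaded:

# print the employee names and the embedding dimension (512 for Facenet512)
print([name for name, _ in emlist])
print(len(emlist[0][1]))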

6. I wrote the following function to compare the crop of a person from the video with the embeddings of the employees' faces, to detect whether the person is an employee

def calldeepface(frame):
    try:
        # embedding of the face found in the cropped frame
        inputembedding = DeepFace.represent(frame, model_name=model_name)[0]['embedding']
        # distance threshold recommended by DeepFace for this model and metric
        threshold = verification.find_threshold(model_name, 'cosine')
        for filename, targetembedding in emlist:
            distance = verification.find_cosine_distance(inputembedding, targetembedding)
            distance = np.float64(distance)
            print(distance, threshold)
            if distance <= threshold:
                return filename
        return None
    except Exception:
        # no face detected in the crop, or embedding failed
        return None
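As a quick illustration (the image path below is a hypothetical example, not from the original post), the function can be tested on a single photo:

test_image = cv2.imread(r'path to a test photo')  # hypothetical test image
print(calldeepface(test_image))  # prints the matched employee name, or None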

7. Next, I iterate over the frames of the video in the following code. One frame per second is analyzed. The frame is first passed to the YOLOv8 object detection model. For each detection of a person in a frame of the video, the image of the person is cropped from the frame. The face embedding of that person is compared with the saved target embeddings. In case of a match, the details are saved in the SQLite database.

frame_count = 0
second_count = 0
oldperson = None
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    if frame_count % int(fps) == 0:
        # run YOLOv8 for the person class (class 0) with a minimum confidence of 0.8
        results = model([frame], classes=[0], conf=0.8)  # returns a list of Results objects
        if len(results) > 0:
            # Process results list
            for result in results:
                boxes = result.boxes  # Boxes object for bounding box outputs
                if len(boxes) == 0:
                    continue
                # alternative: crop with PIL instead of NumPy slicing
                # img = Image.fromarray(frame)
                # crop = img.crop(boxes.xyxy[0].numpy())
                # person = calldeepface(np.array(crop))
                boxes = boxes.xyxy[0].numpy()  # first detected person in this result
                crop = frame[int(boxes[1]):int(boxes[3]), int(boxes[0]):int(boxes[2])]
                person = calldeepface(crop)
                if person is not None and person != oldperson:
                    cursor.execute('INSERT INTO attendance (name, time) VALUES (?, ?)', (person, second_count))
                    conn.commit()
                    oldperson = person
                    print(person, second_count)
        second_count += 1
        if second_count > 100:  # stop after analyzing 100 seconds of video
            break

    frame_count += 1

cap.release()
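The original post does not show it, but it is good practice to print the recorded detections and close the database connection when processing finishes; a minimal sketch:

# list the recorded detections and close the connection
for name, t in cursor.execute('SELECT name, time FROM attendance').fetchall():
    print(name, t)
conn.close()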

The complete code is given below

I am analyzing one frame per second, but you can increase this rate based on your hardware. Instead of reading from a saved video, you can easily modify the code to read from a CCTV camera, as shown below.
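For example (the RTSP URL below is just a placeholder), OpenCV can read directly from a webcam or an IP/CCTV camera stream:

cap = cv2.VideoCapture(0)  # default webcam
# cap = cv2.VideoCapture('rtsp://user:password@camera-ip:554/stream')  # placeholder RTSP URL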
