In collaboration with Dr. Alok Sen and his team at Sadguru Netra Chikitsalaya (SNC), Chitrakoot, one of India's top eye hospitals, we have developed a web-based platform for automated analysis of OCT eye images. The code runs on a remote server, and doctors from the hospital, or anyone around the world, can access it to upload OCT eye images and receive the disease analysis results. The application can also be run locally within the hospital, without direct internet access. We have used only the latest publicly available generative-AI-based SOTA models, with no dataset-specific training of our own.
Currently, the application runs on an administrator-provided username and password; there is no provision for signing up, which keeps application development simple. We have used the Streamlit library to build the web application, with simple username-password authentication gating access to the actual app. The credentials are stored as bcrypt hashes rather than plain text, so access remains protected even if the server hosting the code is compromised.
The code to create the hashed-password pkl file is as follows:
import pickle
from pathlib import Path
import streamlit_authenticator as stauth

# display names and usernames are consumed by the app itself;
# only the passwords need hashing here
users = ["admin"]
usernames = ["admin"]
passwords = ["admin"]

# convert the plain-text passwords to hashed passwords
# (streamlit_authenticator uses bcrypt for password hashing)
hashed_passwords = stauth.Hasher(passwords).generate()

# store the hashes next to this script for the app to load at startup
file_path = Path(__file__).parent / "hashed_passwords.pkl"
with file_path.open("wb") as file:
    pickle.dump(hashed_passwords, file)
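For context, bcrypt embeds a random salt in each hash, so only the hash ever needs to be stored. A minimal standalone sketch of that verification step, using the bcrypt package directly (independent of streamlit_authenticator, which performs the equivalent check internally):
import bcrypt

# bcrypt embeds its random salt in the hash itself, so storing the hash alone
# is enough; the plain-text password is never persisted anywhere
hashed = bcrypt.hashpw(b"admin", bcrypt.gensalt())
assert bcrypt.checkpw(b"admin", hashed)      # correct password verifies
assert not bcrypt.checkpw(b"wrong", hashed)  # wrong password is rejected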
Our input OCT eye images (shared by the medical team) look like the ones below.
From here, we need to find the white spots in the upper black zone, medically known as the vitreous humor.
For this analysis, we have divided our work into three main parts:
- Denoising the image
- Segmenting the specific area of interest
- Thresholding the disease-specific pixels and counting their total area
As the original OCT eye images captured by the medical device are quite noisy, as you can see, the image must be denoised in a pre-processing stage. Rather than building something from scratch, we have used one of the latest state-of-the-art generative architectures, the Dense Residual Swin Transformer for image denoising, from the NTIRE 2023 denoising challenge.
REF:@inproceedings{li2023ntire_dn50, title={NTIRE 2023 Challenge on Image Denoising: Methods and Results}, author={Li, Yawei and Zhang, Yulun and Van Gool, Luc and Timofte, Radu and others}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops}, year={2023} }
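Taken in isolation from the full application code shown later, the denoising stage boils down to loading the pretrained checkpoint and running tiled inference. A minimal sketch, assuming the repo's team15_SAKDNNet module and utility helpers are on the path:
# minimal denoising sketch; module, helper, and checkpoint names are taken
# from the repo used by the full application code below
import torch
import utils_image as util
import utils_model
from team15_SAKDNNet import SAKDNNet as net

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = net(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
model.load_state_dict(torch.load('team15_SAKDNNet.pth', map_location=device), strict=True)
model.eval().to(device)

img_U = util.imread_uint('input.png', n_channels=3)   # H x W x 3, uint8
img_N = util.uint2tensor4(img_U).to(device)           # 1 x 3 x H x W, float
with torch.no_grad():
    # tiled inference keeps GPU memory bounded on large scans
    img_DN = utils_model.inference(model, img_N, refield=64, min_size=512, mode=2)
denoised = util.tensor2uint(img_DN)                   # back to a uint8 image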
Next, to segment the specific area of interest, i.e., the vitreous humor, we have used Meta AI's state-of-the-art generative Segment Anything Model (SAM) with ROI point prompts.
REF:@misc{kirillov2023segment, title={Segment Anything}, author={Alexander Kirillov and Eric Mintun and Nikhila Ravi and Hanzi Mao and Chloe Rolland and Laura Gustafson and Tete Xiao and Spencer Whitehead and Alexander C. Berg and Wan-Yen Lo and Piotr Dollár and Ross Girshick}, year={2023}, eprint={2304.02643}, archivePrefix={arXiv}, primaryClass={cs.CV} }
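SAM needs no task-specific training here: it takes the image plus a handful of foreground point prompts placed inside the vitreous zone and returns a binary mask. A minimal sketch using the official segment_anything API (checkpoint path and prompt coordinates are illustrative):
# minimal SAM point-prompt sketch; prompt coordinates are illustrative
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="vit_b.pth")
sam.to(device="cuda")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2RGB)  # SAM expects RGB
predictor.set_image(image)

# label 1 marks each point as foreground (inside the vitreous zone)
input_point = np.array([[200, 200], [900, 80]])
input_label = np.array([1, 1])
masks, scores, logits = predictor.predict(point_coords=input_point,
                                          point_labels=input_label,
                                          multimask_output=False)
vitreous_mask = masks[0]  # boolean H x W mask of the prompted region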
Next, within that segmented zone, we detect high-intensity regions, medically known as hyper-reflective dots, using pixel-specific intensity thresholds along with basic morphological operations to filter out parts that should not be counted as disease-specific hyper-reflective dots. To calculate the total diseased area, we count the number of white pixels within the segmented binary image.
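In essence, this last stage reduces to masking the denoised grayscale image with the segmented zone, binarizing the bright pixels, and counting them. A hedged sketch, reusing denoised and vitreous_mask from the sketches above (the threshold value 60 matches the application code below):
# hedged sketch of the dot-detection step; `denoised` and `vitreous_mask`
# come from the two sketches above, and the threshold 60 matches the app code
gray = cv2.cvtColor(denoised, cv2.COLOR_RGB2GRAY)
roi = np.where(vitreous_mask, gray, 0).astype(np.uint8)  # zero outside the zone
_, binary = cv2.threshold(roi, 60, 255, cv2.THRESH_BINARY)
disease_area_px = int(np.sum(binary == 255))             # area as a pixel count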
The whole code performing the task is as follows:
import os.path
import pickle
from pathlib import Path

import cv2
import numpy as np
import pandas as pd
import requests
import streamlit as st
import streamlit_authenticator as stauth
import torch
from PIL import Image
from streamlit_option_menu import option_menu

import utils_model
import utils_image as util
# display names and usernames must match the hashed passwords created earlier
users = ["admin"]
usernames = ["admin"]
file_path = Path(__file__).parent / "hashed_passwords.pkl"
with file_path.open("rb") as file:
    hashed_passwords = pickle.load(file)
authenticator = stauth.Authenticate(users, usernames, hashed_passwords, "demo_auth", "rkey1", cookie_expiry_days=30)
name, authentication_status, username = authenticator.login("Login", "main")
if authentication_status is False:
    st.error("Username/password is incorrect")
if authentication_status is None:
    st.warning("Please enter your username and password")
if authentication_status:
    selected = option_menu(
        menu_title=None,
        options=["OCT"],
        icons=["book"],
        menu_icon="cast",
        default_index=0,
        orientation="horizontal",
    )
    if selected == "OCT":
        torch.cuda.empty_cache()
        st.write('The method uses state-of-the-art deep learning models for image denoising, followed by ROI-based segmentation, to detect the disease in OCT eye images.')
        upfile = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
        device = st.radio('Select run on CPU or CUDA GPU', ('cpu', 'cuda'), index=1, horizontal=True)
        st.write('You selected:', device)
        option1 = st.radio('Select the Denoising Model:', ('Version 1', 'other'), horizontal=True)
        st.write('You selected:', option1)
        option2 = st.radio('Select the Segmentation Model:', ('Version 1', 'Version 2', 'Version 3'), horizontal=True)
        st.write('You selected:', option2)
        proceed = st.button('Proceed')
        if upfile is not None:
            if proceed:
                torch.cuda.empty_cache()
                img = Image.open(upfile)
                img.save('input.png')
                n_channels = 3
                img_U = util.imread_uint('input.png', n_channels=n_channels)
                st.write("Original Input Image")
                st.image(upfile)
                image_original = img_U

                # load the pretrained denoiser (NTIRE 2023 challenge model)
                model_name = 'team15_SAKDNNet.pth'
                from team15_SAKDNNet import SAKDNNet as net
                model = net(in_nc=n_channels, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
                model.load_state_dict(torch.load(model_name), strict=True)
                model.eval()
                for k, v in model.named_parameters():
                    v.requires_grad = False
                model = model.to(device)

                img_N = util.uint2tensor4(img_U)
                img_N = img_N.to(device)
                # tiled inference keeps GPU memory bounded on large scans
                img_DN = utils_model.inference(model, img_N, refield=64, min_size=512, mode=2)
                img_DN = util.tensor2uint(img_DN)
                st.write("Denoised Image")
                st.image(img_DN)
                # segment the vitreous zone with Meta's Segment Anything
                from segment_anything import sam_model_registry, SamPredictor
                # map the UI choice to a SAM backbone/checkpoint
                # (assumed mapping; the downloaded checkpoints cover all three)
                model_type = {'Version 1': 'vit_b', 'Version 2': 'vit_l', 'Version 3': 'vit_h'}[option2]
                sam = sam_model_registry[model_type](checkpoint=model_type + '.pth')
                sam.to(device=device)
                predictor = SamPredictor(sam)
                predictor.set_image(image_original)
                # two foreground point prompts placed inside the vitreous zone
                input_point = np.array([[200, 200], [900, 80]])
                input_label = np.array([1, 1])
                masks, scores, logits = predictor.predict(point_coords=input_point, point_labels=input_label, multimask_output=False)
                newarr = masks.reshape(masks.shape[1], masks.shape[2])
                from scipy import ndimage
                # morphological opening removes small spurious blobs from the mask
                newarr = ndimage.binary_opening(newarr, structure=np.ones((10, 10)))
                # extend the mask upward: for each column, mark everything above
                # the lowest True pixel so the full vitreous zone is covered
                new_image1 = np.zeros_like(newarr)
                for j in range(newarr.shape[1]):
                    for i in reversed(range(newarr.shape[0])):
                        if newarr[i, j]:
                            new_image1[0:i, j] = True
                # vertically shifted copies of the mask (kept for inspection)
                new_image2 = new_image1.astype(np.uint8) * 255
                M2 = np.float32([[1, 0, 0], [0, 1, -50]])
                new_image2 = cv2.warpAffine(new_image2, M2, (new_image2.shape[1], new_image2.shape[0]))
                new_image3 = new_image1.astype(np.uint8) * 255
                M3 = np.float32([[1, 0, 0], [0, 1, -70]])
                new_image3 = cv2.warpAffine(new_image3, M3, (new_image3.shape[1], new_image3.shape[0]))
                # the denoised image is in RGB order, so use RGB2GRAY
                img_DN = cv2.cvtColor(img_DN, cv2.COLOR_RGB2GRAY)
                # copy denoised pixels inside the extended mask and count the zone size
                new_image = np.zeros_like(img_DN)
                total_segmented_zone_pixel = 0
                for i in range(img_DN.shape[0]):
                    for j in range(img_DN.shape[1]):
                        if new_image1[i, j]:
                            total_segmented_zone_pixel = total_segmented_zone_pixel + 1
                            new_image[i, j] = img_DN[i, j]
                # bright pixels above intensity 60 are the hyper-reflective dots
                ret, thresh1 = cv2.threshold(new_image, 60, 255, cv2.THRESH_BINARY)
                st.write("Disease ROI Segmented Image")
                st.image(thresh1)
                number_of_white_pix = np.sum(thresh1 == 255)
                st.write("No of Disease Specific Pixels:")
                st.write(number_of_white_pix)
                torch.cuda.empty_cache()
    authenticator.logout("Logout", "sidebar")
To run the code, we need to clone the given GitHub repo, download the model files, and install the dependent libraries as follows:
(base) parikshit@parikshit:~/oct$ git clone https://github.com/Parikshit01/oct.git
Cloning into 'oct'...
remote: Enumerating objects: 19, done.
remote: Counting objects: 100% (19/19), done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (19/19), 816.61 KiB | 1.55 MiB/s, done.
Resolving deltas: 100% (2/2), done.
(base) parikshit@parikshit:~/oct$ cd oct/
(base) parikshit@parikshit:~/oct$ gdown 1nkPqglbnNo7gQ6fGmS6vlibQOybgnqhZ
Downloading...
From (original): https://drive.google.com/uc?id=1nkPqglbnNo7gQ6fGmS6vlibQOybgnqhZ
From (redirected): https://drive.google.com/uc?id=1nkPqglbnNo7gQ6fGmS6vlibQOybgnqhZ&confirm=t&uuid=5068b740-a355-4827-9da6-b8007e14de5a
To: /home/parikshit/oct/vit_h.pth
100%|███████████████████████████████████████████████████████████| 2.56G/2.56G [03:50<00:00, 11.1MB/s]
(base) parikshit@parikshit:~/oct$ gdown 1Fsw2G-HFFMgFa-Kzi-MmhrjLSjcvgydm
Downloading...
From (original): https://drive.google.com/uc?id=1Fsw2G-HFFMgFa-Kzi-MmhrjLSjcvgydm
From (redirected): https://drive.google.com/uc?id=1Fsw2G-HFFMgFa-Kzi-MmhrjLSjcvgydm&confirm=t&uuid=9d3a98a9-44d7-4256-8f19-9285d52c3d76
To: /home/parikshit/oct/vit_b.pth
100%|█████████████████████████████████████████████████████████████| 375M/375M [00:34<00:00, 11.0MB/s]
(base) parikshit@parikshit:~/oct$ gdown 1NUjLkEI6d0e0KuFC00PNHX_aFYJWSaJf
Downloading...
From (original): https://drive.google.com/uc?id=1NUjLkEI6d0e0KuFC00PNHX_aFYJWSaJf
From (redirected): https://drive.google.com/uc?id=1NUjLkEI6d0e0KuFC00PNHX_aFYJWSaJf&confirm=t&uuid=8178607f-1175-48c5-b838-d7105ffcced1
To: /home/parikshit/oct/vit_l.pth
100%|███████████████████████████████████████████████████████████| 1.25G/1.25G [01:55<00:00, 10.8MB/s]
(base) parikshit@parikshit:~/oct$ gdown 1wiNby1oIEPFohgeOdrtg4ZHy5f_2mhei
Downloading...
From (original): https://drive.google.com/uc?id=1wiNby1oIEPFohgeOdrtg4ZHy5f_2mhei
From (redirected): https://drive.google.com/uc?id=1wiNby1oIEPFohgeOdrtg4ZHy5f_2mhei&confirm=t&uuid=30139596-0fbc-4d6e-8bda-6ffa144af975
To: /home/parikshit/oct/team15_SAKDNNet.pth
100%|███████████████████████████████████████████████████████████| 72.0M/72.0M [00:06<00:00, 10.6MB/s]
(base) parikshit@parikshit:~/oct$ pip3 install -r requirements.txt
Finally, to run the web application, execute the command below:
(base) parikshit@parikshit:~/oct$ streamlit run basic.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.0.175:8501
This opens the web application. Upload one of the supported input images (samples can be found in the shared GitHub repo), select a model for each sub-algorithm from the radio buttons, and click Proceed. The web application then outputs the step-wise results. The images below give an overall idea of the web application and its outputs.
Right now, the medical team is using the platform to obtain results for different stages of the disease. Next, we will build a classifier module based on the area of the disease-specific zone, so that stage detection is automated. The disease-specific pixel areas of the three shared OCT eye images are 71, 1263, and 972, respectively.
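As an illustration of what that planned classifier module might look like, here is a hypothetical area-based staging function; the cut-off values are placeholders, not clinically validated thresholds:
# hypothetical area-based staging sketch; cut-offs are placeholders only,
# not clinically validated thresholds
def classify_stage(disease_area_px: int) -> str:
    if disease_area_px < 100:
        return "mild"
    if disease_area_px < 1000:
        return "moderate"
    return "severe"

for area in (71, 1263, 972):  # areas measured for the three shared images
    print(area, classify_stage(area))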
We also plan to compute additional statistical parameters for inclusion in the final automated classifier stage.
The whole application runs in real time on an NVIDIA Jetson AGX Orin Developer Kit and, in the final stage, can be integrated directly with the actual OCT machine as a compact add-on.