This project was born from a conversation on Discord during the buildtogether contest, where someone expressed the need for a solution that lets wheelchair users with limited visibility and head mobility safely drive their wheelchair on ramps, which most of the time means moving backward. Since my project was already an intercom display (M5Stack Core3) mounted on the wheelchair, I could help with this need by adding some cameras.
Description
As mentioned, an M5Stack Core3 acts as the main controller. The cameras are meant to be mounted on the sides of the wheelchair, on top of the wheels, looking backward, to offer good visibility of the path. In some situations, a pan/tilt solution can help cover hidden spots.
For this project I used two cameras I had in my boxes: one M5Camera and one Unit Camera DIY Kit. The latter is very cheap but has no PSRAM, so any software running on it is very limited. The DIY Kit can also mount a wide-angle sensor offering an impressive 160-degree field of view, but I'm not sure it helps this particular need; I'll keep the idea of adding the wide-angle sensor in mind for other purposes.

I want to explain the choice of protocol used in this project, I2C. I'm aware it is not the best protocol for sending a lot of data, and it keeps the bus busy. WiFi, ESP-NOW or a UART port could offer a higher bitrate, and connecting two OV2640 modules directly to an ESP32 (or STM32) that also controls the display could have been a more efficient solution, but I had some constraints:
- WiFi was out of scope, as it is already required by the main project (the intercom) and is not available outside the house or far from the router.
- UART is not possible: I need at least two cameras, and I don't have enough free pins to set up a second UART.
- I can't connect the OV2640 modules directly to the dashboard board, as the M5Stack Core3 doesn't expose all 20 required pins. The cameras I'm using offer only a Grove port, which I can easily use for I2C and to power the camera.
Last but not least, I'm curious, and I wanted to experiment with using I2C for images.
Implementation - Camera software
The software is derived directly from the standard CameraWebServer example: I removed the WiFi part and customized some settings to work without PSRAM. My display is 320x240, so to visualize both cameras I need two 160x240 images displayed side by side. The OV2640 module doesn't offer this resolution natively, and I don't want to fetch more pixels (and bytes) than required only to drop them afterward, wasting memory and CPU cycles. By using the window function, the OV2640 can provide the required resolution by selecting, in hardware, only a 160x240 portion of the original image, starting from an offset I can configure (the pan and tilt parameters), so I can select a frame from any point in the image. This is the code:
s->set_res_raw(s, res_id, 0, 0, 0, pan, tilt, 160, 240, 160, 240, true, false);
where res_id can be:
- 1 → 400x300
- 2 → 800x600
- 3 → 1600x1200
Since the window is always 160x240, at the highest resolution I get only 1/10 of the image width (160 of 1600 pixels), simulating a 10x zoom.
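For illustration, pan and tilt are the window offsets inside the raw frame; for example, to center the 160x240 window in the raw image you could compute (the arithmetic below is my own illustration, not taken from the project code):
// Illustrative only: center the fixed 160x240 window in the raw frame.
// raw_w and raw_h follow the res_id table above (e.g. 1600x1200 for res_id 3).
int pan  = (raw_w - 160) / 2;   // (1600 - 160) / 2 = 720 at res_id 3
int tilt = (raw_h - 240) / 2;   // (1200 - 240) / 2 = 480 at res_id 3
s->set_res_raw(s, res_id, 0, 0, 0, pan, tilt, 160, 240, 160, 240, true, false);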
During boot, in the setup function, each camera sets up the I2C communication as a slave, with address 0x61 or 0x62 respectively, and configures the two callback functions for the onReceive and onRequest events. onReceive is delegated to receive the commands from the master: these commands ask for the frame metadata (especially the number of bytes), set the resolution, or change the offset of the window to simulate pan and tilt.
onRequest, which by definition is supposed to be executed when the master requests some data, just sets a flag to signal that the data has been sent.
There is often a misunderstanding about how this function works. In all the examples, when this function is called, there are some Wire.write calls to send the data to the master. The problem here is the I2C implementation: when this function is called, the I2C device immediately sends whatever is already in the buffer and will not wait for any additional data. This means the data we put inside the function will not be sent until the next request, which is not what we want. In my implementation, when the callback is triggered, I set a flag to tell the main loop to prepare new data for the buffer.

The images I get from the module are around 6 KB, and I send them in batches of 100 bytes to avoid issues on the I2C bus, which takes around 60 iterations, depending on the size sent with the metadata. Luckily the bus runs at 1 MHz, so the transmission is quite fast.
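A minimal sketch of this pattern, assuming the arduino-esp32 Wire slave API (the buffer and flag names are illustrative, not the project's actual code):
// Minimal sketch of the slave-side send pattern described above;
// buffer and flag names are illustrative, not the project's actual code.
#include <Wire.h>

#define address 0x61   // 0x62 for the second camera
#define CHUNK 100

uint8_t frameBuf[8192];            // JPEG frame, around 6 KB in practice
size_t frameLen = 0;               // set after each capture (capture code omitted)
size_t sent = 0;
volatile bool stageNext = true;    // stage the first chunk right away

void onRequest() {
  // The bytes staged earlier with slaveWrite() are going out right now;
  // just flag the main loop to prepare the next batch.
  stageNext = true;
}

void setup() {
  Wire.onRequest(onRequest);
  Wire.begin(address, 4, 13, 1000000);   // slave mode, SDA=4, SCL=13, 1 MHz
}

void loop() {
  if (stageNext && sent < frameLen) {
    stageNext = false;
    size_t n = min((size_t)CHUNK, frameLen - sent);
    Wire.slaveWrite(frameBuf + sent, n);  // pre-fill the slave TX buffer
    sent += n;
  }
}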
Configuration
You need to configure the camera model and the I2C pins connected to the Grove port. In my example, I use CAMERA_MODEL_M5STACK_UNITCAM for the Unit Camera DIY Kit and CAMERA_MODEL_M5STACK_WIDE for the M5Camera, uncommenting the proper #define line.
// ===================
// Select camera model
// ===================
//#define CAMERA_MODEL_WROVER_KIT // Has PSRAM
//#define CAMERA_MODEL_ESP_EYE // Has PSRAM
//#define CAMERA_MODEL_ESP32S3_EYE // Has PSRAM
//#define CAMERA_MODEL_M5STACK_PSRAM // Has PSRAM
//#define CAMERA_MODEL_M5STACK_V2_PSRAM // M5Camera version B Has PSRAM
#define CAMERA_MODEL_M5STACK_WIDE // M5Camera, has PSRAM
//#define CAMERA_MODEL_M5STACK_ESP32CAM // No PSRAM
//#define CAMERA_MODEL_M5STACK_UNITCAM // No PSRAM
//#define CAMERA_MODEL_AI_THINKER // Has PSRAM
//#define CAMERA_MODEL_TTGO_T_JOURNAL // No PSRAM
//#define CAMERA_MODEL_XIAO_ESP32S3 // Has PSRAM
// ** Espressif Internal Boards **
//#define CAMERA_MODEL_ESP32_CAM_BOARD
//#define CAMERA_MODEL_ESP32S2_CAM_BOARD
//#define CAMERA_MODEL_ESP32S3_CAM_LCD
//#define CAMERA_MODEL_DFRobot_FireBeetle2_ESP32S3 // Has PSRAM
//#define CAMERA_MODEL_DFRobot_Romeo_ESP32S3 // Has PSRAM
For the I2C pins, I set them, along with the slave address and the clock (1 MHz), during the I2C initialization.
Wire.begin(address, 4, 13, 1000000);
//Wire.begin(address, 17, 16, 1000000);
When deploying the code, you must use two different addresses; the controller expects 0x61 and 0x62.
#define address 0x62
Implementation - Display and controller
This code is quite simple and requires just 200 lines. On every loop:
- Check the I2C controller for a command or button push.
- If any, send the command to both cameras and wait 200 ms.
- Request from both cameras the image size of the next frame.
- For each camera, loop over the image size to fill the respective buffer.
- Draw the JPEGs in their respective positions, with a red line in the middle to mark the separation (see the code below).
- Wait for a delay, dependent on the resolution, before getting a new frame.
M5.Lcd.drawJpg(buffer1, len_1, 0, 0);
M5.Lcd.drawJpg(buffer2, len_2, 161, 0);
M5.Lcd.drawFastVLine(160, 0, 240, 0xF800);
delay(delay_frame[res_id]);
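To make the fetch steps concrete, here is a minimal sketch of the chunked read on the controller side. The command byte, the 2-byte length metadata, and the readFrame helper are my assumptions for illustration, not the project's actual protocol:
// Sketch of the chunked frame fetch on the controller side; the command
// byte, length format and helper name are assumptions, not the project code.
#include <Wire.h>

#define CHUNK 100

size_t readFrame(uint8_t addr, uint8_t *buf, size_t bufSize) {
  Wire.beginTransmission(addr);
  Wire.write('S');                         // hypothetical "send size" command
  Wire.endTransmission();
  Wire.requestFrom(addr, (uint8_t)2);      // read the frame metadata
  size_t lo = Wire.read();
  size_t hi = Wire.read();
  size_t frameLen = lo | (hi << 8);
  if (frameLen > bufSize) frameLen = bufSize;
  size_t got = 0;
  while (got < frameLen) {                 // read the frame in 100-byte batches
    size_t n = min((size_t)CHUNK, frameLen - got);
    Wire.requestFrom(addr, (uint8_t)n);
    while (Wire.available() && got < frameLen) {
      buf[got++] = Wire.read();
    }
  }
  return got;                              // bytes actually received
}
Called once per camera (addresses 0x61 and 0x62), this would fill buffer1 and buffer2 before the drawJpg calls above.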
On the I2C initialization, I set the speed to 1 MHz, which is actually the maximum speed you can configure on the ESP32; any higher value will be lowered to this maximum by the Espressif libraries.
Wire.setClock(1000000);
Wire.begin();
Final notes
I didn't really test this solution on a wheelchair to find the best sensor for the purpose, whether a wide-angle or a narrow one. The DIY Camera kit offers both sensors in the box, so it's easy to test. I didn't mention any power source for the M5Stack Core3, as most electric wheelchairs offer USB ports. I'm not sure about the size of the display, 2.4". Eventually, the controller could be replaced with a custom board with a bigger screen, like a 4" 480x320. That display would allow larger frames, or leave some space for touch controls, considering also the other functions offered by the intercom.