We plan to develop a self-learning software prototype for human interaction that combines AI/ML models, perceptual analysis, and natural language processing with protocols running on IoT-based devices (sensors, cameras, microphones, etc.). The main objective of the project is to build intelligent software that learns from past interactions, assesses its environment on its own, and automatically performs an appropriate action, reaction, or recommendation without explicit instructions from users.
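To make the sense-learn-act loop concrete, the minimal sketch below shows one way such a system could be structured. It is an illustration under assumptions, not the project's design: the source does not specify a learning method, so an epsilon-greedy, reward-driven learner is assumed here, and the context labels, action set, and feedback model (`sense_environment`, `observe_feedback`, `ACTIONS`) are hypothetical stand-ins for real IoT inputs and implicit user feedback.

```python
import random
from collections import defaultdict

# Hypothetical action set; a real system would derive this from its domain.
ACTIONS = ["greet", "adjust_lighting", "play_music", "do_nothing"]

class SelfLearningAgent:
    """Epsilon-greedy learner: picks an action for a sensed context and
    refines its estimates from observed feedback, with no explicit rules."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        # Average reward observed so far for each (context, action) pair.
        self.value = defaultdict(float)
        self.count = defaultdict(int)

    def act(self, context):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value[(context, a)])

    def learn(self, context, action, reward):
        # Incremental mean update: learning comes only from feedback.
        key = (context, action)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

def sense_environment():
    """Placeholder for IoT inputs (cameras, microphones, sensors);
    simulated here as a coarse context label."""
    return random.choice(["person_entered", "room_dark", "quiet_evening"])

def observe_feedback(context, action):
    """Placeholder for implicit user feedback; a toy reward model."""
    preferred = {"person_entered": "greet",
                 "room_dark": "adjust_lighting",
                 "quiet_evening": "play_music"}
    return 1.0 if preferred[context] == action else 0.0

if __name__ == "__main__":
    agent = SelfLearningAgent()
    for _ in range(1000):
        ctx = sense_environment()
        action = agent.act(ctx)
        agent.learn(ctx, action, observe_feedback(ctx, action))
    for ctx in ["person_entered", "room_dark", "quiet_evening"]:
        print(ctx, "->", agent.act(ctx))
```

After enough interactions the agent converges on the rewarded action for each context, which mirrors the stated objective: behavior emerges from past interactions rather than from explicit user instructions.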