Find all our security, data, and privacy policies below.
Thank you for visiting CoNote. Please feel free to reach out with any questions you may have!
Data Minimization and Purpose Limitation:
a. Minimize Data Collection: We collect only the data necessary to provide CoNote's services. Depending on your selected settings, you can adjust collection to include more or less data; unless you explicitly enable or authorize additional collection, none will occur.
b. Purpose Specification: Data is collected and used only for the specific purposes you have authorized.
Data Security and Confidentiality:
a. Encryption: CoNote employs strong encryption techniques to protect confidential data during storage, transmission, and processing.
b. Access Controls: Robust access controls, available based on user tier, limit data access to authorized personnel only and prevent unauthorized use or disclosure.
c. Anonymization and De-Identification: We apply best-practice techniques to anonymize or de-identify personal data used in AI training and processing in order to protect individual privacy.
d. External Services: Any external APIs that process customer data have been vetted, are SOC 2 Type II certified, and do not use customer data to train models on the API provider's side.
User Rights and Consent:
a. Consent Mechanisms: Data is never shared between users, or even across a single user's tenant space, to train or improve engine results. Consent mechanisms allow users to opt in to having their own model trained on data they have previously submitted.
b. User Rights: CoNote respects individuals' rights, including the right to access, correct, and delete their personal data. We strive to make these rights clear and easy to exercise within the application, and to make it easy to contact the CoNote team if any concerns arise.
Fairness and Non-discrimination: We strive to ensure that AI systems are developed and deployed in a manner that is fair and unbiased. We are committed to avoiding discriminatory outcomes based on factors such as race, gender, age, or socioeconomic status.
Human Autonomy and Oversight: We recognize the importance of human autonomy and decision-making in the use of AI. We believe that AI systems should be designed to augment human capabilities, rather than replace or undermine them. Human oversight and intervention should be available when significant decisions are made by AI systems.
Safety and Reliability: We prioritize the development and deployment of AI systems that are safe and reliable. This includes rigorous testing, continuous monitoring, and ongoing assessment of potential risks and harms associated with AI technologies.
Social and Environmental Well-being: We consider the broader social and environmental impacts of AI systems. We strive to minimize negative consequences and actively work towards promoting positive societal outcomes, such as addressing social inequalities and minimizing environmental harm.
Collaboration and Multi-stakeholder Engagement: We believe that addressing the ethical challenges of AI requires collaboration and engagement among various stakeholders, including researchers, developers, policymakers, civil society, and affected communities. We actively seek diverse perspectives and input to inform our policies and practices.
Education and Awareness: We are committed to promoting education and awareness about AI technologies and their ethical implications.
Continuous Improvement: We acknowledge that AI technologies and ethical considerations are continually evolving. We commit to learning from experience, conducting regular assessments, and adapting our policies and practices to align with emerging ethical standards and societal expectations.