THE EQUATION • Issue no. III

 

the equation • tech ethics quarterly • issue no. III

Investing in Ethical AI

the untapped opportunity of responsible tech

 
 
  • Letter from the Editor • Olivia Gambelin

  • From Hunch to Reality: The Evolution of Responsible Tech • Sarah Drinkwater

  • Ethical AI Governance: A Call for Action • EAIGG

  • Through the Eyes of a Responsible Tech Founder: The Gap in Investor Knowledge • Anna Felländer

  • Value Alignment and its Impact on Investing • Helena Ward

 

letter from the editor • olivia gambelin

 

Dear Reader,

This issue holds a special place in my heart, which I am sure is a surprising thing to read.

 

Why would an ethicist be enthralled by investments and artificial intelligence? Ethics seems, at best, only loosely connected to the two, and at worst in direct conflict of interest with them. As you are about to settle in to read, though, this couldn’t be further from the truth. Ethics, AI, and investing go nicely hand-in-hand. However, this still isn’t the reason for my excitement.

I am thrilled to publish this issue because of what it signifies: the moment in time in which the conversation around AI innovation has begun to change. 

It has long been understood that the relationship between founders and investors has significant influence over the direction of the company and its technology. Founders build the technology that investors will fund, and investors in turn affect the pace, growth and direction of the founder’s technology. This means that if we are to have any significant impact on how startups build and develop AI, that change must become embedded into this revered founder/investor relationship. 

I have spent a significant amount of time in my career as an ethicist trying to change the conversation around AI development to one that includes ethics at its core. Although I had made progress with founders, I had, until now, been met by a complete standstill on the side of the investors. However, as you will find echoing throughout this issue, that is no longer the case, as the vanguards in investment are beginning to realize the true potential of ethical AI. 

We still have a long way to go in instilling ethics into the core of the founder/investor relationship. But for the first time since the start of the Responsible Tech industry, we have both sides listening and engaged. 

I hope you find as much hope in reading this issue as I found in editing it.

 

Happy reading,

 
 

from hunch to reality: the evolution of responsible tech • sarah drinkwater

 

“I had absolute faith in both the responsible tech scene becoming a thing but also the idea of responsibility becoming a powerful moat in driving customer trust, employee love, and more.”

 

Sarah Drinkwater is undoubtedly a leading figure in the world of responsible tech. She comes from a rich background - from heading Google for Entrepreneurs in London, to leading the Tech and Society Solutions Lab at Omidyar Network, to now being an angel investor with a specific interest in web3, she has seen it all. One clear theme throughout her experience, though, has been responsible tech, and we were grateful to have the opportunity to sit down with Sarah and discuss the evolution of the industry as she has seen it take place over the years.

 
 

 
 

Ethical Intelligence: How did you come to this focus on responsible tech? Is it something you set out to do from the very beginning, or one that has grown over time?

Sarah Drinkwater: I began working in the technology industry in the late 00s when there wasn’t a developed notion of the power the field held. I’d grown up with a self-taught developer dad and been a very early internet user, so in my household, there was a lot of idealism around the possibilities tech brought which echoed so much of what I felt as a startup and then Big Tech employee.

But after time spent in product teams, it became clear to me that, despite our good intentions (I rarely saw otherwise), mistakes were still made. Intentions weren’t enough. I then moved to run a space for entrepreneurs in London at the same time the techlash began bubbling up; there was an oppositional dynamic brewing between the tech community and the media, who had previously adored the industry and now began asking questions. I left Google to join Omidyar in early 2018 to establish their responsible tech team.

EI: How have you seen the responsible tech space grow and mature over the past five years?


SD: When I made that move, it was seen as a pretty wacky decision, but I had absolute faith in both the responsible tech scene becoming a thing but also the idea of responsibility becoming a powerful moat in driving customer trust, employee love, and more. It’s been very satisfying to see this belief come true; we now have multiple household-name companies with ethics boards, job boards like All Tech Is Human, specific conferences such as Women in AI Ethics, and a plethora of interesting jobs that help to operationalize all the work that lives below principles.

And outside of those who work in the field, we’ve seen their tech worker colleagues hold them accountable through the tech labour movement, we’ve seen LPs and VCs increasingly ask deeper questions about practices (though not enough yet), and we’ve seen companies that refuse to engage in ethics washing and make hard choices win.

 

“So, whether it’s advocating internally or voting with your feet and leaving, tech workers are essential in building the right systems.”

 

EI: You’ve worked with entrepreneurs, investors, and even tech workers on responsible tech, so it’s safe to say you have an in-depth view of the different perspectives in the ecosystem. How are tech workers driving the motivation and need for ethics in technology?

SD: When I first began at Omidyar, we spent deep time thinking through the tech ecosystem and where the intervention points were for us to fund. As long-term impact investors, we obviously started with VCs and LPs but, as a former tech worker, I kept thinking about how hard certain roles are to hire for. Look at the narratives many of the largest firms employed in the 00s, focused on their incredible utility, and how proud workers felt to work for firm X. That’s changed in the case of certain big companies, at the same time as those firms have grown, requiring more talent, and at the same time as there are many more places those workers can go. Shame is an incredible incentive; at its very core, people want to do the right thing and be proud of where they work. So, whether it’s advocating internally or voting with your feet and leaving, tech workers are essential in building the right systems.

 

EI: What role do entrepreneurs play in developing responsible tech? 

SD: Systems might be built and held accountable collectively, but so much of the culture and direction of a company comes from the top. Salesforce made the first big move among tech companies by creating a Chief Ethical and Humane Use Officer, because Marc Benioff cared. But, beyond that, there are multiple choices made early in company creation, from business model to equity pool to choice of investors and partners, that can make a real and true difference.

EI: What can investors do to support entrepreneurs and tech workers in the growth of the responsible tech movement? 

SD: My core concern is that the venture model only works when companies are exponentially successful, and it’s hard for companies to scale so large without systems breaking or shortcuts taken. The right investor can be an incredible steward and help the founders and teams avoid this, but it’s on them also to recognise that sustainable growth can be as exciting as exponential growth. 

 
 

EI: Now as an angel investor yourself, how would you describe your approach to investing in technology?

SD: I’m early in my journey but I was excited by the chance to work first-hand with founders, as I did at Google and then at Omidyar, in helping operationalize the work of building responsible companies. I’ve tended to focus on companies with community-directed products, which is what I know best, with diverse founding teams and rock solid values alignment. 

EI: What role do values play in this approach?

SD: Values are critical, and I suspect not enough founders or investors spend enough time getting these down on paper. For me, there are simple things such as: I don’t invest in ad-based models, or businesses that are free to users but rest on selling data. I like products that give both utility and delight. And I’m particularly interested in companies with strong governance, especially if that’s shared with their community - such as Anyone, with the external ethics board they convene, or Library of Things, with their innovative governance structure.

EI: How does a startup’s mission and values impact its success?

SD: In Mind Foundry’s case, where I’m a non-executive director, they’re working on an important mission: real-world AI for high-stakes environments such as the public sector.

“It would be impossible to do that work well - to build the necessary trust with partners, or to hire the very specific, in-demand talent needed - without a very strong mission and values.”

 

EI: Finally, looking to the future of ethics, responsible tech, and investment, what opportunities unique to the responsible tech space do you see? 

SD: I’m still unsatisfied with the phrase responsible tech; it makes me want to immediately ask responsible to whom and for what. I see a strong need for us to pull together so much of the work that’s been done underneath a broader narrative that’s exciting and future-facing. 

EI: How do you imagine the relationship between web3 and responsible tech developing? 

SD: I was sceptical about crypto for so long, especially during the pre-Ethereum era when it felt so money focused. Now, I’m active in several really interesting communities with strong shared ethos around collective ownership and regeneration but, much like web2, there’s a gap between the high ideals and much of the reality. There’s an exciting amount of work to be done. 


EI.

 
 

GLOBAL BILL OF RIGHTS FOR AI

At The Global AI Bill of Rights, we’re developing a comprehensive set of standards, practices, and rights for all users in the age of artificial intelligence. From ethical data practices to ethical algorithms, we’re focused on creating a world where AI empowers everyone to live their best life.

ethical AI governance: a call for action • eaigg

We believe the process of developing AI Ethics best practices and knowledge bases should be a collaborative initiative, bridging startups, investors and corporate enterprises in a joint effort to make these resources openly accessible for the betterment of the AI industry as a whole, and society at large.

 

Olivia Gambelin and Alayna Kennedy (Ethical Intelligence), Anik Bose and Venkat Raghavan (BGV) and Emmanuel Benhamou (EAIGG) share a cutting-edge framework to guide startups on their journey towards building billion-dollar responsible AI companies. The AI Ethics Maturity Continuum was developed based on widely accepted industry best practices, in-depth enterprise and academic research, and AI policy guidelines. The development was supported by input from the recently formed Ethical AI Governance Group (EAIGG), a grassroots community of startups, big tech exemplars and academics coming together to democratize the development of ethical AI governance. Consistent with the group’s charter, we are sharing this continuous assessment framework as an open source tool for AI startups and entrepreneurs, as well as for investors and tech incubators interested in performing due diligence and health diagnostics on their portfolio companies.

 
 

 
 

The AI Promise

AI is consistently cited as one of the top macro-trends poised to dominate the next decade of innovation. Chatbots, conversational systems, banking apps, smartphones, self-driving cars and even home devices like Alexa have rapidly become part of our daily lives. As intelligent automation technology becomes increasingly pervasive across all industries, McKinsey research predicts it could unlock $3.5T to $5.8T in annual value, roughly 40% of the potential value of all analytics techniques.

This promise has unleashed an unprecedented level of VC investment in AI. An OECD study analyzed VC rounds in 8,300 AI startups worldwide, covering transactions between 2012 and 2020 documented by the capital market analysis firm Preqin. According to the findings, the global annual value of VC investments in AI startups grew from $3B in 2012 to nearly $75B in 2020, with startups in the US and China capturing over 80% of the investments. The EU followed with 4%, trailed by the U.K. and Israel at 3%. In October 2021 it was reported that 78 AI startups had reached unicorn status (exceeding valuations of $1B), including Argo AI, Anduril, OpenAI, 6sense, Preferred Networks and many others.

 

AI at a Crossroads

In spite of the promise of AI, many view the technology as a double-edged sword. Consumer attitudes towards the development of high-level machine intelligence are marked by fear, doubt and skepticism. The public is evenly split on whether this is a positive or negative development for humanity, or whether these technologies should even be developed in the first place. Confidence falls much further when it comes to trusting major tech corporations like Facebook, Amazon, Microsoft, Google, or Apple with the development and stewardship of these innovations. While Skynet remains the stuff of SciFi novelty, the contemporary mainstreaming of AI across a broad variety of industries, from healthcare and financial services to autonomous vehicles and manufacturing, is now driving a unique set of legitimate concerns around Ethical AI Governance.

The convergence of three trends is hindering broader AI market adoption:

  1. Deep and anchored distrust in black box AI outputs; 

  2. Growing concerns over the lack of strong AI oversight due to minimal standards and regulation;

  3. The perception that AI development does not represent a coherent industry community, but rather a Big Tech oligarchy.

As a consequence, customers and end-users lack full confidence that AI models can be deployed without harm or unintended consequences.  This skepticism, in turn, implicates enterprises and AI operators that develop these solutions.

 

Our Perspective

As an industry, we are only just beginning to understand and develop best practices for AI Ethics and governance. This knowledge is, and will continue to be, essential in accelerating the adoption of responsible and trustworthy AI. At EAIGG we believe this process of developing best practices and knowledge bases should be a collaborative initiative, bridging startups, investors and corporate enterprises in a joint effort to make these resources openly accessible for the betterment of the AI industry as a whole, and society at large. Against this backdrop, it is incumbent upon AI practitioners and executives to lay out a shared set of principles, frameworks and standards to apply to AI technology development, deployment and operations to mitigate and address ethical AI governance concerns. The Ethics Maturity Continuum is the first such resource: an openly accessible, foundational framework designed to meet this growing need.

Because venture capitalists occupy a unique role in shaping the arc of innovation, we unveil this framework primarily as a due diligence tool that investors can use to evaluate a company’s health with regard to the ethical development and deployment of AI technology solutions. We believe Ethical AI will soon receive treatment similar to ESG issues in the public eye. Venture funds will have to answer to their LPs, and to various stakeholders, when justifying their investments, and this tool provides a practical diagnostic health check to evaluate those decisions. Business executives may also be interested in evaluating their own standing on these issues, assessing their progress over time, and diagnosing areas of improvement or of excellence. As AI development touches a broader array of stakeholders, we anticipate an increasing need for ongoing due diligence in these critical areas.

 
 

Maturity Continuum Research Background

"AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies." 

- Alan Turing Institute

In recent years, the impressive advances in the capabilities and applications of AI systems have brought the debate over AI’s impact on society into sharper focus. In order to maximize the benefits and mitigate the risks of these new solutions, the field of AI Ethics has quickly emerged, propagating new tools, frameworks, and Ethics as a Service products designed specifically to ensure that algorithms are built with ethical principles at the core.

To aid in this adoption, the majority of global technology companies have established ethical frameworks and policies, while multiple governments have released similar high-level frameworks. As enterprise and regulation begin to mandate  ethical technology practices, an imperative has surfaced for startups to understand how to build these practices into the company and technology from the very outset. 

The AI Ethics Maturity Continuum consolidates and synthesizes these global frameworks and policies to determine the key factors for ethical technology development for early and late stage startups. These components have then been translated into a comprehensive model that provides a method for assessing a startup’s ethical development maturity, along with clear action points for improvement.

 

The AI Ethics Maturity Framework

The framework provides a granular approach to ensure AI-first startups can implement best practices around the dimensions of accountability, intentional design, fairness, social impact, and trust and transparency in their own products and processes (see framework below). Successfully operationalizing these techniques yields increased brand loyalty and customer engagement, accelerates market adoption, enhances regulatory readiness, attracts stronger talent and eventually eliminates obstacles to going public or being acquired. The benefits are manifest, and will be discussed in greater detail in a later section.


Given the early nature of the AI industry and the lack of clear and consistent regulation, we believe this framework can play a crucial role in helping AI-first startups navigate their responsible value creation journey. While the five core principles remain the same across both early stage and late stage startups, the two stages differ in what the relative maturity steps look like. Each stage is mapped in the framework charts below, with ethical AI maturity steps described on the X-axis.
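To make the continuum concrete, here is a minimal sketch in Python of how a maturity self-assessment along the five dimensions might be scored. The dimension names come from the framework above; the maturity-level labels, the scoring scale, and the function names are illustrative assumptions, not the EAIGG survey itself.

```python
# Minimal sketch of a maturity self-assessment along the framework's five
# dimensions. Level labels and scoring scale are hypothetical, for illustration.
from dataclasses import dataclass

LEVELS = ["ad hoc", "aware", "defined", "operationalized", "embedded"]  # assumed labels

DIMENSIONS = [
    "accountability",
    "intentional design",
    "fairness",
    "social impact",
    "trust and transparency",
]

@dataclass
class Assessment:
    scores: dict[str, str]  # dimension -> self-reported maturity level

    def numeric(self) -> dict[str, int]:
        # Every framework dimension must be scored with a known level label.
        assert set(self.scores) == set(DIMENSIONS)
        return {dim: LEVELS.index(level) for dim, level in self.scores.items()}

    def overall(self) -> float:
        vals = self.numeric().values()
        return sum(vals) / len(vals)

    def action_points(self) -> list[str]:
        # Dimensions scoring below the company's own average are the
        # clearest candidates for improvement.
        nums = self.numeric()
        avg = self.overall()
        return sorted(d for d, v in nums.items() if v < avg)

# Example: an early-stage startup strong on design, weak on accountability.
a = Assessment(scores={
    "accountability": "ad hoc",
    "intentional design": "defined",
    "fairness": "aware",
    "social impact": "aware",
    "trust and transparency": "defined",
})
print(f"overall maturity: {a.overall():.1f} / {len(LEVELS) - 1}")
print("improve first:", a.action_points())
```

However simple, a rubric like this captures the continuum’s core mechanic: the same five dimensions are scored at every stage, and the lowest-scoring dimensions become the action points for improvement.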

 
 

 
 

Benefits for Startups

Responsible AI, when used to its full potential, is a source of competitive advantage. Building ethics into technology and product management has a number of business benefits, including:

  • Enhanced Product Quality - Incorporating responsible, independently verifiable AI practices throughout the product development lifecycle results in better alignment with customer wants and needs.

  • Employee Retention - Ethical practices significantly benefit talent acquisition, retention, and engagement, especially in the current competitive labor market.

  • Sustainability - Focus on value-sensitive design leads to better data management, which reduces digital waste and increases sustainable practices.

  • Regulation Readiness - Companies must stay ahead of AI regulation to remain competitive and active on a global scale, particularly where innovation eclipses governmental action.

  • Increased Growth - Ethics improves top- and bottom-line growth by increasing customer engagement, broadening revenue streams, and offering procurement advantages in competitive bidding processes.

  • Brand Loyalty - Heightened focus on the sector has increased the importance of customer loyalty to brands consumers believe in and trust.

Benefits for the Wider Community

In addition to startups, this framework is also a valuable tool for larger enterprises and customers. For large corporations, it enables assessment of ethical practices within a prospective company during an acquisition process. For customers, it creates a transparent process to analyze the values of the company providing them with AI solutions, which in turn creates customer trust in the product. Overall, the application of this framework stimulates confidence in the adoption of responsible AI technologies for the community at large, fulfilling the promise of these breakthrough technologies while mitigating the risks.

 

The AI Ethics Maturity Continuum has been designed as a due diligence tool for investors and business executives to quickly assess a company’s level of ethics maturity and identify areas for improvement. It prioritizes agility and action, enabling users to build concrete strategies for sustainable AI systems and track development over time. Most importantly, it empowers startups to embed ethics from the very beginning of the product life cycle, resulting in stronger offerings, happier customers and more favorable exits. If you are the founder of an AI-first startup and would like to build a billion-dollar responsible AI company, take this five-minute survey to assess how you compare to your peers and where you need to improve. Alternatively, if you are an investor interested in screening prospective companies or evaluating the health of your portfolio, we welcome you to use the tool or to connect with us to learn more.


EI.

 
 

ETHICS MATURITY CONTINUUM

Designed for startups and investors to assess a company's maturity in operationalizing AI Ethics

In this day and age, ethics has emerged as a competitive advantage for AI-driven startups. By placing ethics at the core of product design, startups can maximize their impact, mitigate risks, and continue to grow in a market environment where values are increasingly important to success. To aid in this process, the Ethics Maturity Continuum will assess your company’s level of ethics maturity and identify areas for improvement. How does your company stack up?

 

through the eyes of a responsible tech founder: the gap in investor knowledge  • anna felländer

It’s important that investors understand the responsible tech space so as not to miss the opportunity of profitably scaling trustworthy and ethical businesses.

 

Most VCs are trapped in the tech perspective of fast and profitable scaling. However, it’s important that investors understand the responsible tech space so as not to miss the opportunity of profitably scaling trustworthy and ethical businesses, too. The lag in suitable responsible screening tools exposes VCs to costly ethical and societal risks. Investors and business leaders alike have a joint responsibility to demystify the black box of AI, in order to accelerate ethical AI for innovations humans can trust.

 
 

 
 

When we first started, my CEO Chris and I figured this seed funding journey was going to be a rather smooth one. We were both confident in our mission. We were solving a challenge facing almost every large organization: wanting to do the right thing in the data-driven AI era, and reporting to stakeholders and customers about it. We figured that ESG requirements, the EU Taxonomy, and upcoming EU regulations, on top of exploding social concerns, were leading organizations to aim for transparency and reporting metrics on their ethical AI. We were paving the way for organizations pioneering transparency, and for tools evaluating the ethics of AI.

That might be the case, but our journey was difficult. We started to blame VCs for not “walking the talk” on responsible investments. Investors still seemed to prioritize trade-offs between profitability and sustainability, despite increasing concerns over trust and the push towards responsible AI. They appeared to lack an understanding of the ethical and societal pitfalls and risk exposures in AI-driven tech scale-ups.

Without an ethical filter, they are investing in companies that lack long-term, sustainable business models.

Because of investors’ lack of understanding of the ethical AI space, we failed to explain to the recipients of our pitch deck and presentations what a sustainable AI business model means, and how we create value by meeting the next generation’s commitment to responsible and trustworthy technology. We failed in our initial attempts to explain that we are at a crossroads in the AI industry.

Eventually, though, we did succeed in finding our lead investor, BGV, which truly aligns with our core values and business strategies and, most importantly, understood the opportunity of responsible tech.

We learned a lot over our seed funding journey, and I now want to share those insights into investing in Ethical AI with you.

 

AI for sustainability is not the same as Sustainable AI

The lack of understanding in the investment space meant that we were unable to explain to VCs the difference between AI for Sustainability and Sustainable AI. Typically, VCs engaged in responsible investments screen tech start-ups for AI solutions targeted at achieving one of the UN SDGs. But there are responsible investments which don’t clearly map onto these. At anch.AI we are meeting these SDGs indirectly, but it is not our business model.

What is the difference between the two? Well, you can have an AI solution for sustainability without it being sustainable AI. For example, an AI solution for crisis management in distressed conflict areas might have an algorithm specifically helping mothers and children get information and access to shelter and aid. However, the algorithm is built on data that is risky to obtain and secure, and takes massive amounts of energy to maintain. This would be an AI solution for sustainability, but without Sustainable AI.

 

Without our ethical filter, this AI for Sustainability might be exposed to ethical risks and legal breaches, such as misuse of the data and the AI solution. There might also be coder biases, rooted in cultural assumptions, that harm or mislead users. Sustainable AI means that all AI, across industries and civil society, and regardless of an organization’s size or AI maturity, conforms to the organization’s values and complies with regulation. And that is the business we are in. Sustainable business models have an ethical filter to avoid costly re-investments as well as reputational and harmful risks.

 
 

Most VCs still take a tech-silo perspective when it comes to ethical AI governance

We are introducing a cross-functional approach to the ethical AI governance landscape by being an orchestrator of ethical AI. This means activating the tech, legal and business teams to align on critical ethical and legal considerations and trade-offs. This is our unique differentiation. 

Why is this important?

For example, fairness and explainability have different meanings depending on whether you are working in the tech, legal or business team. Lawyers are not trained to understand code; coders are not educated in the organization’s ethical values and principles, nor can they encode them coherently.

Operating in a technology silo means opening the door to costly societal risks and legal breaches.

We failed to explain that existing platforms claiming ethical AI governance assess and govern AI only from the tech-silo perspective, which gives a false sense of security. Because investors did not understand this gap, they were unaware of the need for such a tool.

Lagging VC screening tools

You can have fast scaling and huge profits in the short term. But black box AI solutions, or AI solutions whose mitigation tools come only from the tech-silo perspective, will fail to dismantle ethical risk exposure. With this perspective, VCs are exposed to financial failure over the long term. Our platform would be a strong tool for VCs to screen their investments for exposure to ethical risks and to ensure their own business models are sustainable. A simple sketch of what such a screen might look like follows.
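To make “screening for exposure to ethical risks” concrete, here is a purely illustrative sketch in Python of a portfolio screen. The risk dimensions, scoring scale, threshold, and company names are all hypothetical assumptions; this is not anch.AI’s platform or methodology.

```python
# Illustrative ethical-risk screen over a hypothetical VC portfolio.
# Dimensions, scale (0 = no risk, 5 = severe), and threshold are assumptions.
RISK_DIMENSIONS = [
    "data provenance",
    "bias and fairness",
    "explainability",
    "regulatory exposure",
    "societal impact",
]

def screen(portfolio: dict[str, dict[str, int]], threshold: int = 3) -> dict[str, list[str]]:
    """Flag, per company, every dimension whose risk score meets the threshold."""
    flags: dict[str, list[str]] = {}
    for company, scores in portfolio.items():
        high = [d for d in RISK_DIMENSIONS if scores.get(d, 0) >= threshold]
        if high:
            flags[company] = high
    return flags

# Two hypothetical portfolio companies.
portfolio = {
    "HealthBotCo": {"data provenance": 4, "bias and fairness": 2,
                    "explainability": 5, "regulatory exposure": 4,
                    "societal impact": 3},
    "GreenGridAI": {"data provenance": 1, "bias and fairness": 1,
                    "explainability": 2, "regulatory exposure": 2,
                    "societal impact": 0},
}
print(screen(portfolio))
# -> {'HealthBotCo': ['data provenance', 'explainability',
#                     'regulatory exposure', 'societal impact']}
```

The point is not the arithmetic but the discipline: without some explicit filter like this, ethical risk exposure never enters the investment decision at all.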

In the middle of our seed investment journey, I got a call from Anik Bose, general partner at BGV. He did not know we were in the middle of raising seed funding; he contacted me to invite me to join the non-profit Ethical AI Governance Group, about to be founded in Silicon Valley. We ended up discussing a set-up for BGV to be lead investor.

With the BGV team, we didn’t fail to explain anything; they already knew from the start, educated by Olivia Gambelin, that our product was strong and that the market for ethical AI governance was about to grow fast. BGV truly understood the responsible tech space: they could spot a trustworthy and ethical product, and they understood the value of ours. The BGV team is truly committed to ethical AI governance, combining that commitment with knowledge and experience in B2B SaaS.

 

Educating VCs hoping to invest in the responsible tech space will enable investors not only to analyze the ethical risks of the companies they are investing in, but also to recognize a worthwhile ethical investment, paving the way for investments in innovations we can trust.

anch.AI is now able to fully commit to our clear purpose: being an independent validator for ethical AI, helping organizations accelerate ethical and responsible AI across their operations. anch.AI’s goal is to ensure that the future world of AI is also a world with human values at its core.

-

Further reading

anch.AI ethical AI risk assessment methodology 

Would you invest in a medical chatbot that advised a patient to kill themselves?




EI.

 
 

Let’s release the real power of AI.

To comply with upcoming EU regulation, avoid business risks, and strengthen your competitive edge all in one: this is the way forward. Our Ethical AI Governance Platform is here.

Visit anch.AI for more information.

value alignment and its impact on investing • helena ward

With 90% of start-ups failing within their first 3 years, finding the right investors can either make or break your company.

 

Yes, securing the first investment is essential, but given that your initial investors are in it for the long run, it’s much more important to find the right investor than to take any investment that comes your way.

Here’s how ethics can help…

 

Just how important is the first investment?

One of the biggest mistakes a founder can make is thinking that securing the initial investment is an immediate win. It’s not just about the money; investors aren’t merely cash flow, they’re partners who are going to stick around. And just as with any partnership in life, you want to ensure this one is built on foundations that will last. However, either side may come to want to take the company in a different direction, leaving founders in the difficult position of compromising on their vision and investors worried about the strength of the founding team.

The initial investment will have a huge impact on the future of the company, so it’s essential to take the time to find the right investor.

But how? And what can we do to avoid investor incompatibility?

 
 

Gaining a deeper understanding of your company’s ethics and values can help you to not only find investors, but find investors compatible with your own views.

 
 

Ethics: The Unexpected Solution

A significant part of investor incompatibility is simply that investors and founders want different things. One might for instance favour sustainability over profit, or scalability over user experience; even simple decisions over who to hire and fire can contribute to building tension. These differences in values and priorities create frustrations between investors and founders. So finding investors who prioritise the same values, expectations, and visions as yours will help to avoid investor incompatibility.

It’s that simple: understanding your own values, and making sure those key values are shared by your investors, means you’re less likely to disagree on key decisions.

 
 

Why ethics benefits founders

Understanding your ethics and values as a founder not only reduces investor incompatibilities, but it can create investment opportunities from the outset. 

How? Understanding your ethics and values shows in conversations with investors. It can give you a clear idea of how you want to see your company’s mission come to life and the ability to speak confidently to it, which in turn attracts investors that share a similar vision.

What does value alignment mean for investors?

Investing in companies that align with your values means you are investing in the long-term success of a company, founding team and project you believe in. From the outset, you will have a strong understanding of the issues that matter to the company and how a founding team will approach solving those issues, giving you further confidence in your investment. 

 

By bringing an understanding of your project’s implications, and of how to steer the company’s mission around potential risks, ethics can help set a direction for the company as a whole.

 

-

Finding the perfect investment for you

Relationships between founders and investors are complex, and whilst the venture ecosystem attempts to keep founders and investors working alongside each other in harmony, investor incompatibilities are common. Ethical understanding helps you find investors by empowering you with an in-depth understanding of your company’s project and mission.

But more importantly, it helps you find investment that’s right for you — an investor who shares in your values and vision.

EI.

Investing in ethical AI with Anik Bose

Join Olivia Gambelin and Anik Bose, venture investor and general partner at BGV, in a conversation about what it means to invest in ethical tech and AI. What are the key aspects ethical investors look for in an emerging startup? How does an investor look out for global talent? Are investing and ethics two opposite poles, or can they actually go hand in hand? How can investing in ethical AI help bring the human back into the equation?

 thank you to our contributors

  • Sarah Drinkwater

    Sarah is a community builder, angel investor, fan of good entrepreneurship and generally curious person.

  • Anik Bose

    Anik has 15 years of active venture capital and corporate development experience, with particular emphasis on transaction structuring and strategic planning.

  • Alayna Kennedy

    Alayna is a data scientist, project manager, and researcher who's focused on how AI technology will affect society and policy.

  • Venkat Raghavan

Venkat is a senior executive with a proven mix of outstanding leadership, business, marketing and strategy, underpinned by deep technical skills.

  • Emmanuel Benhamou

Manny has 20+ years’ experience driving innovation in Ethical AI, Blockchain, Cryptocurrency and other emerging technology domains.

  • Anna Felländer

Anna is the founder of anch.AI, which offers an ethical AI governance platform.

previous issues

 

I: Ethics as a Service

How the leaders of tomorrow’s technology are embracing ethics today

 

II: Ethics of Smart Home Tech

If, when, and how to bring technology home

 
 

 
 

Thank you for reading. Be sure to subscribe for future issues delivered straight to your inbox.  

© 2022 Ethical Intelligence Associates, Limited