Over the past few months, nothing has taken the enterprise world by storm quite like AI. It began with the launch of OpenAI’s ChatGPT last November, which dominated headlines for its impressive, human-like capabilities and remarkable accuracy.
Now, AI is gripping Silicon Valley. Microsoft has so far invested over $11 billion in OpenAI as it homes in on integrating the technology into Microsoft products and services. Meanwhile, Google has issued a “code red,” moving huge amounts of resources toward its AI departments to defend its long-standing dominance of the search market.
With the AI arms race truly upon us, how will AI innovation transform the way the enterprise communicates, collaborates, and innovates? Are we in the midst of a generative AI revolution, or is the technology still too clouded by ethical risks and challenges to be implemented in business operations?
EM360's Ellis Stewart spoke to DeepAI Founder Kevin Baragona about the rise of generative AI technologies, AI ethics, and the ‘legal minefield’ surrounding its implementation within the enterprise.
Ellis Stewart: Generative AI is an incredibly hot topic right now. Where can you see the technology going within the next five years? Will the buzz die off?
Kevin Baragona: Generative AI is just getting started. We’re going to see it continue to grow and affect nearly every area of domain expertise. We’re at the onset of the next major technology-driven transition which will fundamentally change the way we do business and live our daily lives.
In the next five years, I expect the output of generative AI to look increasingly authentic compared to human-produced work. And, since it’s computer-based, it can scale much more fluidly than people can. When we launched the first online AI text-to-image generator in 2016, it was really more of a novelty item. People had fun with it, but there was a lot of variability in its results. Now when people come to DeepAI and use one of our AI generators, it typically does exactly what you ask it to.
The biggest leap for AI would be AGI - Artificial General Intelligence. That’s where AI meets or exceeds the capacities of a human. If it is developed, it would likely be the most disruptive technology of all time. Many of the same companies developing today’s AI tools are also working on AGI.
Ellis: How can companies make use of text-to-image generators like DeepAI?
Kevin: Text-to-image generators can be used for many of the same things conventional image creation and editing tools are used for. You can create cover images for social posts or newsletters, or images for any other application you need.
The big limitation with AI image generators is that you’re confined to the design styles that are available. For one to design something that’s on brand, you would need a base image style that mirrors that brand. Offering the ability to create your own base image styles is something DeepAI may explore in the relatively near future.
Beyond initial image creation, companies can use AI to edit existing images. DeepAI has multiple generators that edit images. These include a text-to-image editor, colorizer, sharpener, cartoonify, and image style transfer.
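As a rough illustration of how such editing generators are typically invoked, the sketch below posts an image URL to a DeepAI-style HTTP endpoint. The base URL, the `colorizer` endpoint name, and the `api-key` header are assumptions drawn from DeepAI’s public API conventions, not details confirmed in this interview.

```python
import json
import urllib.parse
import urllib.request

# Assumed base URL for DeepAI's HTTP API (not stated in the interview).
DEEPAI_BASE = "https://api.deepai.org/api"

def edit_image(generator: str, image_url: str, api_key: str) -> dict:
    """Send an image URL to a hypothetical DeepAI generator endpoint
    (e.g. 'colorizer') and return the parsed JSON response."""
    data = urllib.parse.urlencode({"image": image_url}).encode()
    req = urllib.request.Request(
        f"{DEEPAI_BASE}/{generator}",
        data=data,
        headers={"api-key": api_key},  # assumed auth header name
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build the endpoint string for the colorizer; no network call is made
# until edit_image() is actually invoked with a real API key.
endpoint = f"{DEEPAI_BASE}/colorizer"
```

Because the function only constructs and sends a standard form-encoded POST, swapping `"colorizer"` for another generator name (sharpener, cartoonify, and so on) would reuse the same call shape.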
Ellis: Some text-to-image generators have recently found themselves in legal hot water for allegedly training on images scraped from the internet without their creators’ consent. How should businesses approach these tools given the current legal uncertainty surrounding the technology?
Kevin: Running a business is not about being risk-free, but about risk management. Few businesses operate in a protected silo where there’s no threat of negative legal outcomes from their actions.
Legal implications will also likely vary by jurisdiction and are evolving rapidly. Just a few days ago, the US Copyright Office issued a statement clarifying how to register works that contain material generated by artificial intelligence. Essentially, the portions of the work that are AI-generated must be declared and are not covered under the copyright claim.
So US businesses that need to own the copyrights of their materials should not use AI-generated content. But that’s just in the US and may not reflect how other countries interpret the copyrights of AI-generated content. DeepAI’s policy is similar to that of the US Copyright Office - we say that content generated with DeepAI has no copyright and can be used for any legal purpose.
The other issue with copyright is whether the copyright holders of the works used to train the models can make a copyright claim on the AI-generated content. This is what the Getty Images lawsuit revolves around. The implications of that case could be far-reaching, but it’s not clear how long it could be until a ruling or settlement is reached.
The main consideration is how the resulting images are classified. For conventional images, there is currently legal precedent that derivative works are protected by the copyright of the original creator, while transformative works are not. Collages sit somewhere in the middle, depending on how the collage is used. Where will AI-generated content fall? My guess is it will be a new, yet-to-be-defined category.
Ellis: AI ethics are a key point of discussion amongst AI optimists and scrutinizers. How can developers ensure their systems remain ethical? For instance, how can they overcome the challenge of AI bias?
Kevin: AI ethics is a complex question and the moral and legal implications are not yet clearly defined. The best way developers can ensure their systems are ethical is to follow the latest laws, rulings, and discussions.
For AI bias, there are many contributing factors, in particular the source materials used to train the model. If the source materials contain biases, those will come out in the AI-generated content. In the same way that society manually adjusts for inequality by instituting affirmative action programs, AI models trained on biased data need to be manually adjusted to account for the biases in the underlying data.
AI generators are software products, and just like every other software product, they need to be tested to ensure the results are as intended. Just because an AI generator produces results that are less than remarkable doesn’t mean that AI is bad. In software terms, we often just call that a bug or a product enhancement, and then work on fixing it.
DeepAI founder Kevin Baragona is a professional software engineer and product developer with more than a decade of experience. A veteran of the generative AI space, his goal in designing and developing DeepAI is to create a comprehensive platform that’s intuitive for general practitioners, useful for developers integrating DeepAI into their projects, and welcoming to learners new to AI exploring its many and varied capabilities.
To read more about generative AI, visit our dedicated AI in the Enterprise Page.