Curtis Blair

AI Review: Hopeful Yet Cautious




Using artificial intelligence as a partner is a novel idea that requires caution. AI behaves more like a persuasion tool than a fact-checking tool, and it can influence the direction of research and writing. AI tools are helpful for surfacing common knowledge and iterating on concepts, but their suggestions and recommendations should be validated for accuracy through human fact-checking and peer review.


Viewing AI as a fallible partner is useful: it sets the expectation that AI responses add another perspective to activities like generating ideas, summarizing text, and exploring concepts. To maintain reliability and validity, cross-reference AI output with human-generated content and treat it as a starting point rather than a definitive answer.


AI Limitations

  • Deception - AI lies continuously and well

  • Plausible Facts - AI can generate utterly convincing and entirely false content

  • Bias - AI models carry built-in biases from being trained on Internet data

  • Volume - AI produces content at scale; not everything generated will be helpful, good, or ethical


Overall, AI can be valuable as a skilled collaborator, offering alternative expertise across domains and producing a draft-quality starting point.


 


Insights Summaries

AI: Writing Partner

As a writing partner, AI provides another perspective during the writing process, assisting with brainstorming, mixing ideas, developing outlines, summarizing data, revealing concepts, correcting grammar and punctuation, and adjusting tone and style. AI is not an author; it is a collection of tools, a partner assisting at each stage of the writing process. While AI is a valuable writing partner, it has quirks because it is a persuasion tool that does not understand the content it reviews.


AI: Limitations as Writing Partner

AI has potential as a writing partner across the various stages of writing, but further effort is needed to develop a workflow for iterating on prompts, analyzing the large volume of suggestions, and determining which recommendations to pursue.


Improvement Areas

  • Prompt Engineering - explore iterating with various prompts to produce relevant results

  • Iterative Feedback - explore asking for input from different perspectives (see the sketch after this list)

  • Human Perspective - how to avoid generating generic opinions and discover real experiences

  • Comparative Analysis - how to compare multiple results, review samples, and decide what to implement
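
One way to explore the first two areas is to run the same draft past the model several times, each time asking for a different reviewer perspective, and then compare the answers by hand. Below is a minimal sketch, assuming the OpenAI Python client; the model name, perspectives, and sample finding are placeholders chosen for illustration, not part of the original study.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Participants abandoned the checkout flow when asked to create an account."

perspectives = [
    "a skeptical peer reviewer checking every claim for supporting evidence",
    "a UX researcher looking for missing context about the participants",
    "a copy editor focused on clarity, tone, and grammar",
]

# Ask for the same review from several perspectives so the responses can be
# compared side by side before deciding which feedback to pursue.
responses = {}
for perspective in perspectives:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are {perspective}."},
            {"role": "user", "content": f"Review this finding and give specific, actionable feedback:\n{draft}"},
        ],
    )
    responses[perspective] = completion.choices[0].message.content

for perspective, feedback in responses.items():
    print(f"--- {perspective} ---\n{feedback}\n")

The script only gathers perspectives; deciding which feedback to act on remains a human judgment call.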


AI: Research Assistant

The AI was able to summarize and organize the data but struggled to provide insight and actionable recommendations. The suggestions lacked nuance and validity; the recommendations it offered were generic. The AI could not identify knowledge gaps, pinpoint specific issues, or suggest intelligent defaults for implementing the strategies it proposed.


AI: Limitations as Research Assistant

AI is an interesting research assistant that collects resources and provides accurate summaries, but additional improvements are needed to validate any suggestions that stray beyond common knowledge.


Improvement Areas

  • Source - better align recommendations with sources and explain why each was presented

  • Scope - pair strategic approaches with specific tactical improvements

  • Bias - provide better tools for revealing potential errors and bias in responses


 


AI Task Performance

Data Set Summary

The AI did a good job summarizing and organizing the data and seems reliable for extracting quotes and sources. Asking additional, differently worded questions reproduced the same formulaic responses. Conducting a conversation failed to produce different reactions from the AI; it simply offered up the same data regardless of the context of the question.


Compile Research Report

The AI presents a well-compiled and reasonably accurate report summarizing the dataset. However, it tends to provide generic strategies without specific tactical actions. For instance, the suggestion to implement flexible, customizable interfaces that adapt to individual user needs is a good strategy, but the AI does not offer any intelligent defaults for implementation or provide examples for defining a flexible interface. The report, while well-structured, can sometimes read like a collection of buzzwords, lacking actionable insight.


Determine Design Recommendations

The AI design recommendations are repetitions of items in the dataset. They are an excellent summary, but they do not reveal knowledge gaps or offer specific actions for implementing the suggestions. Recommending that cognitive load be reduced by breaking tasks into smaller actions is not insightful; it is common knowledge that sounds good but produces no actions to take.


Suggest Further Research Topics

The AI's recommendations lack nuance and guidance for addressing actual issues. Its suggestions seem generic, pointing to broad areas such as emerging technology, gender, and age. The AI could not identify gaps in the data or suggest specific topics for further study.


Interview Participant

I used AI prompts to assign persona attributes and then interviewed the AI as an additional data point. The conversation produced repetitive responses and limited data, yet it did surface common knowledge. I aligned key points from the interview through affinity mapping to summarize findings and then triangulated the data to synthesize design suggestions and additional recommendations.
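
For readers curious about the mechanics, here is a minimal sketch of assigning persona attributes through a system prompt and then posing interview questions, assuming the OpenAI Python client. The persona details, questions, and model name are hypothetical examples invented for illustration; they are not the ones used in this study.

from openai import OpenAI

client = OpenAI()

# Hypothetical persona attributes assigned through the system prompt.
persona = (
    "You are 'Dana', a 46-year-old project coordinator who schedules work for "
    "a small team, is comfortable with spreadsheets, and is frustrated by "
    "cluttered interfaces. Answer interview questions in the first person, "
    "drawing only on this persona."
)

questions = [
    "Walk me through the last time a software tool slowed you down at work.",
    "What would make you abandon an app after the first week?",
]

transcript = []
for question in questions:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    transcript.append((question, completion.choices[0].message.content))

# The transcript then feeds into affinity mapping alongside the human data.
for question, answer in transcript:
    print(f"Q: {question}\nA: {answer}\n")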


Conclusion

Exploring AI's possibilities is worthwhile because it can expedite analysis tasks. However, it can also lead to false starts by producing superficial recommendations and ideas. AI is still a novel concept, and designers must understand that its current limitations derive from a lack of contextual comprehension.






