My Year with Grok: A Visual Creator's Annual AI Usage Report

The data analysis code used in this article is open source. Repository:
https://github.com/twoken404/grok_and_deepseek_2025_analyze/

Report generated: December 31, 2025

📊 Year in Review

This year, my interaction with Grok has been dominated by visual creativity. Here’s a snapshot:

  • Total Conversations: 65
  • Total Messages: 844
  • My Messages: 415
  • Grok’s Replies: 429
  • Message Ratio (me : Grok): 415 : 429
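
For the curious, these top-line numbers come from a single pass over the exported JSON. Here is the counting logic, condensed from the full analyzer script at the end of this post:

def count_messages(conversations):
    """Tally total, human, and assistant messages across all conversations."""
    total = human = assistant = 0
    for conv_data in conversations:
        for resp in conv_data.get('responses', []):
            total += 1
            sender = resp.get('response', {}).get('sender', '').lower()
            if 'human' in sender:
                human += 1
            elif 'assistant' in sender:
                assistant += 1
    return total, human, assistant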

🎨 A Picture is Worth a Thousand Prompts

  • Images Generated: 366
  • Image Generation Rate: 43.4% of all messages
  • Conversations with Images: 58
  • Average Prompt Length: 495 characters

I wasn’t just asking for images—I was crafting them. Nearly half of all my messages were dedicated to visual generation.
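
How does the analyzer know a message was an image request? Each response in the Grok export carries a query_type field, and image generations are marked 'imagine'. A condensed version of that check, again from the full script below:

def count_image_generations(conversations):
    """Count responses flagged as image generations in the Grok export."""
    return sum(
        1
        for conv_data in conversations
        for resp in conv_data.get('responses', [])
        if resp.get('response', {}).get('query_type') == 'imagine'
    )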

🤖 Models & Modes: How I Used Grok

  • grok-4-1-non-thinking-w-tool: 514 times (60.9%)
  • grok-3: 111 times (13.2%)
  • grok-4-auto: 109 times (12.9%)
  • grok-4: 108 times (12.8%)

Preferred Modes

  • grok-4-1 mode: 483 times
  • auto mode: 198 times
  • expert mode: 97 times

The numbers tell a clear story: I leaned heavily on grok-4-1 and its image generation capabilities, reaching for auto or expert mode when I wanted to maximize quality.
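
Models and modes are tallied with two Counters over the same response records. One note on the numbers: the script filters the 'default' mode out of the printed list, which is why the mode counts above don't add up to 844. Condensed from the script below:

from collections import Counter

def tally_models_and_modes(conversations):
    """Count how often each model and request mode appears in the export."""
    model_usage, mode_usage = Counter(), Counter()
    for conv_data in conversations:
        for resp in conv_data.get('responses', []):
            response_data = resp.get('response', {})
            model_usage[response_data.get('model', 'unknown')] += 1
            meta = response_data.get('metadata', {}).get('request_metadata', {})
            mode_usage[meta.get('mode', 'default')] += 1
    return model_usage, mode_usage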

⏰ When Did I Create?

By Time of Day

  • Evening (6–10 PM): 383 messages (45.4%)
  • Afternoon (2–6 PM): 165 (19.5%)
  • Late Night (10 PM–12 AM): 106 (12.6%)
  • Morning (9 AM–12 PM): 103 (12.2%)
  • Lunch (12–2 PM): 68 (8.1%)
  • Early Morning (0–6 AM): 15 (1.8%)
  • Dawn (6–9 AM): 4 (0.5%)

I’m definitely a night owl. The evening hours were my most productive creative window.
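
Each message's hour is bucketed into one of seven time slots. Here is that mapping as a standalone function; note that I renamed a couple of buckets for this post, so the labels in the full script differ slightly:

def time_slot(hour):
    """Map an hour of day (0-23) to the report's time-of-day buckets."""
    if hour < 6:
        return 'Early Morning (0-6 AM)'
    if hour < 9:
        return 'Dawn (6-9 AM)'
    if hour < 12:
        return 'Morning (9 AM-12 PM)'
    if hour < 14:
        return 'Lunch (12-2 PM)'
    if hour < 18:
        return 'Afternoon (2-6 PM)'
    if hour < 22:
        return 'Evening (6-10 PM)'
    return 'Late Night (10 PM-12 AM)'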

By Day of the Week

  • Monday: 187 messages
  • Sunday: 138
  • Saturday: 131
  • Thursday: 117
  • Tuesday: 103
  • Friday: 90
  • Wednesday: 78

Mondays were surprisingly active—perhaps starting the week with a burst of creativity.

💬 What Were We Talking About?

  • Image Generation: 58 conversations (89.2%)
  • Technical Discussions: 3 (4.6%)
  • Knowledge Q&A: 2 (3.1%)
  • Creative Writing: 1 (1.5%)
  • Casual Chat: 1 (1.5%)

This was a visual year. Almost 9 out of 10 conversations involved generating images.
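
These labels come from a deliberately rough heuristic: any conversation that contains an image generation is labeled as such, and the rest are bucketed by simple keyword matching (the script calls my "Casual Chat" bucket "General Conversation"). Condensed:

def classify_conversation(responses):
    """Assign a rough topic label to one conversation's responses."""
    if any(r.get('response', {}).get('query_type') == 'imagine' for r in responses):
        return 'Image Generation'
    text = ' '.join(r.get('response', {}).get('message', '').lower() for r in responses)
    if any(k in text for k in ('code', 'programming', 'python')):
        return 'Technical Discussion'
    if any(k in text for k in ('explain', 'what', 'why', 'how')):
        return 'Knowledge Q&A'
    if any(k in text for k in ('creative', 'story', 'poem')):
        return 'Creative Writing'
    return 'General Conversation'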

🔑 The Language of My Imagination

Here are the top 20 words that shaped my visual prompts:

  1. warm (350 times)
  2. cinematic (241)
  3. chinese (191)
  4. light (161)
  5. her (161)
  6. ratio (157)
  7. aspect (155)
  8. lighting (146)
  9. style (143)
  10. film (138)
  11. from (136)
  12. color (134)
  13. shot (129)
  14. soft (125)
  15. woman (121)
  16. man (119)
  17. atmosphere (113)
  18. golden (111)
  19. his (104)
  20. field (100)

The vocabulary reveals a focus on mood, composition, and cinematic quality—especially with a Chinese aesthetic.
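
A caveat on methodology: the keyword list is a plain frequency count of words with three or more letters, filtered against only a minimal stop-word list. That is why function words like "her", "his", and "from" sneak into the top 20. The extraction logic, condensed from the script below:

import re
from collections import Counter

def top_keywords(prompts, top_n=20):
    """Most frequent 3+ letter words across prompts, minus basic stop words."""
    words = re.findall(r'\b[a-z]{3,}\b', ' '.join(prompts).lower())
    stop_words = {'the', 'and', 'with', 'for', 'this', 'that',
                  'are', 'was', 'were', 'has', 'have', 'had'}
    return Counter(w for w in words if w not in stop_words).most_common(top_n)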

🏷️ Conversation Titles: What Caught My Eye

  • cinematic: 25 times
  • scene: 12
  • image: 10
  • generation: 9
  • chinese: 9
  • hong / kong: 7 each
  • romance: 3
  • tension / tense: 3 each

Even my conversation titles leaned into filmic, location-based, and emotionally charged themes.

💡 Year-End Reflections

🎨 I Am a Visual Creator

This year, Grok was my co-pilot in visual storytelling. I used it not just as a tool, but as a creative partner—crafting detailed, atmospheric scenes with a strong cinematic sensibility.

⏰ My Creative Hours Are in the Evening

Most of my best prompts were written after 6 PM. There’s something about the quiet of the evening that unlocks my imagination.

📝 My Prompting Style

Looking back at my prompts, I notice a few patterns:

  • Detailed scene-setting—I don’t just describe; I build worlds.
  • Visual precision—lighting, ratio, color, and composition matter.
  • Emotional undercurrents—whether romance, tension, or nostalgia.
  • A preference for expert mode when quality couldn’t be compromised.

🧠 What’s Next?

In the coming year, I’d like to:

  • Experiment more with multi-prompt narratives.
  • Blend image generation with longer-form storytelling.
  • Use Grok for more technical and knowledge-based projects—not just visuals.

This report was generated with a custom Python analyzer I wrote for my Grok data export. The full script is reproduced below, and the code is also available in the repository linked at the top of this post.
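
Running it against your own export takes two lines. The filename is whatever your Grok data export is called; mine was prod-grok-backend.json:

# Assumes GrokDataAnalyzer from the script below is in scope.
analyzer = GrokDataAnalyzer('prod-grok-backend.json')
analyzer.generate_report('grok_annual_report.md')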

Here’s to another year of creating, prompting, and imagining—with AI as our canvas. 🎨🤖



🧾 Appendix: The Full Analyzer Script

import json
from datetime import datetime
from collections import Counter, defaultdict
import re

class GrokDataAnalyzer:
    def __init__(self, json_file_path):
        """Initialize analyzer with Grok data JSON file"""
        with open(json_file_path, 'r', encoding='utf-8') as f:
            self.data = json.load(f)

        # Extract all conversations
        self.conversations = self.data.get('conversations', [])

    def parse_timestamp(self, timestamp_data):
        """Parse various timestamp formats from Grok data"""
        if isinstance(timestamp_data, dict) and '$date' in timestamp_data:
            # Handle MongoDB extended JSON date format
            date_data = timestamp_data['$date']
            if isinstance(date_data, dict) and '$numberLong' in date_data:
                timestamp_ms = int(date_data['$numberLong'])
                return datetime.fromtimestamp(timestamp_ms / 1000)
            elif isinstance(date_data, str):
                return datetime.fromisoformat(date_data.replace('Z', '+00:00'))
        elif isinstance(timestamp_data, str):
            return datetime.fromisoformat(timestamp_data.replace('Z', '+00:00'))
        return None

    def analyze_basic_stats(self):
        """Calculate basic usage statistics"""
        total_conversations = len(self.conversations)
        total_messages = 0
        human_messages = 0
        assistant_messages = 0
        image_generations = 0

        # Model usage tracking
        model_usage = Counter()
        mode_usage = Counter()

        # Time-based statistics
        date_stats = defaultdict(lambda: {'messages': 0, 'images': 0})
        hour_stats = defaultdict(int)

        for conv_data in self.conversations:
            conversation = conv_data.get('conversation', {})
            responses = conv_data.get('responses', [])

            # Get conversation creation time
            create_time_str = conversation.get('create_time')
            if create_time_str:
                conv_date = self.parse_timestamp(create_time_str)
                if conv_date:
                    date_key = conv_date.strftime('%Y-%m-%d')

            for resp in responses:
                response_data = resp.get('response', {})
                total_messages += 1

                # Sender statistics
                sender = response_data.get('sender', '').lower()
                if 'human' in sender:
                    human_messages += 1
                elif 'assistant' in sender:
                    assistant_messages += 1

                # Model statistics
                model = response_data.get('model', 'unknown')
                model_usage[model] += 1

                # Mode statistics
                metadata = response_data.get('metadata', {})
                request_meta = metadata.get('request_metadata', {})
                mode = request_meta.get('mode', 'default')
                mode_usage[mode] += 1

                # Image generation tracking
                if response_data.get('query_type') == 'imagine':
                    image_generations += 1

                # Time analysis
                create_time = self.parse_timestamp(response_data.get('create_time'))
                if create_time:
                    # Daily statistics
                    date_key = create_time.strftime('%Y-%m-%d')
                    date_stats[date_key]['messages'] += 1
                    if response_data.get('query_type') == 'imagine':
                        date_stats[date_key]['images'] += 1

                    # Hourly statistics
                    hour_key = create_time.hour
                    hour_stats[hour_key] += 1

        return {
            'total_conversations': total_conversations,
            'total_messages': total_messages,
            'human_messages': human_messages,
            'assistant_messages': assistant_messages,
            'image_generations': image_generations,
            'image_generation_rate': (image_generations / total_messages * 100) if total_messages > 0 else 0,
            'model_usage': dict(model_usage),
            'mode_usage': dict(mode_usage),
            'date_stats': dict(date_stats),
            'hour_stats': dict(hour_stats)
        }

    def analyze_image_generations(self):
        """Analyze image generation patterns and prompts"""
        image_prompts = []
        image_count_by_conv = defaultdict(int)

        for conv_data in self.conversations:
            conv_id = conv_data.get('conversation', {}).get('id', 'unknown')
            responses = conv_data.get('responses', [])

            for resp in responses:
                response_data = resp.get('response', {})
                if response_data.get('query_type') == 'imagine':
                    query = response_data.get('query', '')
                    image_prompts.append(query)
                    image_count_by_conv[conv_id] += 1

        # Analyze prompt characteristics
        prompt_features = {
            'total_prompts': len(image_prompts),
            'avg_prompt_length': sum(len(p) for p in image_prompts) / len(image_prompts) if image_prompts else 0,
            'common_keywords': self.extract_keywords(image_prompts),
            'conversations_with_images': len(image_count_by_conv),
            'images_per_conversation': dict(Counter(image_count_by_conv.values()))
        }

        return prompt_features

    def extract_keywords(self, prompts, top_n=20):
        """Extract most frequent keywords from image prompts"""
        all_text = ' '.join(prompts).lower()

        # Extract English words
        words = re.findall(r'\b[a-z]{3,}\b', all_text)

        # Filter common stop words
        stop_words = {'the', 'and', 'with', 'for', 'this', 'that', 'are', 'was', 'were', 'has', 'have', 'had'}
        filtered_words = [w for w in words if w not in stop_words]

        # Count frequencies
        word_counts = Counter(filtered_words)

        return dict(word_counts.most_common(top_n))

    def analyze_time_patterns(self):
        """Analyze temporal usage patterns"""
        weekday_names = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']

        # Initialize counters
        weekday_stats = {i: 0 for i in range(7)}
        month_stats = {i: 0 for i in range(1, 13)}
        time_slot_stats = {
            'Late Night (0-6 AM)': 0,
            'Morning (6-9 AM)': 0,
            'Late Morning (9-12 PM)': 0,
            'Lunch (12-2 PM)': 0,
            'Afternoon (2-6 PM)': 0,
            'Evening (6-10 PM)': 0,
            'Night (10 PM-12 AM)': 0
        }

        for conv_data in self.conversations:
            responses = conv_data.get('responses', [])
            for resp in responses:
                response_data = resp.get('response', {})
                create_time = self.parse_timestamp(response_data.get('create_time'))

                if create_time:
                    # Weekday statistics
                    weekday = create_time.weekday()  # 0=Monday, 6=Sunday
                    weekday_stats[weekday] += 1

                    # Month statistics
                    month = create_time.month
                    month_stats[month] += 1

                    # Time slot statistics
                    hour = create_time.hour
                    if 0 <= hour < 6:
                        time_slot_stats['Late Night (0-6 AM)'] += 1
                    elif 6 <= hour < 9:
                        time_slot_stats['Morning (6-9 AM)'] += 1
                    elif 9 <= hour < 12:
                        time_slot_stats['Late Morning (9-12 PM)'] += 1
                    elif 12 <= hour < 14:
                        time_slot_stats['Lunch (12-2 PM)'] += 1
                    elif 14 <= hour < 18:
                        time_slot_stats['Afternoon (2-6 PM)'] += 1
                    elif 18 <= hour < 22:
                        time_slot_stats['Evening (6-10 PM)'] += 1
                    else:
                        time_slot_stats['Night (10 PM-12 AM)'] += 1

        # Convert weekdays to names
        weekday_stats_named = {weekday_names[i]: weekday_stats[i] for i in range(7)}

        return {
            'weekday_stats': weekday_stats_named,
            'month_stats': month_stats,
            'time_slot_stats': time_slot_stats
        }

    def analyze_conversation_content(self):
        """Analyze conversation topics and content patterns"""
        conversation_types = Counter()
        title_keywords = []

        for conv_data in self.conversations:
            conversation = conv_data.get('conversation', {})
            responses = conv_data.get('responses', [])

            title = conversation.get('title', '').lower()
            title_keywords.append(title)

            # Determine conversation type
            has_images = any(
                resp.get('response', {}).get('query_type') == 'imagine'
                for resp in responses
            )

            if has_images:
                conversation_types['Image Generation'] += 1
            else:
                # Check for specific keywords
                all_text = ' '.join([
                    resp.get('response', {}).get('message', '').lower()
                    for resp in responses
                ])

                if any(keyword in all_text for keyword in ['code', 'programming', 'python']):
                    conversation_types['Technical Discussion'] += 1
                elif any(keyword in all_text for keyword in ['explain', 'what', 'why', 'how']):
                    conversation_types['Knowledge Q&A'] += 1
                elif any(keyword in all_text for keyword in ['creative', 'story', 'poem']):
                    conversation_types['Creative Writing'] += 1
                else:
                    conversation_types['General Conversation'] += 1

        # Extract title keywords
        all_titles = ' '.join(title_keywords)
        title_words = re.findall(r'\b[a-z]{3,}\b', all_titles.lower())
        title_word_counts = Counter(title_words)

        return {
            'conversation_types': dict(conversation_types),
            'common_title_words': dict(title_word_counts.most_common(10))
        }

    def generate_report(self, output_file='grok_annual_report.md'):
        """Generate comprehensive annual usage report"""
        basic_stats = self.analyze_basic_stats()
        image_analysis = self.analyze_image_generations()
        time_patterns = self.analyze_time_patterns()
        content_analysis = self.analyze_conversation_content()

        with open(output_file, 'w', encoding='utf-8') as f:
            # Title
            f.write("# My Year with Grok: Annual Usage Report\n\n")
            f.write(f"*Report generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}*\n\n")

            # 1. Overall Statistics
            f.write("## 📊 Year in Review\n\n")
            f.write(f"- **Total Conversations**: {basic_stats['total_conversations']}\n")
            f.write(f"- **Total Messages**: {basic_stats['total_messages']}\n")
            f.write(f"- **My Messages**: {basic_stats['human_messages']}\n")
            f.write(f"- **Grok's Replies**: {basic_stats['assistant_messages']}\n")
            f.write(f"- **Message Ratio**: Me:{basic_stats['human_messages']} : {basic_stats['assistant_messages']} AI\n\n")

            # 2. Image Generation Statistics
            f.write("## 🎨 Image Generation Breakdown\n\n")
            f.write(f"- **Images Generated**: {basic_stats['image_generations']}\n")
            f.write(f"- **Image Generation Rate**: {basic_stats['image_generation_rate']:.1f}% of all messages\n")
            f.write(f"- **Conversations with Images**: {image_analysis['conversations_with_images']}\n")
            f.write(f"- **Average Prompt Length**: {image_analysis['avg_prompt_length']:.0f} characters\n\n")

            # 3. Model Usage
            f.write("## 🤖 Models & Modes\n\n")
            for model, count in basic_stats['model_usage'].items():
                f.write(f"- **{model}**: {count} times ({count/basic_stats['total_messages']*100:.1f}%)\n")

            if basic_stats['mode_usage']:
                f.write("\n### Preferred Modes\n")
                for mode, count in basic_stats['mode_usage'].items():
                    if mode != 'default':
                        f.write(f"- **{mode} mode**: {count} times\n")

            # 4. Time Patterns
            f.write("\n## ⏰ When Did I Use Grok?\n\n")

            # Time slots
            f.write("### By Time of Day\n")
            total_msgs = sum(time_patterns['time_slot_stats'].values())
            for slot, count in time_patterns['time_slot_stats'].items():
                if count > 0:
                    percentage = count / total_msgs * 100
                    bar = "█" * int(percentage / 2)
                    f.write(f"- {slot}: {bar} {count} ({percentage:.1f}%)\n")

            # Weekday distribution
            f.write("\n### By Day of the Week\n")
            for day, count in time_patterns['weekday_stats'].items():
                f.write(f"- {day}: {count} messages\n")

            # 5. Conversation Types
            f.write("\n## 💬 Conversation Topics\n\n")
            total_conv = sum(content_analysis['conversation_types'].values())
            for conv_type, count in content_analysis['conversation_types'].items():
                percentage = count / total_conv * 100 if total_conv > 0 else 0
                f.write(f"- **{conv_type}**: {count} times ({percentage:.1f}%)\n")

            # 6. Image Prompt Keywords
            if image_analysis['common_keywords']:
                f.write("\n## 🔑 Top Image Prompt Keywords\n\n")
                for i, (keyword, count) in enumerate(image_analysis['common_keywords'].items(), 1):
                    f.write(f"{i}. **{keyword}**: {count} times\n")

            # 7. Conversation Title Keywords
            if content_analysis['common_title_words']:
                f.write("\n## 🏷️ Conversation Title Keywords\n\n")
                for word, count in content_analysis['common_title_words'].items():
                    f.write(f"- {word}: {count} times\n")

            # 8. Usage Insights
            f.write("\n## 💡 Year-End Reflections\n\n")

            # Determine primary usage pattern
            image_ratio = basic_stats['image_generation_rate']
            if image_ratio > 30:
                f.write("### 🎨 Visual Creator Profile\n")
                f.write("- Primarily used Grok for image generation\n")
                f.write("- Detailed, artistic prompts\n")
                f.write("- Strong visual storytelling focus\n")
            elif 'Image Generation' in content_analysis['conversation_types']:
                f.write("### 🎭 Mixed Usage Profile\n")
                f.write("- Balanced between images and text conversations\n")
                f.write("- Creative expression with practical queries\n")
                f.write("- Diverse use cases\n")
            else:
                f.write("### 💬 Text Conversation Profile\n")
                f.write("- Primarily text-based interactions\n")
                f.write("- Focus on knowledge and problem-solving\n")
                f.write("- Rich conversational content\n")

            # Time pattern insight
            top_time_slot = max(time_patterns['time_slot_stats'].items(), key=lambda x: x[1])[0]
            f.write(f"\n### ⏰ Peak Usage Time: {top_time_slot}\n")

            # Sample-based observations
            f.write("\n### 📝 My Prompting Style\n")
            f.write("Based on my prompts, I noticed:\n")
            f.write("- Detailed scene descriptions and world-building\n")
            f.write("- Strong focus on visual details (lighting, composition)\n")
            f.write("- Preference for specific moods and atmospheres\n")
            f.write("- Frequent use of expert mode for quality output\n")

        print(f"Report generated: {output_file}")
        return output_file

# Main execution
if __name__ == "__main__":
    try:
        print("Analyzing Grok usage data...")
        analyzer = GrokDataAnalyzer('prod-grok-backend.json')  # Replace with actual filename

        # Generate report
        report_file = analyzer.generate_report()

        print("\n✅ Analysis complete!")
        print(f"📄 Report file: {report_file}")

        # Display summary statistics
        stats = analyzer.analyze_basic_stats()
        print("\n📊 Summary Statistics:")
        print(f"  Total conversations: {stats['total_conversations']}")
        print(f"  Total messages: {stats['total_messages']}")
        print(f"  Image generations: {stats['image_generations']}")
        print(f"  Most used model: {max(stats['model_usage'].items(), key=lambda x: x[1])[0]}")

    except FileNotFoundError:
        print("❌ Error: Data file not found")
        print("Please ensure 'prod-grok-backend.json' exists in the current directory")
    except json.JSONDecodeError:
        print("❌ Error: Invalid data file format")
    except Exception as e:
        print(f"❌ Error during analysis: {e}")
        import traceback
        traceback.print_exc()