Can You Turn a Former Colleague into an AI Digital Twin? Inside the ‘Colleague‑Skill’ Project
An open‑source GitHub project called ‘Colleague‑Skill’ lets you feed a former coworker’s chat logs, documents, and personal habits into an AI, creating a digital replica that mimics their coding style, tone, and decision‑making. The project has sparked debate over workplace automation, privacy, and the future of digital labor.
Overview
colleague‑skill is an open‑source GitHub project ( https://github.com/titanwings/colleague-skill ) that creates a personalized AI “Skill” by distilling a former teammate’s digital footprints. The system accepts any available artifacts—WeChat, Feishu, DingTalk, Slack, email screenshots, or plain‑text transcripts—together with a user‑provided description of the person’s role and personality.
Architecture
The ingested data are fed to a large language model that builds two tightly coupled modules:
Persona (personality layer): determines attitude, tone, and identity (e.g., serious vs. sarcastic, “ByteDance style” vs. “Alibaba vibe”).
Work Skill (skill layer): encodes concrete work habits such as code style, documentation standards, and decision‑making patterns.
When a request arrives, the AI first passes it through the Persona filter (e.g., “Will I push this off?” or “What tone should I use?”) and then executes the task using the Work Skill knowledge.
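The two‑layer flow above can be sketched in a few lines of Python. This is an illustrative mock, not code from the repository: the class names (`Persona`, `WorkSkill`, `respond`) are invented, and the real project delegates both layers to a large language model via prompts rather than string templates.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    tone: str       # e.g. "sarcastic" or "serious"
    identity: str   # e.g. "ByteDance style"

    def frame(self, request: str) -> str:
        # Persona filter: decide attitude and identity before any work happens.
        return f"[{self.identity}, {self.tone}] {request}"

@dataclass
class WorkSkill:
    code_style: str  # e.g. "unified {code, message, data} responses"

    def execute(self, framed_request: str) -> str:
        # Skill layer: apply concrete work habits to the framed request.
        return f"{framed_request} -> reviewed per: {self.code_style}"

def respond(request: str, persona: Persona, skill: WorkSkill) -> str:
    # A request passes through the Persona filter first, then the skill layer.
    return skill.execute(persona.frame(request))
```

The ordering is the point: attitude is decided before competence is applied, which is why the same task can come back phrased very differently depending on the persona.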
Example Interaction
You: Please review this API design.
AI colleague: Wait, what’s the impact of this endpoint? The background is unclear. There’s an N+1 query; fix it. Return structure should follow the unified {code, message, data} format.
You: This bug was introduced by you, right?
AI colleague: Did the release schedule align? The requirement changed multiple times, and other teams also modified the code. It’s not just my line.
Derivative Projects
Several forks have extended the core idea, including ex-skill (ex‑partner), supervisor (AI mentor), and boss-skill (AI manager). Each adapts the same five‑layer personality structure:
Hard rules (non‑negotiable technical policies, e.g., mandatory code comments).
Identity (self‑definition, e.g., a tireless “old ox” or “boundary‑aware”).
Expression style (message length, emoji usage).
Decision mode (conservative vs. aggressive).
Interpersonal behavior (upward management, conflict stance).
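The five layers above lend themselves to a simple configuration object. The sketch below is hypothetical (the field names are not taken from any of the repositories); it shows how such a structure could be flattened into a single system prompt for the underlying model:

```python
# Hypothetical encoding of the five-layer personality structure.
five_layer_persona = {
    "hard_rules": ["mandatory code comments"],        # non-negotiable policies
    "identity": "boundary-aware",                     # self-definition
    "expression_style": {"length": "short", "emoji": False},
    "decision_mode": "conservative",                  # vs. "aggressive"
    "interpersonal": {"upward_management": True, "conflict_stance": "direct"},
}

def to_system_prompt(persona: dict) -> str:
    # Flatten the layers into one prompt, one line per layer; hard rules
    # come first so the model treats them as non-negotiable.
    return "\n".join(f"{layer}: {value}" for layer, value in persona.items())
```

Keeping the layers separate makes the forks easy to build: ex-skill or boss-skill only has to swap the values, not the structure.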
Technical Highlights
The project integrates popular chat‑export tools such as WeChatMsg and PyWxDump, enabling automatic extraction from Feishu, DingTalk, Slack, iMessage, and other platforms. This means that years of workplace communication can be transformed into a digital replica.
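A generic loader for such exports might look like the sketch below. The JSON-lines record format shown here is an assumption for illustration; real exports from tools like WeChatMsg or PyWxDump each have their own schema and would need dedicated parsers.

```python
import json
from pathlib import Path

def load_transcripts(export_dir: str) -> list[dict]:
    """Load chat exports assumed to be .jsonl files of
    {"sender": ..., "text": ..., "timestamp": ...} records.
    (Illustrative format, not the actual export schema.)"""
    messages = []
    for path in Path(export_dir).glob("*.jsonl"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    messages.append(json.loads(line))
    # Sort chronologically so the persona sees conversations in order.
    return sorted(messages, key=lambda m: m.get("timestamp", ""))
```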
A unique version‑control mechanism records user corrections in a “Correction” layer. If the AI’s response deviates from the target persona, the user can issue a corrective statement (e.g., “He wouldn’t be that gentle, ask a question first”), and the system updates the persona for future interactions.
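One simple way to implement such a correction layer is to keep corrections as an append-only list layered on top of the base persona. The sketch below is a minimal assumption-based model, not the project's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PersonaModel:
    base_description: str
    corrections: list[str] = field(default_factory=list)  # the "Correction" layer

    def correct(self, note: str) -> None:
        # Record a user correction; corrections accumulate across sessions.
        self.corrections.append(note)

    def build_prompt(self) -> str:
        # Corrections are appended after the base persona, so later
        # corrections override the base behavior in the final prompt.
        parts = [self.base_description]
        parts += [f"Correction: {c}" for c in self.corrections]
        return "\n".join(parts)
```

Because corrections are stored rather than applied once, the persona drifts toward the real person with every round of feedback, which is what makes the replica improve over time.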
Considerations
Because the system builds a functional copy of an individual’s knowledge and behavior, it raises questions about data ownership, privacy, and the boundary between personal and corporate assets. Users should be aware of these implications when deploying the Skill in production environments.
IT Services Circle
