Update app.py
app.py CHANGED
@@ -133,27 +133,6 @@ def demo_inference():
     temperature = args.get('temperature', default=0.5)
     max_new_tokens = args.get('max_new_tokens', default=1000)
 
-    return {
-        'content': """
-To effectively support your demands for increased resources, you'll want to gather a combination of quantitative and qualitative evidence. Here's a list of items you might consider compiling:
-
-1. **Project backlog and pipeline:** Show the number of projects currently in the pipeline and those waiting to be started. This can help demonstrate the demand for your team's services.
-
-2. **Project completion rate:** Calculate the percentage of projects completed on time and within budget. This can help show the efficiency of your team and the potential for scaling up without significantly impacting project quality.
-
-3. **Client satisfaction data:** Collect feedback from clients, such as Net Promoter Score (NPS), survey responses, or testimonials. This can help demonstrate the value your team provides and the potential for acquiring new clients through word-of-mouth referrals.
-
-4. **User engagement metrics:** Gather data on user engagement from your landing pages and UX interfaces, such as click-through rates, conversion rates, and bounce rates. This can help show the effectiveness of your designs and the potential for improved results with a larger team.
-
-5. **Average project timeline:** Calculate the average time it takes for a project to be completed from start to finish. This can help demonstrate the need for more resources to meet increasing demand and maintain a reasonable project turnaround time.
-
-6. **Resource utilization:** Analyze the current workload distribution among team members to identify bottlenecks and areas where additional resources could improve efficiency.
-""",
-        'model_id':model_id,
-        'temperature': temperature,
-        'max_new_tokens': max_new_tokens
-    }
-
     hf_token, _ = get_credentials.get_credentials()
 
     prompt = args.get('prompt')
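For context, a minimal sketch of what demo_inference() plausibly looks like once the hardcoded return block above is removed, assuming args is Flask's request.args, get_credentials is a local helper module in the repo, and inference goes through huggingface_hub's InferenceClient. The route path, the model_id value, and the inference call itself are not part of this hunk and are assumptions.

# Hedged sketch only: the rest of app.py is not shown in this diff, so the route
# path, model_id, and the actual inference call below are assumptions.
from flask import Flask, request, jsonify
from huggingface_hub import InferenceClient

import get_credentials  # assumed local helper module from this repo

app = Flask(__name__)
model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical placeholder

@app.route("/demo_inference")
def demo_inference():
    args = request.args
    # type casts are added in this sketch for safety; the diff's own lines omit them
    temperature = args.get('temperature', default=0.5, type=float)
    max_new_tokens = args.get('max_new_tokens', default=1000, type=int)

    hf_token, _ = get_credentials.get_credentials()
    prompt = args.get('prompt')

    # With the stubbed return gone, the handler reaches the real model call.
    client = InferenceClient(model=model_id, token=hf_token)
    content = client.text_generation(
        prompt, temperature=temperature, max_new_tokens=max_new_tokens
    )
    return jsonify({
        'content': content,
        'model_id': model_id,
        'temperature': temperature,
        'max_new_tokens': max_new_tokens,
    })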