text (string, lengths 20 to 57.3k) | labels (class label, 4 classes)
---|---
Title: Custom metric for LinearRegression.gridsearch
Body: Hi!
I would like to pass this custom metric to gridsearch:
```python
import numpy as np

def asymmetric_custom_metric(y_true, y_pred, penalization_factor=5):
    """
    Custom loss function that penalizes predictions below the true value more
    than predictions above it.
    Parameters:
        y_true (ndarray): Array of true values.
        y_pred (ndarray): Array of predicted values.
        penalization_factor (float): Extra weight applied to under-predictions.
    Returns:
        float: Custom loss value.
    """
    # Calculate the difference between predicted and true values
    diff = (y_pred - y_true).astype(float)
    # Penalize under-predictions (y_pred < y_true) by the extra factor
    loss = np.where(diff < 0, np.square(y_true - y_pred) * (1 + penalization_factor), np.square(y_true - y_pred))
    # Calculate the average loss
    avg_loss = np.mean(loss)
    return avg_loss
```
But gridsearch only accepts metrics coming directly from darts.metrics.
Is there a wrapper to transform sklearn or custom metrics into a darts metric?
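Something like the following is what I'm imagining (just a sketch, assuming darts metrics receive two `TimeSeries` objects whose `.values()` return numpy arrays; `asymmetric_custom_metric` is the function above):
```python
# Sketch of a wrapper adapting my custom metric to the darts metric signature.
def asymmetric_custom_metric_darts(actual_series, pred_series):
    y_true = actual_series.values().flatten()
    y_pred = pred_series.values().flatten()
    return asymmetric_custom_metric(y_true, y_pred)
```
Could gridsearch accept something like this directly?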
Thanks!
Brian.
| 1medium
|
Title: KeyError: 'LLVMPY_AddSymbol'
Body: Hello, I ran into a problem when using the numba library. When running any code that uses this library, the following error occurs:
```
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 141, in __getattr__
return self._fntab[name]
~~~~~~~~~~~^^^^^^
KeyError: 'LLVMPY_AddSymbol'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 122, in _load_lib
self._lib_handle = ctypes.CDLL(str(lib_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\ctypes\__init__.py", line 379, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: Could not find module 'C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\llvmlite.dll' (or one of its dependencies). Try using the full path with constructor syntax.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\workp\test\main.py", line 3, in <module>
from numba.core import types
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\numba\__init__.py", line 73, in <module>
from numba.core import config
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\numba\core\config.py", line 17, in <module>
import llvmlite.binding as ll
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\__init__.py", line 4, in <module>
from .dylib import *
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\dylib.py", line 36, in <module>
ffi.lib.LLVMPY_AddSymbol.argtypes = [
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 144, in __getattr__
cfn = getattr(self._lib, name)
^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 136, in _lib
self._load_lib()
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 130, in _load_lib
raise OSError("Could not find/load shared object file") from e
OSError: Could not find/load shared object file
```
I run the code through PyCharm 2024.2.1 on Windows 11 Pro. I installed this library via pip (versions 25.0.1, 23.1, 23.2). I tried installing the entire package, and the 3 packages separately, checking the documentation.
https://numba.readthedocs.io/en/stable/user/installing.html#numba-support-info
I ran several versions of these packages, from the latest versions down to versions supported on Python 3.9. I tried to solve the problem on my own by reading various forums, checking the integrity and availability of the files, and running the code in different ways, but none of this helped. I really hope for your help.
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [ ] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
| 2hard
|
Title: Duplicate Values when use relationship type one-to-many in a JoinConfig with get_multi_joined
Body: **Describe the bug or question**
I'm trying to get data from tables where a user (table 1) can be a part of a company (table 2) and have multiple posts (table 3) on the company's forum. using get_multi_joined and joins_config, I want to get each user's company info and a list (won't be more than 5 items) of their posts on the forum but the posts are returning duplicate values.
**To Reproduce**
```python
# Your code here
crud_user.get_multi_joined(
    db=db,
    is_deleted=False,
    schema_to_select=UserModelCompPostsRead,
    nest_joins=True,
    joins_config=[
        JoinConfig(
            model=CompanyInfo,
            join_on=User.company_id == CompanyInfo.id,
            join_prefix="company",
            schema_to_select=CompanyInfoRead,
        ),
        JoinConfig(
            model=ForumPosts,
            join_on=User.id == ForumPosts.user_id,
            join_prefix="forum_posts",
            schema_to_select=ForumPosts,
            relationship_type="one-to-many",
        ),
    ],
)
```
**Description**
I'm receiving duplicates in the list of posts; each post should appear in the list only once when it meets the join_on requirement.
expectation:
```
"data": [
{
"name": "John Kin",
"id": 2,
"company_id": 1,
"company": {
"name": "Company",
"created_at": "2024-07-11T18:58:41.460483Z"
},
"forum_posts": [
{
"title": "Money Talks",
"user_id": 2,
"id": 41,
},
{
"title": "Green Goblin vs Spiderman",
"user_id": 2,
"id": 42,
},
]
},
...
# other users
]
```
actual output:
```
"data": [
{
"name": "John Kin",
"id": 2,
"company_id": 1,
"company": {
"name": "Company",
"created_at": "2024-07-11T18:58:41.460483Z"
},
"forum_posts": [
{
"title": "Money Talks",
"user_id": 2,
"id": 41,
},
{
"title": "Green Goblin vs Spiderman",
"user_id": 2,
"id": 42,
},
# values above are returned below too (duplicates)
{
"title": "Money Talks",
"user_id": 2,
"id": 41,
},
{
"title": "Green Goblin vs Spiderman",
"user_id": 2,
"id": 42,
},
]
},
...
# other users
]
```
**Additional context**
fastcrud = "^0.13.1"
SQLAlchemy = "^2.0.25"
fastapi = "^0.109.1" | 1medium
|
Title: model.parameters() return [Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)] when using zero3
Body: ### System Info
transformers 4.44.2
accelerate 1.2.1
deepspeed 0.12.2
torch 2.2.2
torchaudio 2.2.2
torchvision 0.17.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Try to print **model.parameters()** in the transformers Trainer, but I get **Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)** for all layers.
In fact, I am trying to return the correct **model.parameters()** in DeepSpeed Zero-3 mode and use the EMA model. Could you suggest any ways to solve the above issue, or any other methods to use the EMA model under Zero-3?
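For context, my current workaround attempt is to gather the partitioned parameters explicitly (a sketch, assuming `deepspeed.zero.GatheredParameters` is the right tool here; outside such a context each rank only holds its own partition, which would explain the empty tensors):
```python
import deepspeed

# model is the transformers model inside the Trainer
with deepspeed.zero.GatheredParameters(list(model.parameters()), modifier_rank=None):
    for p in model.parameters():
        print(p.shape)  # full, gathered shapes should be visible inside this context
```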
### Expected behavior
expect to see the gathered parameters | 2hard
|
Title: [BUG]
Body: Editing the source of the "Text" plugin in the CMS loses some of the data and changes it to something else.
## Steps to reproduce
1. Create a cms page
2. Click on the add plugin button and add `Text` plugin.
3. Click on the source button and put the source below
```html
<ul style="color: white;">
<li>
<h4>
<a href="https://www.google.com" target="_blank">Google</a>
<a class="padding-left-5px" href="https://mail.google.com/" title="Gmail"> <i class="fa fa-envelope" aria-hidden="true"></i></a>
</h4>
</li>
</ul>
```
## Expected behaviour

## Actual behaviour

This works fine, as expected.
Now if we do the above steps and
1. Click on the `Text` plugin again and try to edit it. It removes the font-awesome code which was written earlier.
```html
<ul style="color: white;">
<li>
<h4><a href="https://www.google.com" target="_blank">Google</a> <a class="padding-left-5px" href="https://mail.google.com/" title="Gmail"> </a></h4>
</li>
</ul>
```
It looks like this after the edit.
Now, since the font-awesome icon is removed, if I save this again it becomes something like the below:

So my question is: why is the font-awesome code getting removed? | 1medium
|
Title: [BUG] Time series tabular model always uses fallback method (SeasonalNaive) for time series of length 1
Body: **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
The tabular time series models identify time series of length 1 as too short for inference, even when differencing is set to 0. These time series are thus predicted using the fallback method, SeasonalNaive. Specifically, [this line of code](https://github.com/autogluon/autogluon/blob/ea2e8ff7082454565fbae31ebd5653851d5b4601/timeseries/src/autogluon/timeseries/models/autogluon_tabular/mlforecast.py#L368) is causing the issue.
**Expected behavior**
When differencing isn't applied, I'd expect the tabular time series models to produce predictions for time series of length 1, rather than using a fallback method.
**To Reproduce**
The following simplified example reproduces the issue. Two time series (item_ids ['4123__23', '7510__21']) are of length 1 and are predicted using the fallback method (SeasonalNaive), although sufficient data is available for a tabular prediction.
```python
import pandas
from autogluon.timeseries import TimeSeriesPredictor
from pandas import Timestamp
data = [
{'item_id': '2527__18', 'timestamp': Timestamp('2008-01-01 00:00:00'), 'quantity_sold': 18.0},
{'item_id': '2527__18', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 682.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2006-01-01 00:00:00'), 'quantity_sold': 6.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2007-01-01 00:00:00'), 'quantity_sold': 18.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2008-01-01 00:00:00'), 'quantity_sold': 22.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 74.0},
{'item_id': '4123__23', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 138.0},
{'item_id': '695__24', 'timestamp': Timestamp('2001-01-01 00:00:00'), 'quantity_sold': 4.0},
{'item_id': '695__24', 'timestamp': Timestamp('2002-01-01 00:00:00'), 'quantity_sold': 92.0},
{'item_id': '695__24', 'timestamp': Timestamp('2003-01-01 00:00:00'), 'quantity_sold': 40.0},
{'item_id': '695__24', 'timestamp': Timestamp('2004-01-01 00:00:00'), 'quantity_sold': 116.0},
{'item_id': '695__24', 'timestamp': Timestamp('2005-01-01 00:00:00'), 'quantity_sold': 48.0},
{'item_id': '695__24', 'timestamp': Timestamp('2006-01-01 00:00:00'), 'quantity_sold': 132.0},
{'item_id': '695__24', 'timestamp': Timestamp('2007-01-01 00:00:00'), 'quantity_sold': 6.0},
{'item_id': '695__24', 'timestamp': Timestamp('2008-01-01 00:00:00'), 'quantity_sold': 26.0},
{'item_id': '695__24', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 6.0},
{'item_id': '7510__21', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 56.0}
]
df = pandas.DataFrame.from_dict(data)
predictor = TimeSeriesPredictor(
target='quantity_sold',
prediction_length=2,
freq='YS'
)
predictor.fit(
train_data=df,
hyperparameters={
'DirectTabular': {},
}
)
predictor.predict(df)
```
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->

**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-08-07
time : 09:35:12.422961
python : 3.11.9.final.0
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Fri Jul 5 17:53:24 PDT 2024; root:xnu-10063.141.1~2/RELEASE_ARM64_T6020
machine : arm64
processor : arm
num_cores : 12
cpu_ram_mb : 32768.0
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 871868
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.34.154
catboost : None
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.16
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.4
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.9.1
mlforecast : 0.10.0
networkx : 3.3
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.17.1
optimum-intel : None
orjson : 3.10.6
pandas : 2.2.2
pdf2image : 1.17.0
Pillow : 10.4.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 72.1.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.66.5
transformers : 4.40.2
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
</details>
| 1medium
|
Title: [Feature] Use the microphone in the Windows client
Body: ### Product Version
v4.1
### Version Type
- [ ] Community Edition
- [ ] Enterprise Edition
- [x] Enterprise Trial Edition
### Installation Method
- [ ] Online installation (one-command install)
- [x] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### ⭐️ Feature Description
We have a remote-work requirement: we use Telegram or WhatsApp on the company computer for calls. Can the microphone audio be transmitted to the company computer and relayed from there to the other party (Windows remote desktop client)?
### Proposed Solution
sorry
### Additional Information
_No response_ | 1medium
|
Title: I would like the option to disable the DNS Cache and do name resolution on every request
Body: ## Checklist
- [ ] I've searched for similar feature requests.
---
## Enhancement request
…
---
## Problem it solves
E.g. “I'm always frustrated when […]”, “I’m trying to do […] so that […]”.
---
## Additional information, screenshots, or code examples
…
| 1medium
|
Title: Outdated Docker Hub image
Body:
Hello Charles,
I noticed that the official Docker image for sqlite-web on Docker Hub is from 2020. Because of this, I encountered issues with missing Insert and Update buttons, despite these features being present in the latest version on GitHub. After building a new Docker image from the most recent GitHub code, everything worked perfectly. Could you please update the official Docker Hub image so that users can easily access the newest version of sqlite-web?
Thank you and best regards,
J.Scheuner | 1medium
|
Title: how to get the environment mesh
Body: It is a great project. I use the NeRF garden dataset for reconstruction and I only get the table in the mesh, but I want to get the whole environment in the mesh. How can I get that? Is there anything I can do to achieve my target? | 1medium
|
Title: Messagebus send isn't working
Body: ## Software:
* Picroft
* 19.2.13
## Problem
This used to work before, but I've seen some work with refactoring the messagebus code, and that probably broke it.
This is basically my code:
```python
from mycroft.messagebus.send import send
send("skill.communications.device.new", {"message": "10.0.1.7"})
```
Which sends the message `skill.communications.device.new` from the code in my (communications) skill which handles new devices, to the Mycroft skill, to be registered.
Now it is failing with the error:
```python-traceback
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1423, in run
handler(self.zc)
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1363, in <lambda>
zeroconf=zeroconf, service_type=self.type, name=name, state_change=state_change
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1250, in fire
h(**kwargs)
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1335, in on_change
listener.add_service(*args)
File "/opt/mycroft/skills/communications-skill.linuss1/shippingHandling.py", line 107, in add_service
send_communication_to_messagebus("device", ip)
File "/opt/mycroft/skills/communications-skill.linuss1/shippingHandling.py", line 26, in send_communication_to_messagebus
send("skill.communications.{}.new".format(msg_type), {"message": "{}".format(str(msg))})
File "/home/pi/mycroft-core/mycroft/messagebus/send.py", line 70, in send
url = MessageBusClient.build_url(
AttributeError: type object 'MessageBusClient' has no attribute 'build_url'
``` | 1medium
|
Title: OpenLineage can silently lose Snowflake query_ids and can't support multiple query_ids
Body: ### Apache Airflow Provider(s)
openlineage
### Versions of Apache Airflow Providers
latest
### Apache Airflow version
2.X
### Operating System
macos
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using `SqlExecuteQueryOperator` with Snowflake, and running a query with multiple statements in it, OpenLineage will only include first `query_id` in `ExternalQueryRunFacet`.
This is problematic, as users don't have full control over how the statements are executed (when a query consists of multiple statements and `split_statements=False`, the operator throws an error: `snowflake.connector.errors.ProgrammingError: 000008 (0A000): 01bad84f-0000-4392-0000-3d95000110ce: Actual statement count 3 did not match the desired statement count 1.`). The only solution for users to retrieve all query_ids in OL events is to set `split_statements=False` and make sure each task runs a single statement, which is rarely the case.
In BQ, similar problem is solved by ["parent_query_job"](https://github.com/apache/airflow/blob/ab3a1869c57def3ee74a925709cece4c7e07b891/providers/google/src/airflow/providers/google/cloud/openlineage/mixins.py#L109) executing each statement within a "child_query_job" with a link to the parent job, so that it's easy to access all ids later on. I couldn't find a similar mechanism in Snowflake.
### What you think should happen instead
Ideally, from within a single task (SqlExecuteQueryOperator) we would emit a separate OL event for each statement run, containing a parentRunFacet pointing to the Airflow task. This may, however, take some time to implement properly and may (or may not) need some adjustments from the consumers.
As a partial solution, we could extend `ExternalQueryRunFacet` with a new property that accepts multiple `externalQueryIds`. This requires some discussion in the OL community as to how it fits into the spec.
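For illustration only, the extended facet could look something like this (a sketch; `externalQueryIds` does not exist in the spec today, and the ids are placeholders):
```python
# Hypothetical shape of an extended ExternalQueryRunFacet (sketch only).
external_query_facet = {
    "externalQuery": {
        "externalQueryId": "query-id-1",  # existing field, currently only the first statement
        "externalQueryIds": ["query-id-1", "query-id-2"],  # proposed field, one id per statement
        "source": "snowflake://my-account",
    }
}
```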
Another small note: right now we are already sending the entire SQL query (with all the statements) in `SQLJobFacet`, regardless of whether they execute as separate "queries" or not, so it would probably need adjustment as well.
### How to reproduce
Run a sample query like:
```sql
USE WAREHOUSE COMPUTE_WH;
CREATE OR REPLACE TABLE test.public.result AS SELECT * FROM snowflake_sample_data.tpch_sf1.customer;
```
You can see in Snowflake that this resulted in two queries being run, with two separate query_ids, and only the first one is included in the OpenLineage event.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 1medium
|
Title: Validation error on serializer used only for responses
Body: Note: This may be what https://github.com/axnsan12/drf-yasg/issues/51 was trying to get at...
Trying to do this, but getting a validation error ("... 'ssv': "Unresolvable JSON pointer: 'definitions/Detail'")
```python
class DetailSerializer(serializers.Serializer):
    detail = serializers.CharField()
```
```python
class ManufacturerViewSet(viewsets.ModelViewSet):
    serializer_class = ManufacturerSerializer
    model = Manufacturer
    queryset = Manufacturer.objects.all()

    @swagger_auto_schema(responses={404: openapi.Response(
        "Not found or Not accessible", DetailSerializer,
        examples={
            'Not found': DetailSerializer({'detail': 'Not found'}).data,
            'Not accessible': DetailSerializer({'detail': 'Not accessible'}).data,
        },
    )})
    def retrieve(self, request, *args, **kwargs):
        return super().retrieve(request, *args, **kwargs)
```
However, if I add the serializer to recognized model, it does work, e.g.,
```python
class ManufacturerSerializer(serializers.ModelSerializer):
    status = DetailSerializer(many=False)

    class Meta:
        model = Manufacturer
        fields = '__all__'
```
Full text of validation error...
```
{'flex': "'paths':\n"
" - '/manufacturers/{id}/':\n"
" - 'get':\n"
" - 'responses':\n"
" - '404':\n"
" - 'referenceObject':\n"
" - 'additionalProperties':\n"
' - "When `additionalProperties` is False, '
'no unspecified properties are allowed. The following unspecified '
"properties were found:\\n\\t`{'headers', 'description', 'examples', "
'\'schema\'}`"\n'
" - 'required':\n"
" - '$ref':\n"
" - 'This value is required'\n"
" - 'responseObject':\n"
" - 'schema':\n"
" - '$ref':\n"
" - 'The $ref `#/definitions/Detail` "
"was not found in the schema'",
'ssv': "Unresolvable JSON pointer: 'definitions/Detail'"}
``` | 1medium
|
Title: Accessing unselected columns should raise an error rather that return None
Body: Hello!
When one selects columns with `Table.select()`, the returned object will have only part of its attributes. Currently, gino (0.4.1) returns `None` when accessing attributes that weren't selected. However, accessing unselected attributes usually means that there is a bug somewhere, so it would be better to throw an appropriate error. | 1medium
|
Title: Logodds: difference between contributions plot and prediction box
Body: From the code

I built an explainer dashboard, which includes the following outputs:
<img width="497" alt="ExplainerDashboard_PredictionBox" src="https://github.com/oegedijk/explainerdashboard/assets/145360667/60cf5af5-0c9d-4f1f-acbc-54d9d5bbe82a">
<img width="491" alt="ExplainerDashboard_ContributionsPlot" src="https://github.com/oegedijk/explainerdashboard/assets/145360667/310decbc-dea6-4869-9a1b-bb99661f8e3a">
The underlying problem is multiclass classification, with classes 0, 1 and 2. The model used was LGBM.
In the previous images, I chose class 2 as the positive class. My question is: why is the log-odds predicted in the Contributions Plot different from the log-odds of class 2 in the Prediction box, and how is the first log-odds calculated?
| 1medium
|
Title: Visualization for model interpretation
Body: I took a look at AllenNLP Interpret https://arxiv.org/pdf/1909.09251.pdf, which implements saliency maps for important tokens, and adversarial attacks with input reduction or word hotflip. These methods seem to be quite useful in helping users understand what the model learns and when it fails.
|
Title: `word2vec.doesnt_match` numpy vstack deprecation warning
Body: #### Problem description
I followed [this instruction](https://radimrehurek.com/gensim/scripts/glove2word2vec.html) to load a GloVe model. When I run `model.doesnt_match("breakfast cereal dinner lunch".split())` from the [tutorial](https://rare-technologies.com/word2vec-tutorial/), it produces a FutureWarning in the `vstack` function. It seems that [I am not the first person to encounter this error](https://stackoverflow.com/questions/56593904/word2vec-doesnt-match-function-throws-numpy-warning). It might also be similar to [Issue 2432](https://github.com/RaRe-Technologies/gensim/issues/2432). The error reads:
> C:\Path_to_gensim\keyedvectors.py:877: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.
> vectors = vstack(self.word_vec(word, use_norm=True) for word in used_words).astype(REAL)
#### Steps/code/corpus to reproduce
```python
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
glove_file = datapath('test_glove.txt')
tmp_file = get_tmpfile("test_word2vec.txt")
_ = glove2word2vec(glove_file, tmp_file)
model = KeyedVectors.load_word2vec_format(tmp_file)
model.doesnt_match("breakfast cereal dinner lunch".split())
```
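For what it's worth, the warning seems to come from passing a generator to `numpy.vstack`. Based on the line quoted in the warning, wrapping it in a list comprehension (a sketch of the one-line change in `gensim/models/keyedvectors.py`) should silence it:
```python
# keyedvectors.py, in doesnt_match(): pass a list instead of a generator to vstack
vectors = vstack([self.word_vec(word, use_norm=True) for word in used_words]).astype(REAL)
```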
#### Versions
```python
Windows-10-10.0.17763-SP0
python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)]
Bits 64
NumPy 1.19.0
SciPy 1.5.2
gensim 3.8.3
FAST_VERSION 0
``` | 0easy
|
Title: Having problems using with fairseq
Body: ## ❓Question
The library [fairseq](https://github.com/facebookresearch/fairseq/) has built-in support for aim, but I am struggling to get it working. I'm not sure if it's something I'm doing wrong or if the fairseq support is out of date, but the fairseq repo is fairly inactive so I thought I would ask here.
I am working locally and run `aim server`, and see: "Server is mounted on 0.0.0.0:53800".
I then run my fairseq experiment, adding to my config.yaml file:
```
common:
aim_repo: aim://0.0.0.0:53800
```
then run my experiment. It seems to be working initially - aim detects the experiment and the log starts with:
```
[2023-11-15 14:31:07,453][fairseq.logging.progress_bar][INFO] - Storing logs at Aim repo: aim://0.0.0.0:53800
[2023-11-15 14:31:07,480][aim.sdk.reporter][INFO] - creating RunStatusReporter for f6f19ecf0e2147b19e24d52f
[2023-11-15 14:31:07,482][aim.sdk.reporter][INFO] - starting from: {}
[2023-11-15 14:31:07,482][aim.sdk.reporter][INFO] - starting writer thread for <aim.sdk.reporter.RunStatusReporter object at 0x7f57117363e0>
[2023-11-15 14:31:08,471][fairseq.trainer][INFO] - begin training epoch 1
[2023-11-15 14:31:08,471][fairseq_cli.train][INFO] - Start iterating over samples
[2023-11-15 14:31:10,821][fairseq.trainer][INFO] - NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 64.0
[2023-11-15 14:31:12,261][fairseq.trainer][INFO] - NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 32.0
[2023-11-15 14:31:12,261][fairseq_cli.train][INFO] - begin validation on "valid" subset
[2023-11-15 14:31:12,266][fairseq.logging.progress_bar][INFO] - Storing logs at Aim repo: aim://0.0.0.0:53800
[2023-11-15 14:31:12,283][fairseq.logging.progress_bar][INFO] - Appending to run: f6f19ecf0e2147b19e24d52f
```
but then I get an error:
```
...
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 64, in progress_bar
bar = AimProgressBarWrapper(
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 365, in __init__
self.run = get_aim_run(aim_repo, aim_run_hash)
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 333, in get_aim_run
return Run(run_hash=run_hash, repo=repo)
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 70, in wrapper
_SafeModeConfig.exception_callback(e, func)
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 47, in reraise_exception
raise e
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 68, in wrapper
return func(*args, **kwargs)
File "/lib/python3.10/site-packages/aim/sdk/run.py", line 828, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, experiment=experiment, force_resume=force_resume)
File "/lib/python3.10/site-packages/aim/sdk/run.py", line 276, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, force_resume=force_resume)
File "/lib/python3.10/site-packages/aim/sdk/base_run.py", line 50, in __init__
self._lock.lock(force=force_resume)
File "/lib/python3.10/site-packages/aim/storage/lock_proxy.py", line 38, in lock
return self._rpc_client.run_instruction(self._hash, self._handler, 'lock', (force,))
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 260, in run_instruction
return self._run_read_instructions(queue_id, resource, method, args)
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 285, in _run_read_instructions
raise_exception(status_msg.header.exception)
File lib/python3.10/site-packages/aim/ext/transport/message_utils.py", line 76, in raise_exception
raise exception(*args) if args else exception()
TypeError: Timeout.__init__() missing 1 required positional argument: 'lock_file'
Exception in thread Thread-13 (worker):
Traceback (most recent call last):
File "lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/lib/python3.10/site-packages/aim/ext/transport/rpc_queue.py", line 55, in worker
if self._try_exec_task(task_f, *args):
File "/lib/python3.10/site-packages/aim/ext/transport/rpc_queue.py", line 81, in _try_exec_task
task_f(*args)
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 301, in _run_write_instructions
raise_exception(response.exception)
File "/python3.10/site-packages/aim/ext/transport/message_utils.py", line 76, in raise_exception
raise exception(*args) if args else exception()
aim.ext.transport.message_utils.UnauthorizedRequestError: 3310c526-aa51-47ef-ba87-fbf75f80f610
```
Does anyone have any idea what might be causing this/if there's something wrong with the approach I'm taking? I've tried with a variety of different aim versions (going back to the versions when fairseq was more actively being developed) and I still get errors.
| 1medium
|
Title: order: combining different xarray variables followed by a reduction orders very inefficiently
Body: Lets look at the following example:
```
import xarray as xr
import dask.array as da
size = 50
ds = xr.Dataset(
dict(
u=(
["time", "j", "i"],
da.random.random((size, 20, 20), chunks=(10, -1, -1)),
),
v=(
["time", "j", "i"],
da.random.random((size, 20, 20), chunks=(10, -1, -1)),
),
w=(
["time", "j", "i"],
da.random.random((size, 20, 20), chunks=(10, -1, -1)),
),
)
)
ds["uv"] = ds.u * ds.v
ds["vw"] = ds.v * ds.w
ds = ds.fillna(199)
```
We are combining u and v and then v and w. Not having a reduction after that step generally works fine:
<img width="1321" alt="Screenshot 2025-01-07 at 16 21 32" src="https://github.com/user-attachments/assets/d72431d8-44bf-4bb1-b335-99480128c453" />
The individual chunks in one array are independent of all other chunks, so we can process chunk by chunk for all data arrays.
Adding a reduction after these cross dependencies makes things go sideways:
Add:
```
ds = ds.count()
```
The ordering algorithm eagerly processes a complete tree reduction for the first variable ``uv`` before touching anything from ``vw``. This means that the data array ``v`` is loaded completely into memory by the time the first tree reduction finishes, before we tackle ``vw``, and thus we can't release any chunk of ``v``.
<img width="1511" alt="Screenshot 2025-01-07 at 16 21 12" src="https://github.com/user-attachments/assets/977a78f1-a11c-4ecd-b468-392a2c7f9c98" />
I am not sure what a good solution here would look like. Ideally, the ordering algorithm would know that the ``v`` chunks are a lot larger than the reduced chunks of the ``uv`` combination and thus prefer processing ``v`` before starting with a new chunk of ``uv``.
Alternatively, we could load ``v`` twice, i.e. drop the v chunks after they are added to ``uv``.
This is the pattern that kills https://github.com/coiled/benchmarks/blob/main/tests/geospatial/workloads/atmospheric_circulation.py
task graph:
```
from dask.base import collections_to_dsk
dsk = collections_to_dsk([ds.uv.data, ds.vw.data], optimize_graph=True)
```
cc @fjetter | 2hard
|
Title: Exporting out of memory dataframe to parquet error
Body: I am trying to export an out of memory dataframe to parquet as in the following code but i keep getting the following error.
Code:
```python
import numpy as np
from matplotlib import pyplot as plt
import vaex as vd

def custom_shift(df, column):
    # Extract the values of the column
    values = vd.from_arrays(column=df[f'{column}'].values)
    # Create a new shifted column with None as the first value
    d = vd.from_arrays(column=[None])
    shifted_values = vd.concat([d, values[:-1]])
    shifted_values = vd.from_arrays(ClosePrice_shifted=shifted_values).ClosePrice_shifted.values
    # Add the shifted values as a new column
    df.add_column(f'{column}_shifted', shifted_values)
    return df

def get_threshold(daily_returns, lookback=40):
    ewm_std = np.abs(daily_returns.rolling(window=lookback).std())
    threshold = np.exp(ewm_std)
    return threshold.mean() * 0.1

ddf = custom_shift(ddf, 'ClosePrice')
ddf = ddf.dropna()
ddf['daily_returns'] = ddf['ClosePrice'] / ddf['ClosePrice_shifted'] - 1
ddf['threshold'] = (ddf['daily_returns'].apply(get_threshold))
ddf.export_parquet('dollar_bars_threshold.parquet', engine='pyarrow')
```
Error:
```
Traceback (most recent call last):
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 113, in evaluate
result = self[expression]
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 198, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: '((ClosePrice / ClosePrice_shifted) - 1)'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2273, in data_type
data = self.evaluate(expression, 0, 1, filtered=False, array_type=array_type, parallel=False)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 3095, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size, progress=progress)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 6562, in _evaluate_implementation
value = block_scope.evaluate(expression)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 113, in evaluate
result = self[expression]
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 188, in __getitem__
values = self.evaluate(expression)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 119, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\numpy_dispatch.py", line 74, in operator
result_data = a.add_missing(result_data)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\numpy_dispatch.py", line 27, in add_missing
ar = vaex.array_types.to_arrow(ar)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\array_types.py", line 184, in to_arrow
return pa.array(x)
File "pyarrow\array.pxi", line 340, in pyarrow.lib.array
File "pyarrow\array.pxi", line 86, in pyarrow.lib._ndarray_to_array
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: only handle 1-dimensional arrays
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Intijir\PycharmProjects\quantStrategy\Data\Labeling\cumulative_sum.py", line 89, in <module>
ddf.export_parquet('dollar_bars_threshold.parquet', engine='pyarrow')
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 6823, in export_parquet
schema = self.schema_arrow(reduce_large=True)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2335, in schema_arrow
return pa.schema({name: reduce(dtype.arrow) for name, dtype in self.schema().items()})
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2323, in schema
return {column_name:self.data_type(column_name) for column_name in self.get_column_names()}
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2323, in <dictcomp>
return {column_name:self.data_type(column_name) for column_name in self.get_column_names()}
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2275, in data_type
data = self.evaluate(expression, 0, 1, filtered=True, array_type=array_type, parallel=False)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 3095, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size, progress=progress)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 6402, in _evaluate_implementation
max_stop = (len(self) if (self.filtered and filtered) else self.length_unfiltered())
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 4326, in __len__
self._cached_filtered_length = int(self.count())
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 967, in count
return self._compute_agg('count', expression, binby, limits, shape, selection, delay, edges, progress, array_type=array_type)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 941, in _compute_agg
return self._delay(delay, progressbar.exit_on(var))
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 1780, in _delay
self.execute()
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 421, in execute
self.executor.execute()
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\execution.py", line 308, in execute
for _ in self.execute_generator():
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\execution.py", line 432, in execute_generator
yield from self.thread_pool.map(self.process_part, dataset.chunk_iterator(run.dataset_deps, chunk_size),
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\multithreading.py", line 100, in map
iterator = super(ThreadPoolIndex, self).map(wrapped, cancellable_iter())
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 598, in map
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 598, in <listcomp>
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\multithreading.py", line 86, in cancellable_iter
for value in chunk_iterator:
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataset.py", line 1257, in chunk_iterator
for (i1, i2, ichunks), (j1, j2, jchunks) in zip(
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\dataset.py", line 182, in chunk_iterator
chunks = chunks_future.result()
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 446, in result
return self.__get_result()
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\dataset.py", line 114, in reader
table = fragment.to_table(columns=list(columns_physical), use_threads=False)
File "pyarrow\_dataset.pyx", line 1613, in pyarrow._dataset.Fragment.to_table
File "pyarrow\_dataset.pyx", line 3713, in pyarrow._dataset.Scanner.to_table
File "pyarrow\error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowMemoryError: malloc of size 8388608 failed
```
Thanks! | 2hard
|
Title: Looking for performance metric for cyclegan
Body: Hi, we often apply cycleGAN for unpaired data. So, some of the performance metric will be not applied
- SSIM
- PSNR
For my dataset, I would like to use CycleGAN to map an image from a winter session to a spring session, and there is no paired data for each image. Could you tell me how I can evaluate the CycleGAN performance (i.e., how to know the output is close to a realistic image)?
| 1medium
|
Title: Question about the backward of the quantize function
Body: Usage Questions Only:
I have a question about quantize function in the dorefa paper.

I am confused about the process of finding the derivative ∂r_o/∂r_i.
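My current understanding (just a sketch in PyTorch, assuming the paper's straight-through estimator, i.e. the backward treats ∂r_o/∂r_i as 1) is:
```python
import torch

class QuantizeK(torch.autograd.Function):
    """DoReFa-style k-bit quantization with a straight-through estimator.
    Used as: r_o = QuantizeK.apply(r_i, k)."""

    @staticmethod
    def forward(ctx, r_i, k):
        n = float(2 ** k - 1)
        return torch.round(r_i * n) / n  # quantize_k(r_i) from the paper

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: gradient passes unchanged; no gradient w.r.t. k.
        return grad_output, None
```
Is this how the derivative is handled?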
| 3misc
|
Title: If I want to use /u as a placeholder instead of /t, what do I need to do
Body: | 1medium
|
Title: Using `format` as a query parameter for certain URLs causes `requests` to miss the `?` query string separator
Body: <!-- Summary. -->
Using `format` as a query parameter for certain URLs causes `requests` to miss the `?` query string separator
## Expected Result
I expect the `?` separator to be present whenever any query parameters are provided.
<!-- What you expected. -->
## Actual Result
The `?` separator is not present.
<!-- What happened instead. -->
## Reproduction Steps
```python
>>> import requests
>>> requests.__version__
'2.28.1'
>>> requests.get("https://www.uniprot.org/uniprotkb/search", params={"format": "json"}).url
'https://rest.uniprot.org/uniprotkb/searchformat=json'
>>> requests.get("https://www.uniprot.org", params={"format": "json"}).url
'https://rest.uniprot.org/format=json'
>>> requests.get("https://www.uniprot.org/uniprotkb/search", params={"size": "500"}).url
'https://www.uniprot.org/uniprotkb/search?size=500'
>>> requests.get("https://www.google.com", params={"format": "json"}).url
'https://www.google.com/?format=json'
```
Note how the same issue happens in different subdomains of `uniprot.org` but that it doesn't happen with `google.com`
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.1.0"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.3"
},
"implementation": {
"name": "CPython",
"version": "3.9.7"
},
"platform": {
"release": "5.13.0-52-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.28.1"
},
"system_ssl": {
"version": "101010cf"
},
"urllib3": {
"version": "1.26.10"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
| 1medium
|
Title: force_unicode - py3 compatibility
Body: ```
'jet_tags' is not a valid tag library: ImportError raised loading jet.templatetags.jet_tags: cannot import name 'force_unicode'
```
http://django.readthedocs.org/en/latest/topics/python3.html
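A compatibility shim along these lines might fix the import (a sketch; per the porting guide above, `force_text` replaced `force_unicode` on Python 3):
```python
# Sketch: py2/py3-compatible import for jet/templatetags/jet_tags.py
try:
    from django.utils.encoding import force_unicode  # Python 2 / old Django
except ImportError:
    from django.utils.encoding import force_text as force_unicode  # Python 3
```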
| 1medium
|
Title: NewConnectionError
Body: When I pass in a request url for a url that does not respond I get a
```python3
request = requests.get(
    "url",
    proxies=proxies,
)
```
```
Error: SOCKSHTTPSConnectionPool(host='atxdyqbqutne3q0hoidwnjkxzxzyogtw0boudm7ztiwrdxrucedrsolw.onion', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0xffff8fbf9690>: Failed to establish a new connection: 0x01: General SOCKS server failure'))
```
But when I also kill my proxies I get the same error. This should not be the case; how can I catch the different errors and understand what I'm dealing with? Is this behaviour expected? If so, why? | 1medium
|
Title: Request to add xgboost
Body: | 1medium
|
Title: Missing adjustment-factor data for the 50 ETF fund (510050.SH)
Body: data = ts_api.fund_adj(ts_code='510050.SH', start_date='20190101', end_date='20191231')
The adjustment factors for the 50 ETF (510050.SH) are missing data for the following dates:
20190701
20190807
20190826
20191028
20191029
tushare id: 216155(glhyy11313081@163.com)
| 1medium
|
Title: [DOC] Install django-cms by hand instruction is incomplete
Body: ## Description
If you follow the instructions, this will lead to the installation of a broken version of django-cms, because
`pip install django-cms`
will install Django 4 as a dependency, which we don't support.
* [x] Yes, I want to help fix this issue and I will join #workgroup-documentation on [Slack](https://www.django-cms.org/slack) to confirm with the team that a PR is welcome.
* [ ] No, I only want to report the issue.
| 0easy
|
Title: Receive empty list of related resources after upgrade from 0.30.1 to 0.31.x
Body: Such kind of requests always returns empty list:
Request:
`/addresses/43/values`
Response:
```json
{
  "data": [],
  "links": {"self": "http://127.0.0.1:5000/address_values"},
  "meta": {"count": 0},
  "jsonapi": {"version": "1.0"}
}
```
If I try to get the same resources using the filter, it returns the expected result.
`/address_values?filter=[{"name":"address__id", "op":"has", "val":43}]`
On 0.30.1 everything works as expected. | 1medium
|
Title: After update to 3.4: unknown error: Runtime.evaluate threw exception: SyntaxError: missing ) after argument list
Body: Thanks to the author for writing such a powerful, elegant open source product, I'm happy to update to 3.4 now, but I'm having problems after updating to the new version, as the author says it's a major update that may have an impact on the code, and unfortunately I'm having a problem right now after updating to the new version.
### Basic environmental information
python 3.9.2
debian 11 bullseye
google-chrome 109.*stable/110.*beta
selenium: The latest
docker: No docker
### Error message
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Runtime.evaluate threw exception: SyntaxError: missing ) after argument list
(Session info: headless chrome=109.0.5414.119)
Stacktrace:
#0 0x55e3dc84d303 <unknown>
#1 0x55e3dc621d37 <unknown>
#2 0x55e3dc62d8df <unknown>
#3 0x55e3dc624df1 <unknown>
#4 0x55e3dc624931 <unknown>
#5 0x55e3dc625351 <unknown>
#6 0x55e3dc6256dc <unknown>
#7 0x55e3dc65e188 <unknown>
#8 0x55e3dc65e6c1 <unknown>
#9 0x55e3dc698b34 <unknown>
#10 0x55e3dc67e9ad <unknown>
#11 0x55e3dc69688c <unknown>
#12 0x55e3dc67e753 <unknown>
#13 0x55e3dc651a14 <unknown>
#14 0x55e3dc652b7e <unknown>
#15 0x55e3dc89c32e <unknown>
#16 0x55e3dc89fc0e <unknown>
#17 0x55e3dc882610 <unknown>
#18 0x55e3dc8a0c23 <unknown>
#19 0x55e3dc874545 <unknown>
#20 0x55e3dc8c16a8 <unknown>
#21 0x55e3dc8c1836 <unknown>
#22 0x55e3dc8dcd13 <unknown>
#23 0x7fa85aafeea7 start_thread
```
**Here's another mistake that sometimes occurs:**
```
FileNotFoundError: [Errno 2] No such file or directory: '/root/.local/share/undetected_chromedriver/undetected_chromedriver'
```
**Sometimes it also appears:**
* Cannot connect to 127.0.0*...
**_I confirm that this issue is related to multithreading/multiprocessing, but I don't know the exact reason because I'm not a professional. If I don't use multithreading/multiprocessing with the latest UC 3.4, no error appears.
After many tests, the error messages mentioned above only appear when the version is upgraded to 3.4; for example, they do not appear on 3.2.1._**
Thanks! | 2hard
|
Title: Documentation: show examples of good docs
Body: # Tasks
* [x] Review existing criterias for documentation
* [x] Suggest own criterias
* [x] Show good examples and explain why | 0easy
|
Title: SARIMAX.predict() exog
Body: On SARIMAX.predict(), when you have an exog but the exog is only known today and in the past, how do you predict the endog's next 12 months off just the exog and data known through today? Is that what the SARIMAX.predict() is doing as a default? Example, my exog is SP500 price. I do not know what it will be tomorrow, but I want to use the values through today as an exog to predict the next 12 months forward of my endog.
Or is SARIMAX.predict() wanting you to know what the future exog values will be for the 12 months into the future of the endog you are predicting? Example, my exog is days of the week. I know today what day of the week things will happen on so I can fill in the future exog for the next 12 months to predict the endog for the same forward period.
And if the exog value is only known for today, as in the SP500 example, will SARIMAX.forecast(steps=forecast_period, exog=exog[-forecast_period:]) result in an accurate modeled prediction of the 12 months ahead? Is it by default calculating a single-step, 12-month-ahead forecast? Or is it trying to multistep off the exogs, which in this case wouldn't be possible without predicting those exogs one step ahead as well?
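For reference, my understanding of the API is that `forecast` expects exog values covering the forecast horizon itself; a minimal sketch (variable names are mine) of what I'm asking about:
```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

res = SARIMAX(endog, exog=exog, order=(1, 0, 0)).fit()
# forecast() wants exog for the *future* 12 periods, shape (12, k_exog).
# Passing exog[-12:] feeds the last known values as if they were future
# values, i.e. a proxy rather than the true future exogs.
forecast = res.forecast(steps=12, exog=future_exog)
```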
Any help would be greatly appreciated! | 1medium
|
Title: Test dependencies not included in project dependencies?
Body: Is there a reason why [`requirements.txt`](https://github.com/frol/flask-restplus-server-example/blob/27dfdb8791f087b0c35b9e929a98e10f9d24ec21/requirements.txt) does not include `-r tests/requirements.txt`? | 3misc
|
Title: Input size does not match
Body: ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
The input tensor size does not match the model input size: got [1,3,640,640], expecting [1,3,384,384]. Why am I getting this error and how do I resize the tensor in my code? E.g., would something like the sketch below work?
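A sketch of what I mean (assuming `img` is the input tensor of shape [1, 3, 640, 640]):
```python
import torch.nn.functional as F

# img: torch tensor of shape [1, 3, 640, 640]
img_384 = F.interpolate(img, size=(384, 384), mode="bilinear", align_corners=False)
```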
### Additional
_No response_ | 0easy
|
Title: manual update seems don't work
Body: ## ❓ Questions and Help
I'd like to double-check whether the manual daily update no longer works. If so, when will it go back to normal? Also, warnings frequently pop up when running "collector.py".
logs of warning:
"FutureWarning: A value is trying to be set on a copy of a DataFrame or Series through chained assignment using an inplace method.
The behavior will change in pandas 3.0. This inplace method will never work because the intermediate object on which we are setting values always behaves as a copy.
For example, when doing 'df[col].method(value, inplace=True)', try using 'df.method({col: value}, inplace=True)' or df[col] = df[col].method(value) instead, to perform the operation inplace on the original object.
"
error logs:"r.utils:wrapper:517 - _get_simple: 1 :get data error: 000003.sz--2020-09-24 00:00:00+08:00--2025-02-12 00:00:00+08:00The stock may be delisted, please check
2025-02-12 08:16:54.421 | WARNING | data_collector.utils:wrapper:517 - _get_simple: 2 :get data error: 000003.sz--2020-09-24 00:00:00+08:00--2025-02-12 00:00:00+08:00The stock may be delisted, please check
2025-02-12 08:16:59.701 | WARNING | data_collector.utils:wrapper:517 - _get_simple: 3 :get data error: 000003.sz--2020-09-24 00:00:00+08:00--2025-02-12 00:00:00+08:00The stock may be delisted, please check
2025-02-12 08:17:05.112 | WARNING | data_collector.utils:wrapper:517 - _get_simple: 4 :get data error: 000003.sz--2020-09-24 00:00:00+08:00--2025-02-12 00:00:00+08:00The stock may be delisted, please check
2025-02-12 08:17:10.313 | WARNING | data_collector.utils:wrapper:517 - _get_simple: 5 :get data error: 000003.sz--2020-09-24 00:00:00+08:00--2025-02-12 00:00:00+08:00The stock may be delisted, please check
2025-02-12 08:17:10.314 | WARNING | data_collector.base:save_instrument:163 - 000003.sz is empty"
We sincerely suggest you to carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue. | 1medium
|
Title: Tutorial: Creating DB within WSL on Windows
Body: ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
Create database as described in the tutorial using "DB browser" on an WSL drive.
```
### Description
If you follow the tutorial on WSL2 using the suggested sqlite DB management tool "DB Browser", you get strange errors when creating the database on a WSL2 volume (it looks like this is because WSL2 does not support file locking).
You can work around this issue by creating the DB on the `C:\` drive and linking to the file from within the WSL volume.
Either
- add a warning to the docs
- or probably the best option, close this issue and leave it for people searching in the future.
### Operating System
Windows
### Operating System Details
WSL2
### SQLModel Version
0.0.8
### Python Version
Python 3.10.6
### Additional Context
_No response_ | 0easy
|
Title: `test_sftp.SFTPStorageTest` `test_accessed_time` & `test_modified_time` assume EST timezone
Body: Hi,
It seems as though these tests assume a timezone of UTC-5, resulting in e.g.:
```
======================================================================
FAIL: test_accessed_time (tests.test_sftp.SFTPStorageTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/nix/store/6v602p5l3c05iiq7jx8y0rjwiv2n8hhj-python3-3.9.13/lib/python3.9/unittest/mock.py", line 1336, in patched
return func(*newargs, **newkeywargs)
File "/build/django-storages-1.12.3/tests/test_sftp.py", line 121, in test_accessed_time
self.assertEqual(self.storage.accessed_time('foo'),
AssertionError: datetime.datetime(2016, 7, 28, 2, 58, 4) != datetime.datetime(2016, 7, 27, 21, 58, 4)
======================================================================
FAIL: test_modified_time (tests.test_sftp.SFTPStorageTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/nix/store/6v602p5l3c05iiq7jx8y0rjwiv2n8hhj-python3-3.9.13/lib/python3.9/unittest/mock.py", line 1336, in patched
return func(*newargs, **newkeywargs)
File "/build/django-storages-1.12.3/tests/test_sftp.py", line 128, in test_modified_time
self.assertEqual(self.storage.modified_time('foo'),
AssertionError: datetime.datetime(2016, 7, 28, 2, 58, 4) != datetime.datetime(2016, 7, 27, 21, 58, 4)
----------------------------------------------------------------------
```
when it isn't. | 1medium
|
Title: correctness tests
Body: I'm guessing:
PyGraphistry integration:
- github ci: default off
- github ci: explicit opt-out
- local dev: as part of regular gpu-enabled tests
cuCat repo: ??? | 1medium
|
Title: Gensim sort_by_descending_frequency changes most_similar results
Body: <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
It seems that when retrieving the most similar word vectors, sorting by word frequency will change the results in `Gensim`.
#### Steps/code/corpus to reproduce
Before sorting:
```python
from gensim.models import FastText
from gensim.test.utils import common_texts  # some example sentences

print(len(common_texts))
model = FastText(vector_size=4, window=3, min_count=1)  # instantiate
model.build_vocab(corpus_iterable=common_texts)
model.train(corpus_iterable=common_texts, total_examples=len(common_texts), epochs=1)
model.wv.most_similar(positive=["human"])
```
> [('interface', 0.7432922720909119),
> ('minors', 0.6719315052032471),
> ('time', 0.3513716757297516),
> ('computer', 0.05815044790506363),
> ('response', -0.11714297533035278),
> ('graph', -0.15643596649169922),
> ('eps', -0.2679084539413452),
> ('survey', -0.34035828709602356),
> ('trees', -0.63677978515625),
> ('user', -0.6500451564788818)]
However, if I sort the vectors by descending frequency:
```python
model.wv.sort_by_descending_frequency()
model.wv.most_similar(positive=["human"])
```
> [('minors', 0.9638221263885498),
> ('time', 0.6335864067077637),
> ('interface', 0.40014874935150146),
> ('computer', 0.03224882856011391),
> ('response', -0.14850640296936035),
> ('graph', -0.2249641716480255),
> ('survey', -0.26847705245018005),
> ('user', -0.45202943682670593),
> ('eps', -0.497650682926178),
> ('trees', -0.6367797255516052)]
The most similar word ranking as well as the word similarities change. Any idea why?
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
[{'params': 'FastText(vocab=0, vector_size=4, alpha=0.025)', 'datetime': '2021-07-20T09:46:56.158863', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'created'}, {'msg': 'effective_min_count=1 retains 12 unique words (100.0%% of original 12, drops 0)', 'datetime': '2021-07-20T09:46:56.159995', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'prepare_vocab'}, {'msg': 'effective_min_count=1 leaves 29 word corpus (100.0%% of original 29, drops 0)', 'datetime': '2021-07-20T09:46:56.160040', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'prepare_vocab'}, {'msg': 'downsampling leaves estimated 3.5001157321504532 word corpus (12.1%% of prior 29)', 'datetime': '2021-07-20T09:46:56.160376', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'prepare_vocab'}, {'update': False, 'trim_rule': 'None', 'datetime': '2021-07-20T09:46:56.233809', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'build_vocab'}, {'msg': 'training model with 3 workers on 12 vocabulary and 4 features, using sg=0 hs=0 sample=0.001 negative=5 window=3', 'datetime': '2021-07-20T09:46:56.234068', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'train'}, {'msg': 'training on 29 raw words (3 effective words) took 0.0s, 1377 effective words/s', 'datetime': '2021-07-20T09:46:56.236277', 'gensim': '4.0.1', 'python': '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \n[GCC 7.3.0]', 'platform': 'Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen', 'event': 'train'}]
```
#### Versions
```text
Linux-3.10.0-1160.31.1.el7.csd3.x86_64-x86_64-with-redhat-7.9-Nitrogen
Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31)
[GCC 7.3.0]
Bits 64
NumPy 1.18.1
SciPy 1.4.1
gensim 4.0.1
FAST_VERSION 0
```
| 1medium
|
Title: pip install brokenaxes failed
Body: I see the error:
```
Downloading brokenaxes-0.3.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-m8qZrr/brokenaxes/setup.py", line 10, in <module>
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
File "/usr/lib/python2.7/codecs.py", line 878, in open
file = __builtin__.open(filename, mode, buffering)
IOError: [Errno 2] No such file or directory: '/tmp/pip-build-m8qZrr/brokenaxes/README.md'
``` | 1medium
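(For context, a sketch of a common hardening of such a `setup.py`, assuming the failure is simply `README.md` missing from the sdist — the real fix would be shipping it via `MANIFEST.in`:)
```python
# sketch: tolerate a missing README.md so `pip install` doesn't crash in egg_info
from os import path

here = path.abspath(path.dirname(__file__))
try:
    with open(path.join(here, 'README.md')) as f:
        long_description = f.read()
except IOError:
    long_description = ''
```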
|
Title: [question] export annotations of a project in subfolders
Body: Hello!
In CVAT I have a project consisting of jobs (each job is a video with box annotations and a class). When I export the project labels (images not included, only labels), the labels all get accumulated in a single folder! So I end up with one folder holding thousands of YOLO files for all frames of all videos. Right now I have to download the labels one by one for each video. Is it possible to download all labels structured by job (video) name, just like when you download them with images included? (I export in YOLO format, btw.) | 1medium
|
Title: pytest==6.1.0: processing filter warnings is done before django.setup()
Body: Lauching pytest results in the following error:
<details>
<summary><tt>pytest test_foo.py</tt> click to expand</summary>
<pre>
Traceback (most recent call last):
File "/home/presto/.virtualenvs/presto/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 187, in console_main
code = main()
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 143, in main
config = _prepareconfig(args, plugins)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 318, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/helpconfig.py", line 100, in pytest_cmdline_parse
config = outcome.get_result() # type: Config
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1003, in pytest_cmdline_parse
self.parse(args)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1280, in parse
self._preparse(args, addopts=addopts)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1186, in _preparse
self.hook.pytest_load_initial_conftests(
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/pluggy/callers.py", line 182, in _multicall
next(gen) # first yield
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/warnings.py", line 136, in pytest_load_initial_conftests
with catch_warnings_for_item(
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/warnings.py", line 52, in catch_warnings_for_item
apply_warning_filters(config_filters, cmdline_filters)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1602, in apply_warning_filters
warnings.filterwarnings(*parse_warning_filter(arg, escape=False))
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1576, in parse_warning_filter
category = warnings._getcategory(
File "/usr/lib/python3.8/warnings.py", line 260, in _getcategory
m = __import__(module, None, None, [klass])
File "/home/presto/Projects/sandbox/pytest_bug_warning/bunny/models.py", line 10, in <module>
class Bunny(models.Model):
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/django/db/models/base.py", line 108, in __new__
app_config = apps.get_containing_app_config(module)
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/django/apps/registry.py", line 253, in get_containing_app_config
self.check_apps_ready()
File "/home/presto/.virtualenvs/presto/lib/python3.8/site-packages/django/apps/registry.py", line 136, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</pre>
</details>
The project is straightforward.
We created the following Warning and Model:
```python
from django.db import models

# Create your models here.
class CarrotWarning(RuntimeWarning):
    pass

class Bunny(models.Model):
    pass
```
And use the following `setup.cfg`:
```
[tool:pytest]
filterwarnings =
    ignore::bunny.models.CarrotWarning
```
The bug was introduced by `pytest==6.1.0`; it worked fine with `pytest==6.0.2`.
We get that it seems to be introduced by pytest itself, but as it's Django-related, we assume that the fix will be in `pytest-django`.
We would guess from the changelog that it's linked to this issue: https://github.com/pytest-dev/pytest/issues/6681
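Not from the original report — a sketch of a workaround that follows from the traceback (module layout is hypothetical): keep the warning class out of `models.py`, so resolving the `filterwarnings` entry no longer imports Django models before `django.setup()` has run.
```python
# bunny/warnings.py -- sketch: a module that defines no Django models
class CarrotWarning(RuntimeWarning):
    pass
```
and point the filter at it instead:
```
[tool:pytest]
filterwarnings =
    ignore::bunny.warnings.CarrotWarning
```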
| 2hard
|
Title: Dataset and Pre-trained Model Files Not Accessible
Body: Dataset and the pre-trained model files are not accessible. Getting 403 Forbidden from the server
```text
$ sh get_data.sh
mkdir: cannot create directory ‘../data’: File exists
--2022-03-18 12:14:57--  http://sketch-code.s3.amazonaws.com/data/all_data.zip
Resolving sketch-code.s3.amazonaws.com (sketch-code.s3.amazonaws.com)... 52.217.16.188
Connecting to sketch-code.s3.amazonaws.com (sketch-code.s3.amazonaws.com)|52.217.16.188|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2022-03-18 12:14:57 ERROR 403: Forbidden.
```
| 0easy
|
Title: Error using augmentation
Body: Hi everyone,
I use augmentation as follows:
```python
import imgaug.augmenters as iaa

seq = iaa.Sometimes(0.833, iaa.Sequential([
    iaa.Fliplr(0.5),  # horizontal flips
    iaa.Crop(percent=(0, 0.1)),  # random crops
    # Small gaussian blur with random sigma between 0 and 0.5.
    # But we only blur about 50% of all images.
    iaa.Sometimes(0.5,
                  iaa.GaussianBlur(sigma=(0, 0.5))),
    # Strengthen or weaken the contrast in each image.
    iaa.ContrastNormalization((0.75, 1.5)),
    # Add gaussian noise.
    # For 50% of all images, we sample the noise once per pixel.
    # For the other 50% of all images, we sample the noise per pixel AND
    # channel. This can change the color (not only brightness) of the
    # pixels.
    iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05*255), per_channel=0.5),
    # Make some images brighter and some darker.
    # In 20% of all cases, we sample the multiplier once per channel,
    # which can end up changing the color of the images.
    iaa.Multiply((0.8, 1.2), per_channel=0.2),
    # Apply affine transformations to each image.
    # Scale/zoom them, translate/move them, rotate them and shear them.
    iaa.Affine(
        scale={"x": (0.8, 1.2), "y": (0.8, 1.2)},
        translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)},
        rotate=(-25, 25),
        shear=(-8, 8)
    )
], random_order=True))  # apply augmenters in random order
```
and the errors happen with the input array, mask:
```
ERROR:root:Error processing image {'id': '1281', 'source': 'dataset', 'path': '/content/drive/My Drive/mask rcnn/train_image/1281.jpg', 'annotation': '/content/drive/My Drive/mask rcnn/train_annotation/1281.xml'}
Traceback (most recent call last):
File "/content/drive/My Drive/mask rcnn/mrcnn/model.py", line 1709, in data_generator
use_mini_mask=config.USE_MINI_MASK)
File "/content/drive/My Drive/mask rcnn/mrcnn/model.py", line 1254, in load_image_gt
hooks=imgaug.HooksImages(activator=hook))
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 470, in augment_image
return self.augment_images([image], hooks=hooks)[0]
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 603, in augment_images
hooks=hooks
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 3386, in _augment_images
hooks=hooks
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 603, in augment_images
hooks=hooks
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 2816, in _augment_images
hooks=hooks
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/meta.py", line 603, in augment_images
hooks=hooks
File "/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/size.py", line 795, in _augment_images
image_cr_pa = ia.imresize_single_image(image_cr_pa, (height, width))
File "/usr/local/lib/python3.6/dist-packages/imgaug/imgaug.py", line 1289, in imresize_single_image
rs = imresize_many_images(image[np.newaxis, :, :, :], sizes, interpolation=interpolation)
File "/usr/local/lib/python3.6/dist-packages/imgaug/imgaug.py", line 1253, in imresize_many_images
result[i] = result_img
ValueError: could not broadcast input array from shape (1024,1024,512) into shape (1024,1024,0)
```
```
Traceback (most recent call last):
  File "/content/drive/My Drive/mask rcnn/mrcnn/model.py", line 1709, in data_generator
    use_mini_mask=config.USE_MINI_MASK)
  File "/content/drive/My Drive/mask rcnn/mrcnn/model.py", line 1257, in load_image_gt
    assert mask.shape == mask_shape, "Augmentation shouldn't change mask size"
AssertionError: Augmentation shouldn't change mask size
```
Hope you guys can help me, | 2hard
|
Title: Question about finite difference taps
Body: Hi! Thanks for the great work and the provided code!
Is there any difference in the accuracy of finite difference when taps is equal to 6 and 4? Will taps=6 be more accurate? Because it is 6 in the paper, but it is 4 by default in the code. | 3misc
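(Not from the thread — a generic numerical illustration rather than the paper's exact stencil: for smooth functions a 6-tap central difference has a higher order of accuracy than a 4-tap one, at the cost of extra evaluations.)
```python
# sketch: standard 4-tap (O(h^4)) vs 6-tap (O(h^6)) central differences
import numpy as np

def d4(f, x, h):
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

def d6(f, x, h):
    return (-f(x - 3*h) + 9*f(x - 2*h) - 45*f(x - h)
            + 45*f(x + h) - 9*f(x + 2*h) + f(x + 3*h)) / (60*h)

x, h = 0.7, 1e-2
exact = np.cos(x)
print(abs(d4(np.sin, x, h) - exact))  # ~2e-10
print(abs(d6(np.sin, x, h) - exact))  # near float roundoff, ~1e-14
```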
|
Title: Code execution raises TypeError: 'PandasConnector' object is not subscriptable; it looks like PandasConnector is not well linked to a DataFrame.
Body: ### System Info
pandasai==2.0.21
python==3.9.19
os==ubuntu
### 🐛 Describe the bug
The bug is that `PandasConnector` isn't properly linked to a pandas DataFrame; as the error message below shows, running the generated code that manipulates `df` raises an error.
The user asked the following question:
QUERY
Which Texas Instruments chips share the same operating temperature? (original query: 德州仪器中工作温度相同的芯片有哪些)
You generated this python code:
```python
import pandas as pd

# Assuming the DataFrame provided is already given to us as df.
df = dfs[0]

# Filter the DataFrame for Texas Instruments
texas_instruments_df = df[df['制造商'] == 'Texas Instruments']

# Ensure that we only consider rows where temperature values are not NaN
texas_instruments_df = texas_instruments_df.dropna(subset=['最小工作温度', '最大工作温度'])

# Group by unique combinations of minimum and maximum working temperature
grouped_by_temp_df = texas_instruments_df.groupby(['最小工作温度', '最大工作温度'])

# Extract groups with more than one entry (the same working temperature range)
common_temp_chips = {}
for (min_temp, max_temp), group in grouped_by_temp_df:
    if len(group) > 1:
        common_temp_chips[f"{min_temp}C to {max_temp}C"] = group['零件型号'].tolist()

# Convert dictionary to DataFrame for better readability
common_temp_chips_df = pd.DataFrame([(k, v) for k, v in common_temp_chips.items()], columns=['Temperature Range', 'Chip Models'])

# Showing the result
print(common_temp_chips_df)
```
It fails with the following error:
```
Traceback (most recent call last):
  File "/root/anaconda3/envs/chatdb/lib/python3.9/site-packages/pandasai/pipelines/chat/code_execution.py", line 64, in execute
    result = code_manager.execute_code(code_to_run, code_context)
  File "/root/anaconda3/envs/chatdb/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 182, in execute_code
    code_to_run = self._clean_code(code, context)
  File "/root/anaconda3/envs/chatdb/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 562, in _clean_code
    self._extract_fix_dataframe_redeclarations(node, clean_code_lines)
  File "/root/anaconda3/envs/chatdb/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 467, in _extract_fix_dataframe_redeclarations
    exec(code, env)
  File "<string>", line 3, in <module>
TypeError: 'PandasConnector' object is not subscriptable
```
| 2hard
|
Title: Switch to Dark Mode MkDocs
Body: Added the ability to switch the documentation's appearance to "Dark Mode" and back. | 1medium
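For reference, a minimal sketch of what such a toggle looks like in `mkdocs.yml`, assuming the Material for MkDocs theme (icon names are illustrative):
```yaml
# sketch: light/dark palette toggle for mkdocs-material
theme:
  name: material
  palette:
    - scheme: default
      toggle:
        icon: material/brightness-7
        name: Switch to dark mode
    - scheme: slate
      toggle:
        icon: material/brightness-4
        name: Switch to light mode
```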
|
Title: Missing tags for latest v3.6.x releases
Body: On PyPI, there are 3.6.4, 3.6.5 and 3.6.6 releases (https://pypi.org/project/pandas-profiling/#history), but no corresponding tags in this repo. Can you add them please? | 0easy
|
Title: TFLearn Memory Leak
Body: I'm running the following Flask application to train a DNN similar to AlexNet on some images. After each run I delete everything Python has available to me in memory. However, each run adds about 1GB of memory to the running Python process and produces an OOM exception sooner or later. Below is the entire application; each call to /build runs the model 30 times on random data. This is a proof of concept for a model which has far too much data to read into memory, so I'm sampling the data over iterations like this. I know I can use the Image Preloader, but because I have so much data I'm actually loading pickle files in each iteration and not image files at the moment. Either way, some memory continues to be allocated over each iteration of this application which I cannot deallocate.
```python
import tflearn
from flask import Flask, jsonify
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
import numpy as np

app = Flask(__name__)
keep_prob = .8
num_labels = 3
batch_size = 64

def reformat(dataset, labels, num_labels):
    if dataset is not None:
        dataset = dataset.reshape((-1, 227, 227, num_labels)).astype(np.float32)
    if labels is not None:
        labels = (np.arange(3) == labels[:, None]).astype(np.float32)
    return dataset, labels

class AlexNet():
    def __init__(self):
        @app.route('/build')
        def build():
            # Building 'AlexNet'
            network = input_data(shape=[None, 227, 227, 3])
            network = conv_2d(network, 96, 11, strides=4, activation='relu')
            network = max_pool_2d(network, 3, strides=2)
            network = local_response_normalization(network)
            network = conv_2d(network, 256, 5, activation='relu')
            network = max_pool_2d(network, 3, strides=2)
            network = local_response_normalization(network)
            network = conv_2d(network, 384, 3, activation='relu')
            network = conv_2d(network, 384, 3, activation='relu')
            network = conv_2d(network, 256, 3, activation='relu')
            network = max_pool_2d(network, 3, strides=2)
            network = local_response_normalization(network)
            network = fully_connected(network, 4096, activation='tanh')
            network = dropout(network, keep_prob)
            network = fully_connected(network, 4096, activation='tanh')
            network = dropout(network, keep_prob)
            network = fully_connected(network, num_labels, activation='softmax')
            network = regression(network, optimizer="adam",
                                 loss='categorical_crossentropy',
                                 learning_rate=0.001, batch_size=batch_size)
            model = tflearn.DNN(network, tensorboard_dir="./tflearn_logs/",
                                checkpoint_path=None, tensorboard_verbose=0)
            for i in range(30):
                data = np.random.randn(10, 227, 227, 3)
                labels = np.random.choice([0, 2], size=(10,), p=[1. / 3, 2. / 3])
                data, labels = reformat(data, labels, 3)
                model.fit(data, labels, n_epoch=1, shuffle=True,
                          validation_set=None,
                          show_metric=True, batch_size=batch_size, snapshot_step=None,
                          snapshot_epoch=True, run_id=None)
                del data
                del labels
            del model
            del network
            return jsonify(status=200)

if __name__ == "__main__":
    AlexNet()
    app.run(host='0.0.0.0', port=5000, threaded=True)
```
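Not from the original issue — a sketch of one likely mitigation (TF1-era API): every `/build` call adds a fresh copy of the network to TensorFlow's default graph, and `del` on the Python references doesn't free those graph nodes; clearing the graph between runs does.
```python
# sketch: drop all nodes accumulated in the default graph before rebuilding
import tensorflow as tf

tf.reset_default_graph()  # call at the top of build(), before recreating the network
```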
| 2hard
|
Title: Adding multiple outputs results in one output appearing until page refresh (master branch)
Body: Problem: Enter 2 in quantity field on output page, select any output, select Add. Only one output is added to the page. Refresh the page and two outputs are there.
Fix: allow multiple outputs to be added via the new ajax form submission system. | 1medium
|
Title: Tensorflow 2.0 parallel_model problem
Body: I'm trying to use mrcnn today but I ran into this problem.
Do you have an idea for the solution?
```
model = modellib.MaskRCNN(mode="training", config=config, model_dir=args.logs)
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/mask_rcnn-2.1-py3.7.egg/mrcnn/model.py", line 1835, in __init__
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/mask_rcnn-2.1-py3.7.egg/mrcnn/model.py", line 2060, in build
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/mask_rcnn-2.1-py3.7.egg/mrcnn/parallel_model.py", line 37, in __init__
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/mask_rcnn-2.1-py3.7.egg/mrcnn/parallel_model.py", line 79, in make_parallel
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/mask_rcnn-2.1-py3.7.egg/mrcnn/parallel_model.py", line 79, in <listcomp>
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/keras/engine/base_layer.py", line 443, in __call__
previous_mask = _collect_previous_mask(inputs)
File "/hdd-raid0/home_server/houssem/anaconda3/envs/hxf_env/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1311, in _collect_previous_mask
mask = node.output_masks[tensor_index]
AttributeError: 'Node' object has no attribute 'output_masks'
``` | 2hard
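(Not stated in the issue — this error pattern is typical of running the TF1-era Mask R-CNN code against newer TensorFlow/Keras; a hedged sketch of version pins the codebase was written for:)
```text
# hypothetical requirements pins for the original TF1-era Mask R-CNN code
tensorflow-gpu==1.15.0
keras==2.2.4
```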
|
Title: 404 not found — the project links are dead!
Body: In the latest issue ([issue 97](https://github.com/521xueweihan/HelloGitHub/blob/master/content/HelloGitHub97.md#%E5%85%B6%E5%AE%83)):
<img width="1029" alt="image" src="https://github.com/521xueweihan/HelloGitHub/assets/85916131/2b4b2e8c-ecf2-43db-bc14-46414cd90c33">
Clicking through, the links are all gone...
<img width="915" alt="image" src="https://github.com/521xueweihan/HelloGitHub/assets/85916131/bce741d1-0ce7-416e-9d22-5ed02c695a2d">
Suggestion: when submitting on the 28th, maybe give the projects one final review. | 0easy
|
Title: Translate the Darts documentation into Chinese
Body: **I want to translate the Darts docs into Chinese and make this great repo more and more popular — anyone want to work with me?**
-> in Chinese: I'd like to translate the Darts documentation into Chinese — is anyone up for doing it together, to make Darts more influential?
| 1medium
|
Title: Project dependencies may have API risk issues
Body: Hi! In **flasgger**, inappropriate dependency version constraints can introduce risks.
Below are the dependencies and version constraints that the project is using
```
Flask>=0.10
PyYAML>=3.0
jsonschema>=3.0.1
six>=1.10.0
mistune*
werkzeug*
```
A **==** version constraint introduces the risk of dependency conflicts because the dependency scope is too strict.
A **no upper bound** or **\*** constraint introduces the risk of missing-API errors, because the latest version of a dependency may remove some APIs.
After further analysis, in this project,
The version constraint of dependency **Flask** can be changed to *>=0.10,<=0.12.5*.
The above modification suggestion reduces dependency conflicts as much as possible,
while still allowing versions as recent as possible without triggering errors in the project.
The invocation of the current project includes all the following methods.
<details>
<summary>The calling methods from the Flask</summary>
<pre>
json.dumps
json.loads
json.dump
</pre>
</details>
<details>
<summary>The calling methods from the all methods</summary>
<pre>
value.split
rule.methods.difference
parameter.get
content.values
full_doc.replace
wraps
schema.get.lower
schema.items
is_python_file
APISpecsView.as_view
re.compile
specs_data.get
OneOf
self.get_apispecs
mapping.items
read
parsed_data.items
app.run
request.endpoint.lower
self.endpoints.append
operations.items
self.load
spec.get.get
swag.keys
get_specs
Response
k.startswith
resolve_path
verb.lower
Api
rule.endpoint.replace
regex.search
__file__.os.path.dirname.root_path.swag_path.replace.split
deepcopy
self._init_config
reqparse.RequestParser
Swagger
user.password.encode
jwt.jwt_encode_callback
isinstance
definitions.get
abort
Exception
def_id.definitions.update
click.command
inspect.getfile
location.parsers.add_argument
definition.pop
retrieved_schema.viewitems
package_spec.origin.replace
os.environ.get
path_def.values
paths.values
APIDocsView.as_view
parser.parse_args
EmptyView.as_view
apispec_swag.get
swagger.app.app_context
data.get
filename.endswith
PetSchema
jsonschema.FormatChecker
get_schema_specs
set_from_filepath
swag.init_app
self.load_config
os.getcwd
definitions.update
UserAPI.as_view
stream.read
node.extend
cat.get
swag_path.replace
doc.get.get
UserPostView.as_view
spec.to_flasgger
example_blueprint.add_url_rule
function.__annotations__.items
flasgger.utils.validate
srule.startswith
final_filepath.split
key.srule.paths.update
definitions.items
authenticate
grep
PostAPIView.as_view
is_valid_method_view
self.init_app
os.sep.swag_path.replace.replace
full_doc.rfind
swag.update
str
source.get
swagger.config.get
self.validation_error_handler
received.items
wrap_view
ordered_dict_to_dict
f.read
FlaskPlugin
ItemMethodView.as_view
remove_suffix
self.CachedLazyString.super.__init__
OAuthRedirect.as_view
response.data.decode
import_prop.start
callbacks.values
jwt_required
fields.String
source.items
username_table.get
name.definitions.update
specs.update
raw.startswith
app.config.get
mod.__name__.split
new_d.items
target.setdefault
os.path.abspath
find_packages
location.parsers.parse_args
BPUserPostView.as_view
swag.get_schema
jwt.request_handler
parts.lower
JWTError
dict
f
import_prop.group
examples_dir.replace
self.APISpecsView.super.__init__
self.text_type
self.register_views
super
app.add_url_rule
utils.extract_schema
specs_data.items
Length
rule.rule.startswith
raw_definitions.get
MarshmallowPlugin
argparse.ArgumentParser
Animals.as_view
sub.index
self.add_headers
desc
inspect.getdoc
open
process_doc
d.items
abort_if_todo_doesnt_exist
fields.Str
specs.append
value.startswith
parsers.keys
sorted
doc.get.get.items
received.get
request_body.get
check_auth
param.get
param.get.get
full_doc.find
self.config.get
Flask
type
getattr
auth_header_prefix.lower
re.findall
file_content.rfind
validation_function
self.load_swagger_file
json.dump
ItemsView.as_view
spec.get
cat.items
self.data.dump
request.url_rule.str.split
defi.lower
request.json.get
validation_error_handler
item.get
swag_annotation
method.__dict__.get
schema2jsonschema
format
render_template
d.get.get
actual_schema.items
is_openapi3
rule_filter
hasattr
Blueprint
pathify
jsonschema.validate
data
self.vars.items
click.option
fname.fpath.open.read
RuntimeError
apispec.get
redirect
schema.__name__.replace
swag.extract_schema.get
example_blueprint.route
swag.validate
KeyError
parse_definition_docstring
is_path
d_schema_id.lower
iter
click.echo
os.path.splitext
issubclass
kwargs.pop
schema_id.lower
received.viewitems
PaletteView.as_view
partial
os.sep.join
client.put
self.loader
openapi.OpenAPIConverter
callbacks.items
validate
self.DEFAULT_CONFIG.copy
schema_id.split
setattr
verbs.append
auth_header_value.split
swag.setdefault
self.validation_function
inspect.isclass
json.loads
DispatcherMiddleware
self.LazyJSONEncoder.super.default
len
app.register_blueprint
os.path.join
subitem.dump
annotation.to_specs_dict
get_examples
url_for
base_path.endswith
defaultdict
AttributeError
rule.endpoint.startswith
User
request.form.get
spec.path
APISpec
model_filter
subprocess.check_call
max
fpath.endswith
spec.components.schema
vars
__replace_ref
responses.values
Meow.as_view
self.last_event.dump
retrieved_schema.items
client.get
responses.items
DEFAULT_FIELDS.items
self.config.update
TODOS.keys
api.add_resource
swag_from
swag.get.get
parser.add_argument
filepath.startswith
d.get
subs.append
fields.Nested
self.is_openapi3
OrderedDict
properties.values
schema.get
self.parse_request
client.post
extract_schema
imported_doc.replace
click.File
examples.items
blueprint.add_url_rule
request.method.lower
request.endpoint.lower.replace
self.update_schemas_parsers
item.update
hack
request.headers.get
srule.replace
decorator
import_module
alernative_schema.update
annotation
codecs.open
self.get_url_mappings
validate_annotation
filename.startswith
spec.get.self.get_def_models.items
self.get_def_models
defi.copy
fields.Int
parse_imports
callable
sys.path.append
data.str.lower
TODOS.keys.max.lstrip
openapi_version.str.split
join
self.definition_models.append
verb_swag.get
userid_table.get
JWT
markdown
SwaggerDefinition
endpoint.__dict__.get
full_doc.startswith
load_from_file
Markup
set
yaml_file.read
extract_definitions
variable.annotation.validate_annotation
data.setdefault
any
self.set_schemas
has_valid_dispatch_view_docs
an.to_specs_dict
request.args.get
endpoint.replace
detect_by_bom
inspect.stack
swag.get
importlib.util.find_spec
LazyString
safe_str_cmp
set_from_specs_dict
all_colors.get
function
contents.strip
data.update
full_doc.replace.strip
time.time
current_app.swag.get_apispecs
os.path.isfile
list
os.path.dirname
_extract_array_defs
app.route
APIDocsView
fields.List
request.form.keys
tuple
loader
paths.get
OAuthRedirect
get_path_from_doc
func
json.dumps
self.SwaggerView.super.dispatch_request
fpath
setup
specs_data.values
cat.viewitems
current_app.url_map.iter_rules
yaml.safe_load
actual_schema.viewitems
text.replace
jsonify
get_root_path
os.path.expanduser
parse_docstring
regex.sub
Path
get_vendor_extension_fields
self.APIDocsView.super.__init__
int
print
TestView.as_view
run_simple
spec.to_dict
schema2parameters
SubItem
doc.get
value.get
swag_path.split
resp.headers.extend
methods.items
defs.append
view_args.get
convert_schemas
merge_specs
swag.definition
password.encode
flasgger.utils.apispec_to_template
PetSchema.dump
app.app_context
new_v.append
file.read
os.listdir
self._func
copy.deepcopy
</pre>
</details>
@vshih
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
| 2hard
|
Title: Model stored/encoded as a delimited string in the database?
Body: ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional

from pydantic import BaseModel

class ActualData(BaseModel):
    id: int
    KnownName: str
    NickName: str
    Mother: str
    Father: str
    SocialSecurity: int
    Pets: Optional[bool]
    Vegan: Optional[bool]
```
```
Database Schema:
id: int
name: str # KnownName
details: str # f"KnownName={KnownName};NickName={NickName};Mother={Mother};Father={Father};SocialSecurity={SocialSecurity};Pets={Pets};Vegan={Vegan}"
```
### Description
I have a very strange constraint: I am using SQLModel as my ORM. I have a database with two important fields, for example `name` and `details`. The important information I need for my model `ActualData` is built from the `details` column of the database.
How do I use the `ActualData` model in my code, but when I submit/save/read from the database, it is encoded as a character delimited string structure?
[pydantic/validators](https://pydantic-docs.helpmanual.io/usage/validators/) was very helpful but it fills one field. Is it possible to fill out the entire model with one validator? How does one encode the model back to a single string within the database?
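Not part of the question itself, but one way this is commonly approached — a minimal sketch using SQLAlchemy's `TypeDecorator` API (the split/join scheme mirrors the example schema; a SQLModel field could then be declared via `Field(sa_column=Column(DelimitedDetails))`):
```python
# sketch: pack/unpack the delimited string at the database boundary
from sqlalchemy.types import TypeDecorator, String

class DelimitedDetails(TypeDecorator):
    impl = String

    def process_bind_param(self, value, dialect):
        # dict -> "KnownName=...;NickName=...;..."
        if value is None:
            return None
        return ";".join(f"{k}={v}" for k, v in value.items())

    def process_result_value(self, value, dialect):
        # "KnownName=...;..." -> dict, ready for ActualData(**parts)
        if value is None:
            return None
        return dict(part.split("=", 1) for part in value.split(";"))
```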
### Operating System
Windows
### Operating System Details
- win11 pro, python from windows store, pipenv
### SQLModel Version
0.0.8
### Python Version
3.10.8
### Additional Context
Think of [storing the string components of postgresql database connection string](https://www.connectionstrings.com/postgresql/), but storing it as a completed connection string
`User ID=root;Password=myPassword;Host=localhost;Port=5432;Database=myDataBase;Pooling=true;Min Pool Size=0;Max Pool Size=100;Connection Lifetime=0;` | 1medium
|
Title: mlfromscratch/supervised_learning/__init__.py has references to undefined python files.
Body: Everything you need to know is in the title. ` mlfromscratch/supervised_learning/__init__.py` has references to undefined python files.
To reproduce, run: "python demo.py", you get:
ImportError: No module named linear_regression
Because `supervisedLearning/__init__.py` has references to regression classes in files that don't exist.
I was able to correct them in my fork and rebuild and all is well, just have to rename the "whatever_regression" to regression and do the setup again.
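For illustration, a sketch of the kind of correction involved (the exact class names exported by `regression.py` are assumptions):
```python
# mlfromscratch/supervised_learning/__init__.py -- sketch:
# point the imports at the module that actually exists (regression.py)
# instead of the per-model filenames
from .regression import LinearRegression, LassoRegression, RidgeRegression
```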
Thinking it over, these may be intentional as an exercise for the user. Good job. :1st_place_medal: | 0easy
|
Title: Processes stopped when passing large objects to function to be parallelized
Body: Problem:
Apply an NLP deep learning model for text generation over the rows of a pandas Series. The function call is:
`out = text_column.parallel_apply(lambda x: generate_text(args, model, tokenizer, x))`
where `args`, `tokenizer` are light objects but `model` is a heavy object, storing a Pytorch model which weighs more than 6GB on secondary memory and takes up **~12GB RAM** when running it.
I have been doing some tests, and the problem arises only when I pass the heavy model to the function (even without actually running it inside the function), so it seems that the problem is **passing an argument object that takes up a lot of memory**. (Maybe related to the shared-memory strategy used for parallel computing.)
After running `parallel_apply`, the output I get is:
```
INFO: Pandarallel will run on 8 workers.
INFO: Pandarallel will use standard multiprocessing data tranfer (pipe) to transfer data between the main process and workers.
0.00% | 0 / 552 |
0.00% | 0 / 552 |
0.00% | 0 / 551 |
0.00% | 0 / 551 |
0.00% | 0 / 551 |
0.00% | 0 / 551 |
0.00% | 0 / 551 |
0.00% | 0 / 551 |
```
And it gets stuck there forever. Indeed, there are two processes spawned and both are **stopped**:
```
ablanco+ 85448 0.0 4.9 17900532 12936684 pts/27 Sl 14:41 0:00 python3 text_generation.py --input_file input.csv --model_type gpt2 --output_file out.csv --no_cuda --n_cpu 8
ablanco+ 85229 21.4 21.6 61774336 57023740 pts/27 Sl 14:39 2:26 python3 text_generation.py --input_file input.csv --model_type gpt2 --output_file out.csv --no_cuda --n_cpu 8
```
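Not from the original report — a minimal workaround sketch: instead of shipping the 12 GB model to every worker as an argument, let each worker load it lazily on first use (`load_model` is a hypothetical stand-in for however the model is constructed):
```python
# sketch: lazy per-worker model cache, so the heavy object is never pickled
# and piped from the parent process to the workers
_model = None

def generate_lazily(x):
    global _model
    if _model is None:
        _model = load_model()  # hypothetical loader, runs inside the worker
    return generate_text(args, _model, tokenizer, x)

out = text_column.parallel_apply(generate_lazily)
```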
| 2hard
|
Title: Can't place multiple devices in rack with non-racked device (using API)
Body: ### Deployment Type
Self-hosted
### Triage priority
This is preventing me from using NetBox
### NetBox Version
v4.1.4
### Python Version
3.12
### Steps to Reproduce
1. Create site
2. Create rack in the site (for example with id = 1)
3. Create first non-racked device in rack
4. Create second device
5. Try placing the device in the rack as a non-racked device via the API
Important: I can only reproduce the problem using the REST API.
#### API Calls
```
# Create first device, non-racked in the rack (remember the id, ensure there are no devices non-racked before)
curl -X POST -H "authorization: Token TOKEN" -H 'Content-Type: application/json' -d '{"device_type":116,"role":25,"site":1,"rack":168}' https://netbox.example.org/api/dcim/devices/ | jq .id
# Create second device, floating around in the site
curl -X POST -H "authorization: Token TOKEN" -H 'Content-Type: application/json' -d '{"device_type":116,"role":25,"site":1}' https://netbox.example.org/api/dcim/devices/ | jq . | less
# Try moving the device to the rack of the first device non-racked (fails)
curl -X PATCH -H "authorization: Token TOKEN" -H 'Content-Type: application/json' -d '{"rack":168}' https://netbox.example.org/api/dcim/devices/$ID_FROM_THE_FIRST_COMMAND/ | jq . | less
# Run first command again, creating another device, doesn't fail strangely
curl -X POST -H "authorization: Token TOKEN" -H 'Content-Type: application/json' -d '{"device_type":116,"role":25,"site":1,"rack":168}' https://netbox.example.org/api/dcim/devices/ | jq .id
```
### Expected Behavior
Device should be placed in the rack as non-racked.
### Observed Behavior
HTTP 400 Error Code
```
{"non_field_errors": ["The fields rack, position, face must make a unique set."]}
``` | 1medium
|
Title: SyntaxWarning: "is" with a literal. Did you mean "=="?
Body: Hi all!
If I install python>=3.8.1 and segmentation-models-pytorch==0.32.2, then a warning occurs on `import segmentation_models_pytorch as smp`. The warning appears in the `pretrainedmodels` library.
```
Python 3.9.15 (main, Mar 13 2023, 09:18:54)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import segmentation_models_pytorch as smp
/home/jovyan/.cache/pypoetry/virtualenvs/s-outer-seg-sem-facade-mvp280223smp-82NsP1dj-py3.9/lib/python3.9/site-packages/pretrainedmodels/models/dpn.py:255: SyntaxWarning: "is" with a literal. Did you mean "=="?
if block_type is 'proj':
/home/jovyan/.cache/pypoetry/virtualenvs/s-outer-seg-sem-facade-mvp280223smp-82NsP1dj-py3.9/lib/python3.9/site-packages/pretrainedmodels/models/dpn.py:258: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif block_type is 'down':
/home/jovyan/.cache/pypoetry/virtualenvs/s-outer-seg-sem-facade-mvp280223smp-82NsP1dj-py3.9/lib/python3.9/site-packages/pretrainedmodels/models/dpn.py:262: SyntaxWarning: "is" with a literal. Did you mean "=="?
assert block_type is 'normal'
```
The warning disappears when I install Python 3.7.5.
Segmentation-models version 0.3.1 added `Codebase refactoring and style checks (black, flake8)`. I think the problem is in flake8, and on Python 3.7.5 maybe flake8 is being ignored.
Is there a way to disable flake8, to turn off the warning on Python >= 3.8? | 1medium
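(A minimal sketch of silencing the warning at import time with Python's standard `warnings` machinery — this works regardless of which tool emits it:)
```python
# sketch: suppress SyntaxWarning only around the import that triggers it
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", SyntaxWarning)
    import segmentation_models_pytorch as smp
```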
|
Title: Error in reading data into a dataframe on encountering an empty cell
Body: I have the following data in an xl sheet, cell A7 onwards
|              | FY-5  |      | FY-4    | FY-3    | FY-2    | FY-1    | FY0     |
| ------------ | ----- | ---- | ------- | ------- | ------- | ------- | ------- |
| INE002A01018 | 64852 | NULL | 85490   | 89193   | 80298   | 120991  | 142908  |
| INE467B01029 | 32532 | NULL | 39512   | 42110   | 46547   | 53048   | 59259   |
| INE982J01020 | NULL  | NULL | -4343.8 | -2634.2 | -1763.4 | -2332.4 | -1631.5 |
I am reading this into a pandas dataframe using the following:
```
df = sheet.range('A7').options(pd.DataFrame,
                               header=1,
                               index=False,
                               empty="NA",
                               expand='table').value
df
```
Output is:
| NA           | FY-5    |
| ------------ | ------- |
| INE002A01018 | 64852.0 |
| INE467B01029 | 32532.0 |
| INE982J01020 | NULL    |
Basically, it stops reading the table on encountering an empty cell (C7). Setting the attribute "empty" to "NA" does not solve the problem. I cannot use the "used_range" attribute because this is just 1 table of many in the sheet.
Any ideas on what I can do to read the whole table correctly?
Thanks!
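(Not from the original thread — a minimal sketch of one workaround: read an explicit rectangular range so `expand='table'` never has to cross the empty cell; the end cell `H10` is assumed from the sample data:)
```python
# sketch: read a fixed range instead of expanding from A7, so the empty
# header cell can't truncate the table
import pandas as pd
import xlwings as xw

sheet = xw.books.active.sheets.active  # however the sheet is actually obtained
df = sheet.range('A7:H10').options(pd.DataFrame, header=1, index=False).value
```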
| 1medium
|
Title: Feature req: monthly report of approved and installed updates
Body: monthly report of approved and installed updates per client/site
Just a feature that customers may be willing to pay extra for. And paying means earning, and earning means budget for more sponsorship :)
| 1medium
|
Title: Tools-Alignment-Manual broken
Body: Hi torzdf,
I have problems with Alignments -> Manual: when I call the command, the start image flashes briefly and then the process closes without an error message. I previously had small problems with conda and couldn't do proper updates anymore, but now everything works again. I had no problems using Manual before, and everything else works. Do you have an idea? Thank you in advance.
Tobi
============ System Information ============
encoding: cp1252
git_branch: Not Found
git_commits: Not Found
gpu_cuda: 10.1
gpu_cudnn: 7.6.2
gpu_devices: GPU_0: GeForce GTX 1050 Ti, GPU_1: GeForce GTX 1050 Ti
gpu_devices_active: GPU_0, GPU_1
gpu_driver: 425.25
gpu_vram: GPU_0: 4096MB, GPU_1: 4096MB
os_machine: AMD64
os_platform: Windows-10-10.0.18362-SP0
os_release: 10
py_command: C:\faceswap\faceswap.py gui
py_conda_version: N/A
py_implementation: CPython
py_version: 3.7.1
py_virtual_env: False
sys_cores: 6
sys_processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
sys_ram: Total: 16242MB, Available: 8182MB, Used: 8059MB, Free: 8182MB
=============== Pip Packages ===============
absl-py==0.7.1
alabaster==0.7.12
anaconda-client==1.7.2
anaconda-navigator==1.9.7
anaconda-project==0.8.2
asn1crypto==0.24.0
astor==0.8.0
astroid==2.1.0
astropy==3.1
atomicwrites==1.2.1
attrs==18.2.0
Babel==2.6.0
backcall==0.1.0
backports.os==0.1.1
backports.shutil-get-terminal-size==1.0.0
beautifulsoup4==4.6.3
bitarray==0.8.3
bkcharts==0.2
blaze==0.11.3
bleach==3.0.2
bokeh==1.0.2
boto==2.49.0
Bottleneck==1.2.1
certifi==2018.11.29
cffi==1.11.5
chardet==3.0.4
Click==7.0
cloudpickle==0.6.1
clyent==1.2.2
cmake==3.12.0
colorama==0.4.1
comtypes==1.1.7
conda==4.7.10
conda-build==3.17.6
conda-package-handling==1.3.11
conda-verify==3.1.1
contextlib2==0.5.5
cryptography==2.4.2
cycler==0.10.0
Cython==0.29.2
cytoolz==0.9.0.1
dask==1.0.0
datashape==0.5.4
decorator==4.3.0
defusedxml==0.5.0
distributed==1.25.1
dlib==19.16.0
docutils==0.14
entrypoints==0.2.3
et-xmlfile==1.0.1
face-recognition==1.2.3
face-recognition-models==0.3.0
fastcache==1.0.2
fastcluster==1.1.25
ffmpy==0.2.2
filelock==3.0.10
Flask==1.0.2
Flask-Cors==3.0.7
future==0.17.1
gast==0.2.2
gevent==1.3.7
glob2==0.6
google-pasta==0.1.7
greenlet==0.4.15
grpcio==1.22.0
h5py==2.9.0
heapdict==1.0.0
html5lib==1.0.1
idna==2.8
imageio==2.5.0
imageio-ffmpeg==0.3.0
imagesize==1.1.0
importlib-metadata==0.6
ipykernel==5.1.0
ipython==7.2.0
ipython-genutils==0.2.0
ipywidgets==7.4.2
isort==4.3.4
itsdangerous==1.1.0
jdcal==1.4
jedi==0.13.2
Jinja2==2.10
joblib==0.13.2
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.2.4
jupyter-console==6.0.0
jupyter-core==4.4.0
jupyterlab==0.35.3
jupyterlab-server==0.2.0
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
keyring==17.0.0
kiwisolver==1.0.1
lazy-object-proxy==1.3.1
libarchive-c==2.8
llvmlite==0.26.0
locket==0.2.0
lxml==4.2.5
Markdown==3.1.1
MarkupSafe==1.1.0
matplotlib==2.2.2
mccabe==0.6.1
menuinst==1.4.14
mistune==0.8.4
mkl-fft==1.0.6
mkl-random==1.0.2
mock==3.0.5
more-itertools==4.3.0
mpmath==1.1.0
msgpack==0.5.6
multipledispatch==0.6.0
navigator-updater==0.2.1
nbconvert==5.4.0
nbformat==4.4.0
networkx==2.2
nltk==3.4
nose==1.3.7
notebook==5.7.4
numba==0.41.0
numexpr==2.6.8
numpy==1.16.2
numpydoc==0.8.0
nvidia-ml-py3==7.352.1
odo==0.5.1
olefile==0.46
opencv-python==4.1.0.25
openpyxl==2.5.12
packaging==18.0
pandas==0.23.4
pandocfilters==1.4.2
parso==0.3.1
partd==0.3.9
path.py==11.5.0
pathlib==1.0.1
pathlib2==2.3.3
patsy==0.5.1
pep8==1.7.1
pickleshare==0.7.5
Pillow==6.1.0
pkginfo==1.4.2
pluggy==0.8.0
ply==3.11
prometheus-client==0.5.0
prompt-toolkit==2.0.7
protobuf==3.9.0
psutil==5.6.3
py==1.7.0
pycodestyle==2.4.0
pycosat==0.6.3
pycparser==2.19
pycrypto==2.6.1
pycurl==7.43.0.2
pyflakes==2.0.0
Pygments==2.3.1
pylint==2.2.2
pyodbc==4.0.25
pyOpenSSL==18.0.0
pyparsing==2.3.0
PySocks==1.6.8
pytest==4.0.2
pytest-arraydiff==0.3
pytest-astropy==0.5.0
pytest-doctestplus==0.2.0
pytest-openfiles==0.3.1
pytest-remotedata==0.3.1
python-dateutil==2.7.5
pytz==2018.7
PyWavelets==1.0.1
pywin32==224
pywinpty==0.5.5
PyYAML==3.13
pyzmq==17.1.2
QtAwesome==0.5.3
qtconsole==4.4.3
QtPy==1.5.2
requests==2.21.0
rope==0.11.0
ruamel-yaml==0.15.46
scandir==1.7
scikit-image==0.15.0
scikit-learn==0.21.3
scipy==1.1.0
seaborn==0.9.0
Send2Trash==1.5.0
simplegeneric==0.8.1
singledispatch==3.4.0.3
six==1.12.0
snowballstemmer==1.2.1
sortedcollections==1.0.1
sortedcontainers==2.1.0
Sphinx==1.8.2
sphinxcontrib-websupport==1.1.0
spyder==3.3.2
spyder-kernels==0.3.0
SQLAlchemy==1.2.15
statsmodels==0.9.0
sympy==1.3
tables==3.4.4
tblib==1.3.2
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-estimator==1.13.0
tensorflow-gpu==1.13.1
termcolor==1.1.0
terminado==0.8.1
testpath==0.4.2
toolz==0.9.0
toposort==1.5
tornado==5.1.1
tqdm==4.33.0
traitlets==4.3.2
unicodecsv==0.14.1
urllib3==1.24.1
vboxapi==1.0
wcwidth==0.1.7
webencodings==0.5.1
Werkzeug==0.14.1
widgetsnbextension==3.4.2
win-inet-pton==1.0.1
win-unicode-console==0.5
wincertstore==0.2
wrapt==1.11.2
xlrd==1.2.0
XlsxWriter==1.1.2
xlwings==0.15.1
xlwt==1.3.0
zict==0.1.3 | 1medium
|
Title: OpenAI API Compatible Endpoints for each Agent (with optional additional arguments)
Body: ### Feature/Improvement Description
So much new dev now is using OpenAI API compatibility. Every time there's a new demo CoLab Notebook for a paper, it simply wants an OpenAI Key and can usually accept an OPEN_AI_BASE or openai.api_base endpoint.
AGiXT's backend really shines as a "one-ring-to-rule-them-all" solution - there's already support for so many diverse providers and the ability to set each agent to use a different provider and with different prompts and settings. And it's already set up to be a FastAPI server.
A way to point that Notebook/Langchain/Code at an endpoint for a particular agent in AGiXT (e.g. localhost:7437/api/agent/{agent_name}/v1) would be a game-changer. It would allow use of the same server or docker image for a half-dozen LLMs, some proxied like revChatGPT (with different model settings, prompts, etc.), and some local specialized LLMs (e.g. Gorilla), all handled by one server, ideally with parallel workers for each agent.
That way when the next camel, tree-of-thoughts, fun-new-langchain-experiment comes along, I can simply point it at different agents on a single AGiXT server, each with its own endpoint.
Today's post from the FastChat folks summarizes the point here nicely:
https://lmsys.org/blog/2023-06-09-api-server/
### Proposed Solution
Update [app.py](https://github.com/Josh-XT/AGiXT/blob/main/agixt/app.py) to include [OpenAI API Endpoints.](https://platform.openai.com/docs/models/model-endpoint-compatibility)
Specifically /v1/chat/completions, /v1/completions, /v1/embeddings endpoints to each agent.
For example, currently an agent can be reached at /api/agent/{agent_name}/chat
Add
/api/agent/{agent_name}/v1/chat/completions
/api/agent/{agent_name}/v1/completions
/api/agent/{agent_name}/v1/embeddings
And check that the JSONs/payloads returned are in line with what's spec-ed out at https://platform.openai.com/docs/api-reference
This has been done for individual engines and proxies, for example:
https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md
https://github.com/acheong08/ChatGPT-to-API
https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai
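A minimal sketch of the endpoint shape (FastAPI; the `run_agent` stub and anything beyond the documented OpenAI response fields are illustrative assumptions, not AGiXT internals):
```python
# sketch: OpenAI-compatible chat-completions endpoint scoped to a named agent
import time
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str = ""
    messages: list

async def run_agent(agent_name: str, messages) -> str:
    return "ok"  # stub standing in for AGiXT's per-agent provider call

@app.post("/api/agent/{agent_name}/v1/chat/completions")
async def chat_completions(agent_name: str, req: ChatRequest):
    reply = await run_agent(agent_name, req.messages)
    return {
        "id": f"chatcmpl-{agent_name}-{int(time.time())}",
        "object": "chat.completion",
        "created": int(time.time()),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    }
```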
### Acknowledgements
- [X] I have searched the existing issues to make sure this feature has not been requested yet.
- [X] I have provided enough information for everyone to understand why this feature request is needed in AGiXT. | 1medium
|
Title: Require Python >= 3.9
Body: Python 3.8 is officially EOL.
[EOL Announcement](https://discuss.python.org/t/python-3-8-is-now-officially-eol/66983) | [Status of Python versions](https://devguide.python.org/versions/)
The next txtai release will require Python 3.9. The build scripts, docker images and anything else with 3.8 support should be updated.
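For reference, a sketch of the standard setuptools mechanism for enforcing this (illustrative, not txtai's actual `setup.py`):
```python
# sketch: declare the minimum interpreter so pip refuses to install on 3.8
from setuptools import setup

setup(
    name="txtai",
    python_requires=">=3.9",
)
```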
| 1medium
|
Title: strawberry.ext.mypy_plugin PydanticModelField.to_argument error
Body: I think I'm having some issues with strawberry.ext.mypy_plugin when using a pydantic model to make my types
When I enable the plugin, I get this error (see image below). If I disable it, everything runs.
## Describe the Bug
When I enable the plugin, I get the error shown below. If I disable it, it runs.

## System Information
- Operating system: Linux
Using version ^0.224.1 for strawberry-graphql
Using version ^2.6.4 for pydantic
Using version ^0.0.16 for sqlmodel
Using version ^1.9.0 for mypy
## Additional Context
This is the file shown in the error:
```py
from strawberry import auto, type
from strawberry.experimental.pydantic import type as pydantic_type

from database.models.brazil import Address, City, State

@pydantic_type(name='State', model=State)
class StateType:
    name: auto
    acronym: auto

@pydantic_type(name='City', model=City)
class CityType:
    ibge: auto
    name: auto
    ddd: auto

@type(name='Coordinates')
class CoordinatesType:
    latitude: float
    longitude: float
    altitude: float

@pydantic_type(name='Address', model=Address)
class AddressType:
    zipcode: auto
    city: CityType
    state: StateType
    neighborhood: auto
    complement: auto
    coordinates: CoordinatesType | None = None
``` | 1medium
|
Title: Enhancement request: async execution for a non-defined function
Body: ## Context
My use case is to be able to execute a function ("task") using the async execution in a different lambda. That lambda has a different code base than the calling lambda. In other words, the function ("task") to be executed is not defined in the calling lambda.
The async execution lets you specify a remote lambda and remote region but the function (to be executed) has to be defined in the code. The request is to be able to simply provide a function name as a string in the form of <module_name>.<function_name>.
This obviously does not work for the decorator. It works only using "zappa.async.run".
## Expected Behavior
The below should work:
```python
from zappa.async import run

run(func="my_module.my_function", remote_aws_lambda_function_name="my_remote_lambda", remote_aws_region='us-east-1', kwargs=kwargs)
```
## Actual Behavior
The function/task path is retrieved via inspection (hence requiring a function type) by `get_func_task_path`.
## Possible Fix
This is a bit hackish but is the least intrusive. I'll make a PR but I'm thinking of:
```python
def get_func_task_path(func):
    """
    Format the modular task path for a function via inspection if the param
    is a function. If the param is of type string, it will simply be returned.
    """
    if isinstance(func, (str, unicode)):
        return func
    module_path = inspect.getmodule(func).__name__
    task_path = '{module_path}.{func_name}'.format(
        module_path=module_path,
        func_name=func.__name__
    )
    return task_path
```
| 1medium
|
Title: attributes shall be resolvable on registered clients
Body: **Is your feature request related to a problem? Please describe.**
When I register a client like:
```
from authlib.integrations.starlette_client import OAuth
oauth = OAuth()
oauth.register(
name="example",
client_id=client_id,
client_secret=client_secret,
server_metadata_url=well_known_url,
client_kwargs={"scope": "openid profile email"},
code_challenge_method="S256",
)
```
and I access it via:
`oauth.example.authorize_redirect(request, redirect_uri)`
the attribute "authorize_redirect" is not resolvable via my editor (VSCode) and Pylance complains
`"authorize_redirect" is not a known member of "None".`
**Describe the solution you'd like**
Registered clients shall be resolvable on the client object, so that I can easily browse references of authlib registered clients using my editor.
There is some magic happening in the background, e.g. which OAuth client is finally chosen. It would be great to make this visible on each registered client.
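(A minimal sketch of a present-day workaround, assuming authlib's Starlette integration exposes `StarletteOAuth2App` and `OAuth.create_client`:)
```python
# sketch: fetch the registered client explicitly and annotate its type so
# editors can resolve authorize_redirect and friends
from authlib.integrations.starlette_client import OAuth, StarletteOAuth2App

oauth = OAuth()
oauth.register(name="example")  # plus the real client settings
example: StarletteOAuth2App = oauth.create_client("example")
# example.authorize_redirect(...) now resolves in the editor
```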
| 1medium
|
Title: Usage of built-in collection type aliases with starlette breaks wiring
Body: ```
from typing import List
from dependency_injector import containers
from dependency_injector.wiring import inject

# MyList = List[int]
MyList = list[int]

class Container(containers.DeclarativeContainer):
    pass

@inject
def main():
    pass

if __name__ == "__main__":
    container = Container()
    container.wire(modules=[__name__])
    main()
```
The code above fails with the following error:
```
Traceback (most recent call last):
File "C:\Users\user\di_experiment\main.py", line 21, in <module>
container.wire(modules=[__name__])
File "src/dependency_injector/containers.pyx", line 317, in dependency_injector.containers.DynamicContainer.wire
File "C:\Users\user\.env3.10\lib\site-packages\dependency_injector\wiring.py", line 347, in wire
if _inspect_filter.is_excluded(member):
File "C:\Users\user\.env3.10\lib\site-packages\dependency_injector\wiring.py", line 311, in is_excluded
elif self._is_starlette_request_cls(instance):
File "C:\Users\user\.env3.10\lib\site-packages\dependency_injector\wiring.py", line 324, in _is_starlette_request_cls
and issubclass(instance, starlette.requests.Request)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
Replacing list with typing.List solves the problem.
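(Not from the original report — a defensive sketch of the failing check: guarding `issubclass()` so generic aliases like `list[int]`, which are not classes, don't crash the wiring inspection. The method name comes from the traceback; the standalone form is illustrative:)
```python
# sketch: only call issubclass() on actual classes
import inspect

import starlette.requests

def _is_starlette_request_cls(instance) -> bool:
    return (
        inspect.isclass(instance)
        and issubclass(instance, starlette.requests.Request)
    )
```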
Python version 3.10.11
dependency-injector==4.37.0
starlette==0.16.0 | 2hard
|
Title: Word2vec: total loss suspiciously drops with worker count, probably thread-unsafe tallying
Body: #### Problem description
The word2vec implementation requires a workaround, as detailed in #2735, to correctly report the total loss per epoch. After doing that though, the next issue is that the total loss reported seems to vary depending on the number of workers.
#### Steps/code/corpus to reproduce
This is my code:
```python
import time

import pandas as pd
from numpy import linalg
from gensim.models import Word2Vec
from gensim.models.callbacks import CallbackAny2Vec

class MyLossCalculatorII(CallbackAny2Vec):
    def __init__(self):
        self.epoch = 1
        self.losses = []
        self.cumu_loss = 0.0
        self.previous_epoch_time = time.time()

    def on_epoch_end(self, model):
        loss = model.get_latest_training_loss()
        norms = [linalg.norm(v) for v in model.wv.vectors]
        now = time.time()
        epoch_seconds = now - self.previous_epoch_time
        self.previous_epoch_time = now
        self.cumu_loss += float(loss)
        print(f"Loss after epoch {self.epoch}: {loss} (cumulative loss so far: {self.cumu_loss}) " +
              f"-> epoch took {round(epoch_seconds, 2)} s - vector norms min/avg/max: " +
              f"{round(float(min(norms)), 2)}, {round(float(sum(norms)/len(norms)), 2)}, {round(float(max(norms)), 2)}")
        self.epoch += 1
        self.losses.append(float(loss))
        model.running_training_loss = 0.0

def train_and_check(my_sentences, my_epochs, my_workers=8, my_loss_calc_class=MyLossCalculatorII):
    print(f"Building vocab...")
    my_model: Word2Vec = Word2Vec(sg=1, compute_loss=True, workers=my_workers)
    my_model.build_vocab(my_sentences)
    print(f"Vocab done. Training model for {my_epochs} epochs, with {my_workers} workers...")
    loss_calc = my_loss_calc_class()
    trained_word_count, raw_word_count = my_model.train(my_sentences, total_examples=my_model.corpus_count, compute_loss=True,
                                                        epochs=my_epochs, callbacks=[loss_calc])
    loss = loss_calc.losses[-1]
    print(trained_word_count, raw_word_count, loss)
    loss_df = pd.DataFrame({"training loss": loss_calc.losses})
    loss_df.plot(color="blue")
    # print(f"Calculating accuracy...")
    # acc, details = my_model.wv.evaluate_word_analogies(questions_file, case_insensitive=True)
    # print(acc)
    return loss_calc, my_model
```
My data is an in-memory list of sentences of Finnish text, each sentence being a list of strings:
```python
In [18]: sentences[0]
Out[18]: ['hän', 'tietää', 'minkälainen', 'tilanne', 'tulla']
```
I'm running the following code:
```python
lc4, model4 = train_and_check(sentences, my_epochs=20, my_workers=4)
lc8, model8 = train_and_check(sentences, my_epochs=20, my_workers=8)
lc16, model16 = train_and_check(sentences, my_epochs=20, my_workers=16)
lc32, model32 = train_and_check(sentences, my_epochs=20, my_workers=32)
```
And the outputs are (last few lines + plot only):
# lc4
Loss after epoch 20: 40341580.0 (cumulative loss so far: 830458060.0) -> epoch took 58.15 s - vector norms min/avg/max: 0.02, 3.79, 12.27
589841037 669998240 40341580.0
Wall time: 20min 14s

# lc8
Loss after epoch 20: 25501282.0 (cumulative loss so far: 521681620.0) -> epoch took 36.6 s - vector norms min/avg/max: 0.02, 3.79, 12.24
589845960 669998240 25501282.0
Wall time: 12min 46s

# lc16
Loss after epoch 20: 14466763.0 (cumulative loss so far: 295212011.0) -> epoch took 26.25 s - vector norms min/avg/max: 0.02, 3.79, 12.55
589839763 669998240 14466763.0
Wall time: 9min 35s

# lc32
Loss after epoch 20: 7991086.5 (cumulative loss so far: 161415654.5) -> epoch took 27.5 s - vector norms min/avg/max: 0.02, 3.79, 12.33
589843184 669998240 7991086.5
Wall time: 9min 37s

What is going on here? The loss (whether total loss, final-epoch loss or average loss per epoch) varies, although the data is the same and the number of epochs is the same. I would imagine that "1 epoch" means "each data point is considered precisely once", in which case the number of workers should only affect how quickly the training is done and not the loss (the loss would still vary randomly a bit depending on which order the data points are considered etc, but that should be minor). Here though the loss seems to be roughly proportional to 1/n where n = number of workers.
I'm guessing based on the similar shape of the loss progressions and the very similar vector magnitudes that the training is actually fine in all four cases, so hopefully this is just another display bug similar to #2735.
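To make the thread-unsafe-tallying suspicion from the title concrete, here is a toy sketch (pure Python of my own, not gensim internals) of how an unsynchronized `+=` on a shared tally loses updates:
```python
import sys
import threading

sys.setswitchinterval(1e-6)  # force frequent thread switches to expose the race

total = 0.0

def worker(n):
    global total
    for _ in range(n):
        total += 1.0  # read-modify-write: not atomic across threads

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # expected 800000.0; typically prints much less (lost updates)
```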
#### Versions
The output of
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
is
Windows-10-10.0.18362-SP0
Python 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 22:01:29) [MSC v.1900 64 bit (AMD64)]
NumPy 1.17.3
SciPy 1.3.1
gensim 3.8.1
FAST_VERSION 1
| 2hard
|
Title: Hard-coded recursion limit on `default_calls`
Body: Is there a specific reason the value `5` was chosen here?
`if self.default_calls > 5` https://github.com/ijl/orjson/blob/master/src/encode.rs#L227
If it's not a hard limit, would it be possible to add it as an optional param to `dumps()` so the developer can choose what that limit would be?
I am not a Rust developer but I was able to put together a working proof of concept here:
https://github.com/brianjbuck/orjson/commit/eb9cfb8c7d84c6afd41693e1e310b64b5ea7c0a2
If you wanted to give feedback I can make edits and possibly a PR if you like.
| 1medium
|
Title: ENH: Task return values to parameters
Body: **Describe the solution you'd like**
The return values are passable via a queue (if multiprocessing) or via direct modification (if thread or main). These values need to be processed in the Scheduler. This probably requires a new argument type.
The argument type could be named ``ReturnArg`` and put into Parameters using the name of the task as the key, or similar.
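For illustration, a rough sketch of the proposed type (all names here, including ``ReturnArg`` and the ``return_values`` mapping, are hypothetical, not current API):
```python
class ReturnArg:
    """Placeholder argument resolved by the scheduler to another task's return value."""

    def __init__(self, task_name: str):
        self.task_name = task_name

    def resolve(self, return_values: dict):
        # return_values: {task_name: value}, collected from the queue
        # (multiprocessing) or set directly (thread/main)
        return return_values[self.task_name]
```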
| 1medium
|
Title: [BUG] Removing available option for custom field (of type select) results in undesired data change for many existing documents.
Body: ### Description
It looks like the value of a custom select field is stored by its option's index, not by its value. If we remove an option from the middle of the list, the values of this custom field are altered for all documents whose current value is an option with a higher index than the removed one.
### Steps to reproduce
1. Create custom field of type select.
2. Add 3 options - "a", "b", and "c".
3. Add document, assign custom field, pick option "b".
4. Edit custom field and remove option "a".
Now the document has value "c" instead of "b" for this custom field.
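A toy illustration of the suspected index-based storage (an assumption inferred from the behaviour above, not Paperless code):
```python
options = ["a", "b", "c"]
stored = options.index("b")   # the document stores 1, not "b"
options.remove("a")           # options become ["b", "c"]
print(options[stored])        # -> "c": the stored value silently shifted
```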
### Webserver logs
```bash
N/A
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.5
### Host OS
Ubuntu 22.04.5
### Installation method
Bare metal
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | 2hard
|
Title: Reevaluating Experimental Functions for Streamlit Development
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Can someone please elaborate on the motivations behind the current approach to handle experimental functions in Streamlit?
### Why?
By naming experimental functions as they would be called normally and marking them with warnings, it is still possible to maintain a clear separation between stable and experimental features. This could streamline development, reduce documentation confusion, and minimize conceptual overlap. This holds additional weight moving into AI-augmented coding, which appears to be here to stay. I imagine many packages may be developed from the start with these paradigms in mind, such that they are coded semantically to promote consistency.
Wouldn't naming experimental functions as they would be in their final form, but marking them with warnings indicating their experimental status (or a segregated import), help reduce confusion? This could make experimental functions easily identifiable and removable upon deprecation, preserving the core codebase's integrity while fostering innovation. Could this approach reduce cognitive load on developers, minimize documentation pollution, and improve overall development efficiency? While this may sound nitpicky, I see large efficiencies to be gained by tailoring this towards new information ingestion and auto coding technologies - and just from an update standpoint.
### How?
N/A
### Additional Context
_No response_ | 1medium
|
Title: dnspython3 packages, migration path?
Body: The current dnspython3 package is still [1.12.0 on pypi](https://pypi.python.org/pypi/dnspython3/1.12.0).
I'd like to see new releases there too: dummy PyPI packages which require the new dnspython, so users become aware of the obsolescence.
(For packaging rpm and deb this is no problem, as it's possible to obsolete and provide other packages.)
| 1medium
|
Title: Training on older graphics card with custom dataset.
Body: Hey,
I am using the Common-Voice dataset for German language and I am trying to train a synthesizer on my old GTX 660TI with 2G VRAM (Unfortunately :D).
As CUDA Compute Capability 3.0 is not supported by prebuilt PyTorch, I always build it from source. I've tried different versions of PyTorch (1.4, 1.5, 1.7 and 1.10), of CUDA (10.1 and 10.2) and of cuDNN (8.1 and 7.5). So finally I am able to train, even with a very low batch size of 6.
I have a few questions:
1: Do you think I can achieve decent results with this low batch size? I've trained to around 100k steps with a loss now of around 0.63, and it feels like it is converging now.
2: How many utterances should I use from every speaker? In my dataset there are some speakers with thousands of utterances; is it better to shrink that number? For example, is it better to use e.g. 200 utterances per speaker?
3: When training, after a few steps (completely random, sometimes after 10, sometimes after 1000) I get a NaN loss and the error:
`.../cuda/Loss.cu:111: block: [7,0,0], thread: [33,0,0] Assertion `input_val >= zero && input_val <= one` failed. `
Do you think it is caused by my old graphics card? I've read that it may be caused by non-normalized data, but I am not sure what I should normalize here. I didn't find any issues regarding that here.
Thanks in advance :) | 2hard
|
Title: [Question]: Why youtube stream does not start with writegear?
Body: ### Issue guidelines
- [X] I've read the [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines) and wholeheartedly agree.
### Issue Checklist
- [X] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest) and found nothing related to my problem.
- [X] I have gone through the [Bonus Examples](https://abhitronix.github.io/vidgear/latest/help/get_help/#bonus-examples) and [FAQs](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions) and found nothing related or helpful.
### Describe your Question
I have tried basically everything I found on the internet, but nothing works for me to start a YouTube livestream using OpenCV piping into FFmpeg.
I have an RTSP stream, which I would like to post-process via OpenCV and other Python libs, and then stream to YouTube.
I stumbled upon WriteGear's YouTube streaming solution:
```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import WriteGear
import cv2

# define video source
VIDEO_SOURCE = "/home/foo/foo.mp4"

# Open stream
stream = CamGear(source=VIDEO_SOURCE, logging=True).start()

# define required FFmpeg optimizing parameters for your writer
# [NOTE]: Added VIDEO_SOURCE as audio-source, since YouTube rejects audioless streams!
output_params = {
    "-i": VIDEO_SOURCE,
    "-acodec": "aac",
    "-ar": 44100,
    "-b:a": 712000,
    "-vcodec": "libx264",
    "-preset": "medium",
    "-b:v": "4500k",
    "-bufsize": "512k",
    "-pix_fmt": "yuv420p",
    "-f": "flv",
}

# [WARNING] Change your YouTube-Live Stream Key here:
YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx"

# Define writer with defined parameters and
writer = WriteGear(
    output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY),
    logging=True,
    **output_params
)

# loop over
while True:
    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to writer
    writer.write(frame)

# safely close video stream
stream.stop()

# safely close writer
writer.close()
```
and it runs properly, but then my YouTube live stream does not start.
I entered the correct stream API key (I've tested it in other ways and it started a livestream properly).
I have a conda environment, and all of my libs are up to date...
Please help :S
### VidGear Version
newest
### Python version
Python 3.9.12
### Operating System version
ubuntu 18.04
### Any other Relevant Information?
_No response_ | 1medium
|
Title: DecodeError use is inconsistent with api of AuthlibBaseError
Body: **Describe the bug**
It appears that everywhere that `authlib.jose.errors.DecodeError` is raised, it is raised as e.g. `raise DecodeError('message')`. Due to the API of `AuthlibBaseError` being `(error, description, uri)`, this overwrites the short-string error code of DecodeError.
**Error Stacks**
```python
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/authlib/jose/rfc7519/jwt.py", line 103, in decode
raise DecodeError('Invalid input segments length')
authlib.jose.errors.DecodeError: Invalid input segments length:
```
**To Reproduce**
```
from authlib.jose import jwt
jwt.decode("hello", "this is my key")
```
**Expected behavior**
The last line of the traceback to read `authlib.jose.errors.DecodeError: decode_error: Invalid input segments length`; or, if `JoseError` is caught as `exc`, `exc.error` to equal `'decode_error'`.
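Concretely, each raise site would pass the message as the description, e.g. (a sketch, assuming the `(error, description, uri)` keyword API of `AuthlibBaseError`):
```python
from authlib.jose.errors import DecodeError

raise DecodeError(description='Invalid input segments length')
```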
**Environment:**
- OS: Ubuntu 20.04
- Python Version: 3.8
- Authlib Version: 0.15.5
**Additional context**
I looked to see if this was fixed in 1.0.1, but it does not seem to have been addressed: https://github.com/lepture/authlib/blob/10cec2518fe0cc275897f12ae3683d4823f82928/authlib/jose/rfc7519/jwt.py#L100
| 1medium
|
Title: Slow query with ARRAY in SELECT and in WHERE .. ANY
Body: <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.25.0
* **PostgreSQL version**: PostgreSQL 12.9 (Debian 12.9-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: PostgreSQL is local, I can reproduce the issue
* **Python version**: 3.9
* **Platform**: Debian 10
* **Do you use pgbouncer?**: NO
* **Did you install asyncpg with pip?**: YES
* **If you built asyncpg locally, which version of Cython did you use?**: -
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: YES
<!-- Enter your issue details below this comment. -->
Two essentially identical queries take very different amounts of time to execute, depending on:
1. The presence of the ARRAY-field in the SELECT section
2. The presence of the ANY expression in the WHERE section and the parameter of List type
The code below reproduces the described problem; its output:
```
(venv) root@hq77-01-dev01:/opt/scapsule-back# python t.py
Time 1 is 0:00:00.001613
Time 2 is 0:00:02.937298
```
Thank you.
```
import asyncio
from datetime import datetime

import asyncpg
from app.config import DB_URL
import uvloop


async def main():
    conn = await asyncpg.connect(DB_URL)
    await conn.execute('''
        CREATE TABLE IF NOT EXISTS public.records
        (
            pk bigint NOT NULL,
            errors integer[] DEFAULT ARRAY[]::integer[],
            CONSTRAINT records_pkey PRIMARY KEY (pk)
        ) TABLESPACE pg_default;
    ''')
    await conn.execute('''
        INSERT INTO records (pk, errors) VALUES
            (1, ARRAY[1,2]), (2, ARRAY[1,2]), (3, ARRAY[1,2]), (4, ARRAY[1,2]),
            (5, ARRAY[1,2]), (6, ARRAY[1,2]), (7, ARRAY[1,2]), (8, ARRAY[1,2]),
            (9, ARRAY[1,2]), (10, ARRAY[1,2]), (11, ARRAY[1,2]), (12, ARRAY[1,2]);
    ''')
    a = datetime.now()
    await conn.execute('SELECT pk, errors FROM records WHERE pk=ANY(ARRAY[1,2,3,4]);')
    b = datetime.now()
    print(f'Time 1 is {b - a}')
    await conn.execute('SELECT pk, errors FROM records WHERE pk=ANY($1);', [1, 2, 3, 4])
    c = datetime.now()
    print(f'Time 2 is {c - b}')
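    # Reviewer note (an assumption worth testing, not verified here): the one-off
    # delay looks like asyncpg's type-introspection query for the array type being
    # JIT-compiled by PostgreSQL 12; connecting with
    #   asyncpg.connect(DB_URL, server_settings={'jit': 'off'})
    # is a commonly suggested workaround.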
    await conn.execute('DROP TABLE records;')
    await conn.close()


if __name__ == '__main__':
    uvloop.install()
    asyncio.run(main())
``` | 1medium
|
Title: [Bug]: Uncaught exception | <class 'ValueError'>; Qwen2_5_VLModel has no vLLM implementation and the Transformers implementation is not compatible with vLLM
Body: ### Your current environment
I only know it's hosted on RunPod serverless vLLM, latest as of today.
### 🐛 Describe the bug
I finetuned Qwen2.5 VL 7B (4-bit dynamic quantization) using Unsloth and saved the trained model as bf16. When I try to host the model, it gives me this error:
```python
worker exited with exit code 1
j6zswihe185nfq[warning][rank0]:[W324 18:13:29.115599288 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())\n
j6zswihe185nfq[info]engine.py :116 2025-03-24 18:13:28,839 Error initializing vLLM engine: Qwen2_5_VLModel has no vLLM implementation and the Transformers implementation is not compatible with vLLM.\n
j6zswihe185nfq[error]Uncaught exception | <class 'ValueError'>; Qwen2_5_VLModel has no vLLM implementation and the Transformers implementation is not compatible with vLLM.; <traceback object at 0x7f5beafb7900>;
j6zswihe185nfq[info]INFO 03-24 18:13:28 model_runner.py:1110] Starting to load model itztheking/FMAX-testrun-1.0...\n
j6zswihe185nfq[info]INFO 03-24 18:13:28 cuda.py:229] Using Flash Attention backend.\n
j6zswihe185nfq[info]
j6zswihe185nfq[info]INFO 03-24 18:13:27 config.py:549] This model supports multiple tasks: {'score', 'embed', 'classify', 'reward', 'generate'}. Defaulting to 'generate'.\n
j6zswihe185nfq[info]tokenizer_name_or_path: itztheking/FMAX-testrun-1.0, tokenizer_revision: None, trust_remote_code: False\n
j6zswihe185nfq[info]engine.py :27 2025-03-24 18:13:18,801 Engine args: AsyncEngineArgs(model='itztheking/FMAX-testrun-1.0', served_model_name=None, tokenizer='itztheking/FMAX-testrun-1.0', task='auto', skip_tokenizer_init=False, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path='', download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='bfloat16', kv_cache_dtype='auto', seed=0, max_model_len=10000, distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager='true', swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.95, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, revision=None, code_revision=None, rope_scaling=None, rope_theta=None, hf_overrides=None, tokenizer_revision=None, quantization=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, fully_sharded_loras=False, lora_extra_vocab_size=256, long_lora_scaling_factors=None, lora_dtype='auto', max_cpu_loras=None, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, ray_workers_use_nsight=False, num_gpu_blocks_override=None, num_lookahead_slots=0, model_loader_extra_config=None, ignore_patterns=None, preemption_mode=None, scheduler_delay_factor=0.0, enable_chunked_prefill=None, guided_decoding_backend='outlines', logits_processor_pattern=None, speculative_model=None, speculative_model_quantization=None, speculative_draft_tensor_parallel_size=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, qlora_adapter_name_or_path=None, disable_logprobs_during_spec_decoding=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, worker_cls='auto', kv_transfer_config=None, generation_config=None, override_generation_config=None, enable_sleep_mode=False, model_impl='auto', calculate_kv_scales=None, additional_config=None, disable_log_requests=False)\n
j6zswihe185nfq[info]INFO 03-24 18:13:17 __init__.py:207] Automatically detected platform cuda.\n
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 2hard
|
Title: Prefect Workers crash when server returns 500s
Body: ### Bug summary
We run a pretty big self-hosted installation of Prefect 2.x (20k flow runs/day, 200k tasks) and noticed that when the self-hosted API server becomes overloaded, it starts returning HTTP 500s.
That's OK by itself, but this makes Workers (we use Kubernetes) exit unexpectedly, then get restarted by K8s (issue 1).
Specifically we see the following problematic stack traces after which it exits:
* _submit_run -> _check_flow_run -> read_deployment
* cancel_run -> _get_configuration -> read_deployment
Restarting also seems OK by itself, however we noticed that if a different flow was marked PENDING, but no K8s Job was scheduled yet when the worker exited, it'll be stuck in PENDING forever (issue 2). Here's the relevant code:
https://github.com/PrefectHQ/prefect/blob/c4ac23189af1d27f8260452df302be8daee792b6/src/prefect/workers/base.py#L972-L977
Issue 2 seems relatively hard to fully resolve, as it's impossible to atomically mark flow as pending and submit a job to K8s. Maybe we can do something by storing the state locally, but that will not work if the pod is restarted on a different node.
Issue 1 looks more straightforward though. There's already a try/except around these places, but it only catches some exceptions, not all of them. Hopefully it'd be easy to resolve.
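For illustration, a minimal sketch of the kind of broader guard I mean (not actual Prefect code; `safe_read_deployment` and the call shape are illustrative):
```python
import logging

logger = logging.getLogger(__name__)

async def safe_read_deployment(client, deployment_id):
    """Let a transient server 500 fail one submission instead of the whole worker."""
    try:
        return await client.read_deployment(deployment_id)
    except Exception:
        logger.exception("Reading deployment %s failed; skipping this run", deployment_id)
        return None
```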
### Version info
```Text
Version: 2.20.14
API version: 0.8.4
Python version: 3.11.7
Git commit: fb919c67
Built: Mon, Nov 18, 2024 4:41 PM
OS/Arch: linux/x86_64
Profile: default
Server type: ephemeral
Server:
Database: sqlite
SQLite version: 3.40.1
```
### Additional context
_No response_ | 1medium
|
Title: __version__ in __all__ issue
Body: See KeepSafe/aiohttp#978
| 0easy
|
Title: Research report query returns no results (error)
Body: 
Already confirmed in the group chat; opening this issue specifically to collect points. 383606 | 0easy
|
Title: What is the proper way to handle dependencies at module load time?
Body: Hello all! Im having a fun time experimenting with this framework, but have found myself stumped on this design issue for a while. In short, if a dependency needs to be used at module load time (e.g. decorators), the init_resources/wiring breaks
Example
`containers.py`
```
class ApplicationContainer(containers.DeclarativeContainer):
    my_dependency = providers.Singleton(myClass)
    my_decorator = providers.Callable(my_dynamic_decorator_function, my_dependency)
```
`main.py`
```
application_container = dependency_containers.ApplicationContainer()
application_container.init_resources()
application_container.wire(modules=module_names, packages=packages)
```
`some_dangerous_file.py`
```
from . import containers
@containers.ApplicationContainer.my_decorator()
def my_function():
    print("...")
```
`dependency_injector.errors.Error: Can not copy initialized resource`
This is notably caused by the way/order Python loads modules. All decorators are applied before the AppContainer or main can properly wire/initialize the dependencies. This basically means that if a dependency is used outside a function, it will fail.
If there is any design or trick to get around this, I'd love to hear it. I don't like the idea of putting any container initialization into the dunder init file.
Here are some of my thoughts:
If there were a dependency/provider type that wraps a dependency that doesn't need to be available immediately, i.e. lazy initialization, the module would still load at boot and the dependency would be properly injected when it is needed (_after main executes_). A rough stdlib-only sketch of what I mean is below.
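(An untested sketch; `lazy_decorator` and the usage shown in the docstring are hypothetical names of mine, not dependency-injector API.)
```python
import functools

def lazy_decorator(get_decorator):
    """Defer decorator resolution from import time to first call.

    get_decorator is a zero-arg callable, e.g.
    lambda: containers.ApplicationContainer.my_decorator()  # hypothetical usage
    """
    def outer(func):
        decorated = None

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal decorated
            if decorated is None:
                decorated = get_decorator()(func)  # resolved after wiring/init
            return decorated(*args, **kwargs)

        return wrapper
    return outer
```
| 2hard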
|
Title: Have you considered to have docker support?
Body: It would be easier to deploy if using docker, especially in Linux. | 3misc
|
Title: Failing to compute gradients, RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm researching perturbations in neural networks. I have a video in which YOLOv11 correctly detects several objects. I'd like to add a gradient-based perturbation to each frame, so that it fails to detect objects in the modified frames.
My current approach is:
```python
def fgsm(gradients, tensor):
    perturbation = epsilon * gradients.sign()
    alt_img = tensor + perturbation
    alt_img = torch.clamp(alt_img, 0, 1)  # clipping pixel values
    alt_img_np = alt_img.squeeze().permute(1, 2, 0).detach().numpy()
    alt_img_np = (alt_img_np * 255).astype(np.uint8)
    return alt_img_np

def perturb(model, cap):
    out = cv2.VideoWriter('perturbed.mp4', 0x7634706d, 30.0, (640, 640))
    print("CUDA Available: ", torch.cuda.is_available())
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model.to(device)
    while cap.isOpened():
        ret, img = cap.read()
        resized = cv2.resize(img, (640, 640))
        rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).float()
        tensor = tensor.to(device)
        tensor = tensor.permute(2, 0, 1).unsqueeze(0)  # change tensor dimensions
        tensor /= 255.0  # normalize
        tensor.requires_grad = True
        output = model(tensor)
        target = output[0].boxes.cls.long()
        logits = output[0].boxes.data
        loss = -F.cross_entropy(logits, target)
        loss.backward()  # backpropagation
        gradients = tensor.grad
        if gradients is not None:
            alt_img = fgsm(gradients, tensor)
            cv2.imshow('Perturbed video', alt_img)
            out.write(alt_img)
```
Without `loss.requires_grad = True` I receive:
```
loss.backward() #Backpropagation
^^^^^^^^^^^^^^^
File "/var/data/python/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/var/data/python/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/var/data/python/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
If I enable `loss.requires_grad = True`, I am able to extract gradients from the loss, but they don't look like they are correctly applied (and don't lead to a decrease in detection/classification performance).
What am I missing?
Thanks.
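For reference, the textbook FGSM step I am comparing against (a sketch; it assumes `loss` is a scalar still attached to the input's autograd graph, whereas post-NMS `.boxes` outputs generally are not, which may be exactly what I'm missing):
```python
import torch

def fgsm_step(x: torch.Tensor, loss: torch.Tensor, epsilon: float = 8 / 255) -> torch.Tensor:
    # fails with "does not require grad" if the graph was detached upstream
    (grad,) = torch.autograd.grad(loss, x)
    x_adv = x + epsilon * grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```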
### Additional
_No response_ | 2hard
|
Title: Docker marzban xray crashloop
Body: Non stopped restarting:
```
marzban-1 | WARNING: Xray core 1.8.8 started
marzban-1 | WARNING: Restarting Xray core...
marzban-1 | WARNING: Xray core 1.8.8 started
marzban-1 | WARNING: Restarting Xray core...
```
Steps to reproduce the behavior:
1. git clone the repo
2. `mv .env.example .env`
3. `docker compose up`
4. See error
- OS: Debian 12
- Docker version: 27.0.2, build 912c1dd | 1medium
|
Title: Feature: Default GraphQL Enums from Python Enums.
Body: <!--- Provide a general summary of the changes you want in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [ ] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
Currently, to register a GraphQL enum from an existing enum that resides elsewhere in your code base,
you need to do the following:
```python
from somewhere import MyEnum
MyEnumGQL = strawberry.enum(MyEnum)
@strawberry.type
class Foo:
    bar: MyEnumGQL
```
This results in a [pyright error](https://discord.com/channels/689806334337482765/1252599999380983869/1253019059281199275) that can be fixed like this:
```python
from somewhere import MyEnum
if TYPE_CHECKING:
    MyEnumGQL = MyEnum
else:
    MyEnumGQL = strawberry.enum(MyEnum)
@strawberry.type
class Foo:
    bar: MyEnumGQL
```
It would be nice if we could just use a default GraphQL implementation of the enum on the fly
so that the `strawberry.enum` call would not be needed.
```python
from somewhere import MyEnum
@strawberry.type
class Foo:
    bar: MyEnum
``` | 1medium
|
Title: When rebasing, gitstatus raises "_GSField.updator() takes 2 positional arguments but 3 were given"
Body: ## xonfig
<details>
```
+------------------+-----------------+
| xonsh | 0.13.3 |
| Python | 3.10.7 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.31 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | None |
| on posix | True |
| on linux | True |
| distro | unknown |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | True |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file 1 | /root/.xonshrc |
+------------------+-----------------+
```
</details>
## Expected Behavior
No error
## Current Behavior
Xonsh prints the following error when generating the prompt:
```
prompt: error: on field 'gitstatus'
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
TypeError: _GSField.updator() takes 2 positional arguments but 3 were given
```
Full traceback below.
### Traceback (if applicable)
<details>
```
{ERROR:gitstatus} #
prompt: error: on field 'gitstatus'
xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 133, in _get_field_value
return self.fields.pick(field)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 381, in pick
value.update(self)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 343, in update
super().update(ctx)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 519, in update
self.value = self.separator.join(self._collect(ctx))
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 514, in _collect
yield format(ctx.pick(frag))
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 381, in pick
value.update(self)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 126, in update
self.updator(self, ctx)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 246, in get_gitstatus_info
info = ctx.pick_val(porcelain)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 391, in pick_val
val = self.pick(key)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 381, in pick
value.update(self)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 126, in update
self.updator(self, ctx)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 206, in porcelain
branch = ctx.pick(tag_or_hash) or ""
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 381, in pick
value.update(self)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 126, in update
self.updator(self, ctx)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 146, in tag_or_hash
fld.value = ctx.pick(tag) or ctx.pick(short_head)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/base.py", line 381, in pick
value.update(self)
File "/usr/local/lib/python3.10/site-packages/xonsh/prompt/gitstatus.py", line 126, in update
self.updator(self, ctx)
TypeError: _GSField.updator() takes 2 positional arguments but 3 were given
```
</details>
## Steps to Reproduce
1. Set `.xonshrc` to this:
```
$PROMPT = (
"{gitstatus} {prompt_end}{RESET} "
)
# you might also want to turn on:
# $XONSH_SHOW_TRACEBACK = True
```
2. Create a git repo with a few commits
3. `git rebase --interactive <old-commit>`
4. Set some commits to `edit` to pause the rebase at that commit
5. cd a xonsh shell to that git directory
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 2hard
|
Title: Task admin - autocomplete_fields
Body: Dropdown select is not easy to use when you have many users.
I'd like to recommend adding **"autocomplete_fields"** in **workflow/admins.py**
```
@admin.register(Task)
class TaskAdmin(admin.ModelAdmin):
    """List all of viewflow tasks."""

    icon = '<i class="material-icons">assignment_turned_in</i>'
    actions = None
    date_hierarchy = "created"
    autocomplete_fields = ["owner"]
    ...
``` | 0easy
|
Title: [Bug] Cannot login on VPS using Linux Terminal
Body: **Describe the bug**
After installing Openbb Terminal in a remote VPS running Linux, I cannot login.
**To Reproduce**
Installed a fresh copy of OpenBB and it worked properly.
Started the terminal and it worked properly.
Went to accounts/login; at this point I believe the terminal is attempting to open a browser window, but that is not possible on the Linux terminal VPS.
How am I to log in using credentials or the personal access token I have previously generated?
**Screenshots**
https://i.imgur.com/zHYDGt8.png

**Additional context**
Please provide clear and easy-to-understand steps on how to log in, as I am new to both Linux and OpenBB.
**Later Edit:**
After running /accounts/login multiple times, this error appeared three times in about 10 tries. I do not know if they are connected.
 | 1medium
|
Title: Document minimum tifffile requirements
Body: ### Description:
I'm looking to update the python-scikit-image package in Fedora and getting the following test failures:
```
_________________________________ test_shapes __________________________________
imgs = <skimage.io.collection.MultiImage object at 0x7fb1f0435790>
def test_shapes(imgs):
imgs = imgs[-1]
assert imgs[0][0].shape == imgs[0][1].shape
> assert imgs[0][0].shape == (10, 10, 3)
E assert (25, 14, 3) == (10, 10, 3)
E At index 0 diff: 25 != 10
E Full diff:
E - (10, 10, 3)
E + (25, 14, 3)
skimage/io/tests/test_multi_image.py:31: AssertionError
---------------------------- Captured stderr setup -----------------------------
Downloading file 'data/multipage_rgb.tif' from 'https://github.com/scikit-image/scikit-image/raw/v0.21.0/skimage/data/multipage_rgb.tif' to '/home/orion/fedora/python-scikit-image/scikit-image-0.21.0/scikit-image/0.21.0'.
_________________________________ test_slicing _________________________________
imgs = [<skimage.io.collection.MultiImage object at 0x7fb20002e190>, <skimage.io.collection.MultiImage object at 0x7fb1f045b8...kimage.io.collection.MultiImage object at 0x7fb1f034ba10>, <skimage.io.collection.MultiImage object at 0x7fb1f03f1c10>]
def test_slicing(imgs):
img = imgs[-1]
assert type(img[:]) is MultiImage
assert len(img[0][:]) + len(img[1][:]) == 26, len(img[:])
assert len(img[0][:1]) == 1
> assert len(img[1][1:]) == 23
E assert 1 == 23
E + where 1 = len(array([[[[6.11859237e-01, 6.13683830e-01, 2.61764427e-01],\n [8.64150092e-01, 1.86032699e-01, 7.67905107e-01],\n [4.47323589e-01, 8.81760559e-01, 7.71544235e-01],\n [3.01772365e-01, 2.93226077e-01, 1.62019224e-01],\n [6.60980691e-01, 9.67724343e-01, 3.53490597e-01],\n [4.16066406e-01, 3.12212669e-01, 5.95506126e-01],\n [1.78595715e-01, 1.74973403e-01, 8.04427253e-01],\n [4.51611405e-01, 3.13848890e-01, 5.25524362e-01],\n [3.05492419e-01, 1.11827285e-02, 9.39158752e-01],\n [2.02804065e-01, 6.23108406e-01, 9.59814407e-01]],\n\n [[4.54140258e-01, 5.17023922e-02, 1.72994131e-01],\n [2.45123764e-01, 5.86041911e-01, 1.44030170e-01],\n [9.49853360e-01, 1.65398332e-01, 9.20412825e-01],\n [9.71664246e-01, 9.13602646e-01, 4.60977608e-01],\n [6.75569279e-01, 7.94274742e-01, 1.72290952e-01],\n [9.94832905e-01, 4.38295464e-01, 6.12733696e-01],\n [1.29133003e-01, 1.69541113e-01, 1.40536150e-02],\n [8.20638267e-01, 4.79702746e-01, 8.87252462e-01],\n [9.30465060e-01, 9.43440274e-02, 1.45653304e-01],\n [4.00729428e-01, 7.57031255e-01, 9.87977575e-01]],\n\n [[4.88...304635e-01]],\n\n [[8.59327355e-01, 4.02425053e-01, 4.76087125e-01],\n [7.08815254e-01, 1.59897390e-01, 6.23051449e-01],\n [9.12360216e-02, 5.42339910e-01, 1.47251478e-01],\n [4.06343227e-01, 1.79473267e-01, 3.69216690e-02],\n [8.84683616e-01, 5.21541897e-01, 9.17095911e-01],\n [1.12484019e-01, 4.47171746e-01, 3.13240591e-01],\n [4.48454609e-01, 4.37365110e-01, 6.00244286e-01],\n [1.49488731e-01, 7.11540362e-01, 8.98939775e-01],\n [6.06455452e-01, 4.72772906e-01, 3.53782699e-01],\n [1.01016308e-01, 6.47838405e-01, 5.74062674e-01]],\n\n [[6.02083416e-01, 5.58439915e-01, 6.48526201e-01],\n [7.07022009e-01, 2.35024425e-01, 1.85182152e-02],\n [6.20179125e-01, 4.63161602e-01, 2.40833443e-01],\n [8.04487714e-03, 4.58802330e-01, 5.27037573e-02],\n [1.33684803e-01, 3.13904377e-01, 5.28891432e-02],\n [4.61955771e-01, 2.37523200e-01, 1.36905619e-01],\n [3.20624017e-02, 2.73770016e-01, 2.19316844e-01],\n [9.68227208e-01, 1.00798813e-01, 8.86375033e-01],\n [5.09561718e-01, 1.48944850e-01, 6.03461718e-01],\n [8.85811169e-02, 6.29287096e-01, 3.73361435e-01]]]]))
skimage/io/tests/test_multi_image.py:43: AssertionError
```
### Way to reproduce:
```
+ xvfb-run pytest -v --deselect=skimage/data/tests/test_data.py::test_download_all_with_pooch --deselect=skimage/data/tests/test_data.py::test_eagle --deselect=skimage/data/tests/test_data.py::test_brain_3d --deselect=skimage/data/tests/test_data.py::test_cells_3d --deselect=skimage/data/tests/test_data.py::test_kidney_3d_multichannel --deselect=skimage/data/tests/test_data.py::test_lily_multichannel --deselect=skimage/data/tests/test_data.py::test_skin --deselect=skimage/data/tests/test_data.py::test_vortex --deselect=skimage/measure/tests/test_blur_effect.py::test_blur_effect_3d --deselect=skimage/registration/tests/test_masked_phase_cross_correlation.py::test_masked_registration_3d_contiguous_mask skimage
============================= test session starts ==============================
platform linux -- Python 3.11.3, pytest-7.3.1, pluggy-1.0.0 -- /usr/bin/python3
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/orion/BUILDROOT/python-scikit-image-0.21.0-1.fc39.x86_64/usr/lib64/python3.11/site-packages/.hypothesis/examples')
rootdir: /home/orion/BUILDROOT/python-scikit-image-0.21.0-1.fc39.x86_64/usr/lib64/python3.11/site-packages
plugins: xprocess-0.22.2, remotedata-0.3.3, asyncio-0.21.0, libtmux-0.21.0, forked-1.6.0, mock-3.10.0, cov-4.0.0, flake8-1.1.1, timeout-2.1.0, rerunfailures-11.0, pyfakefs-5.2.2, xdist-3.3.1, doctestplus-0.12.0, openfiles-0.5.0, anyio-3.5.0, mpi-0.6, hypothesis-6.62.1, localserver-0.7.0, arraydiff-0.5.0
asyncio: mode=Mode.STRICT
collecting ... collected 8215 items / 9 deselected / 2 skipped / 8206 selected
```
### Version information:
```Shell
3.11.3 (main, May 24 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)]
Linux-6.4.0-0.rc2.23.fc39.x86_64-x86_64-with-glibc2.37.9000
numpy version: 1.24.3
```
```
| 1medium
|
Title: execution options param in ORM session hardcoded to immutable dict
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10181
```py
from sqlalchemy.orm import Session
session: Session
session.connection(
execution_options={"isolation_level": "REPEATABLE READ"}
)
```
```
$ mypy test3.py
test3.py:6: error: Argument "execution_options" to "connection" of "Session" has incompatible type "dict[str, str]"; expected "immutabledict[str, Any] | None" [arg-type]
Found 1 error in 1 file (checked 1 source file)
``` | 1medium
|
Title: Attribute type change in Mapped[] not detected
Body: **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
If I change the type of an attribute in my model, only defined by the `Mapped` annotation, then I do not get any changes in the resulting upgrade script:
```python
class File(Base):
    # change type from str to int
    # size: Mapped[Optional[str]]
    size: Mapped[Optional[int]]
```
```bash
alembic revision -m "Change not detected" --autogenerate
```
Unexpected (empty) result:
```python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###
```
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
I expect an alter_column statement along the lines of:
```python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.alter_column('file', 'size',
        existing_type=sa.NUMERIC(),
        nullable=True)
    # ### end Alembic commands ###
```
**To Reproduce**
I am very aware that this may be a problem on my side, and I think it is difficult to provide a complete example setup.
**Versions.**
- OS: Windows 11
- Python: 3.10.11
- Alembic: 1.11.1
- SQLAlchemy: 2.0.15
- Database: postgis/postgis:11-2.5-alpine
- DBAPI: psycopg2-binary 2.9.6
**Additional context**
<!-- Add any other context about the problem here. -->
Making other changes to my model class, for instance removing the `Optional` keyword or adding/removing/renaming an attribute, results in an expected change (except still no type change).
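Side note from my debugging (an assumption based on Alembic 1.11 defaults, worth verifying): type comparison is opt-in at this version, so enabling it in `env.py` might already fix this:
```python
# env.py (sketch; only the compare_type flag is the point here)
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    compare_type=True,  # type changes are ignored when this is off
)
```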
**Have a nice day!**
| 1medium
|
Title: AttributeError: 'Order' object has no attribute 'algoParamsCount'
Body: Hello,
I get `AttributeError: 'Order' object has no attribute 'algoParamsCount'` after I submit an adaptive algo order, which is constructed (following the instructions at https://interactivebrokers.github.io/tws-api/ibalgos.html#gsc.tab=0) as:
```
def MarketOrder(**kwargs):
    order = Order(orderType='MKT', **kwargs)
    return order

def AdaptiveMarketOrder(priority='Normal', **kwargs):
    priority = priority.capitalize()
    valid_priorities = ['Critical', 'Urgent', 'Normal', 'Patient']
    assert priority in valid_priorities, "Invalid priority. Should be in %s." % (valid_priorities)
    order = MarketOrder(**kwargs)
    order.algoStrategy = "Adaptive"
    order.algoParams = []
    adaptivePriority = ib_insync.objects.TagValue('adaptivePriority', priority)
    order.algoParams.append(adaptivePriority)
    return order

class Data:
    ...
    def order(self, contract: ib_insync.Contract, amount: int, style: ib_insync.order.Order, transmit=False):
        order = style
        order.account = self.data.account
        order.action = ('BUY' if amount >= 0 else 'SELL')
        order.totalQuantity = abs(int(amount))
        order.transmit = transmit
        trade = self.broker.placeOrder(contract, order)
        return trade.order
    ...

data.order(gld, 100, style=AdaptiveMarketOrder(priority='Normal'), transmit=True)
```
The full traceback:
```
2017-11-15 02:38:41,724 ib_insync.client ERROR Decode failed
Traceback (most recent call last):
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 223, in _onSocketHasData
self._decode(fields)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 378, in _decode
self.decoder.interpret(fields)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ibapi-9.73.2-py3.6.egg/ibapi/decoder.py", line 1154, in interpret
handleInfo.processMeth(self, iter(fields))
File "/home/dias/anaconda3/lib/python3.6/site-packages/ibapi-9.73.2-py3.6.egg/ibapi/decoder.py", line 413, in processOpenOrder
self.wrapper.openOrder(order.orderId, contract, order, orderState)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/wrapper.py", line 192, in openOrder
order = Order(**order.__dict__)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/objects.py", line 53, in __init__
setattr(self, k, v)
AttributeError: 'Order' object has no attribute 'algoParamsCount'
2017-11-15 02:38:41,725 ib_insync.wrapper INFO orderStatus: Trade(contract=Contract(conId=51529211, symbol='GLD', secType='STK', exchange='SMART', primaryExchange='ARCA', currency='USD', localSymbol='GLD', tradingClass='GLD'), order=Order(orderId=149, action='SELL', totalQuantity=100, orderType='MKT', account='U1744631', algoStrategy='Adaptive', algoParams=[TagValue(tag='adaptivePriority', value='Normal')]), orderStatus=OrderStatus(status='Submitted', remaining=100.0, permId=1483409048, clientId=34), fills=[], log=[TradeLogEntry(time=datetime.datetime(2017, 11, 14, 20, 38, 41, 392626, tzinfo=datetime.timezone.utc), status='PendingSubmit', message=''), TradeLogEntry(time=datetime.datetime(2017, 11, 14, 20, 38, 41, 723600, tzinfo=datetime.timezone.utc), status='Submitted', message='')])
2017-11-15 02:38:57,046 ib_insync.wrapper INFO position: Position(account='U1744631', contract=Contract(conId=51529211, symbol='GLD', secType='STK', exchange='ARCA', currency='USD', localSymbol='GLD', tradingClass='GLD'), position=524.0, avgCost=121.82631555)
2017-11-15 02:38:57,047 ib_insync.wrapper INFO execDetails: Fill(contract=Contract(conId=51529211, symbol='GLD', secType='STK', exchange='SMART', primaryExchange='ARCA', currency='USD', localSymbol='GLD', tradingClass='GLD'), execution=Execution(execId='00013911.5a0ad7f2.01.01', time='20171115 02:38:56', acctNumber='U1744631', exchange='ISLAND', side='SLD', shares=100.0, price=121.71, permId=1483409048, clientId=34, orderId=149, cumQty=100.0, avgPrice=121.71), commissionReport=CommissionReport(), time=datetime.datetime(2017, 11, 14, 20, 38, 57, 47655, tzinfo=datetime.timezone.utc))
2017-11-15 02:38:57,048 ib_insync.client ERROR Decode failed
Traceback (most recent call last):
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 223, in _onSocketHasData
self._decode(fields)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 378, in _decode
self.decoder.interpret(fields)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ibapi-9.73.2-py3.6.egg/ibapi/decoder.py", line 1154, in interpret
handleInfo.processMeth(self, iter(fields))
File "/home/dias/anaconda3/lib/python3.6/site-packages/ibapi-9.73.2-py3.6.egg/ibapi/decoder.py", line 413, in processOpenOrder
self.wrapper.openOrder(order.orderId, contract, order, orderState)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/wrapper.py", line 192, in openOrder
order = Order(**order.__dict__)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/objects.py", line 53, in __init__
setattr(self, k, v)
AttributeError: 'Order' object has no attribute 'algoParamsCount'
2017-11-15 02:38:57,049 ib_insync.wrapper INFO orderStatus: Trade(contract=Contract(conId=51529211, symbol='GLD', secType='STK', exchange='SMART', primaryExchange='ARCA', currency='USD', localSymbol='GLD', tradingClass='GLD'), order=Order(orderId=149, action='SELL', totalQuantity=100, orderType='MKT', account='U1744631', algoStrategy='Adaptive', algoParams=[TagValue(tag='adaptivePriority', value='Normal')]), orderStatus=OrderStatus(status='Filled', filled=100.0, avgFillPrice=121.71, permId=1483409048, lastFillPrice=121.71, clientId=34), fills=[Fill(contract=Contract(conId=51529211, symbol='GLD', secType='STK', exchange='SMART', primaryExchange='ARCA', currency='USD', localSymbol='GLD', tradingClass='GLD'), execution=Execution(execId='00013911.5a0ad7f2.01.01', time='20171115 02:38:56', acctNumber='U1744631', exchange='ISLAND', side='SLD', shares=100.0, price=121.71, permId=1483409048, clientId=34, orderId=149, cumQty=100.0, avgPrice=121.71), commissionReport=CommissionReport(), time=datetime.datetime(2017, 11, 14, 20, 38, 57, 47655, tzinfo=datetime.timezone.utc))], log=[TradeLogEntry(time=datetime.datetime(2017, 11, 14, 20, 38, 41, 392626, tzinfo=datetime.timezone.utc), status='PendingSubmit', message=''), TradeLogEntry(time=datetime.datetime(2017, 11, 14, 20, 38, 41, 723600, tzinfo=datetime.timezone.utc), status='Submitted', message=''), TradeLogEntry(time=datetime.datetime(2017, 11, 14, 20, 38, 57, 47655, tzinfo=datetime.timezone.utc), status='Submitted', message='Fill 100.0@121.71'), TradeLogEntry(time=datetime.datetime(2017, 11, 14, 20, 38, 57, 47655, tzinfo=datetime.timezone.utc), status='Filled', message='')])
2017-11-15 02:38:57,051 ib_insync.client ERROR Decode failed
Traceback (most recent call last):
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 223, in _onSocketHasData
self._decode(fields)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 378, in _decode
self.decoder.interpret(fields)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ibapi-9.73.2-py3.6.egg/ibapi/decoder.py", line 1154, in interpret
handleInfo.processMeth(self, iter(fields))
File "/home/dias/anaconda3/lib/python3.6/site-packages/ibapi-9.73.2-py3.6.egg/ibapi/decoder.py", line 413, in processOpenOrder
self.wrapper.openOrder(order.orderId, contract, order, orderState)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/wrapper.py", line 192, in openOrder
order = Order(**order.__dict__)
File "/home/dias/anaconda3/lib/python3.6/site-packages/ib_insync/objects.py", line 53, in __init__
setattr(self, k, v)
AttributeError: 'Order' object has no attribute 'algoParamsCount'
```
The order actually gets filled, but this error is disturbing.
Could you help? | 1medium
|
Title: [Tech Debt] Simplify kernel size logic
Body: RIght now all blurs will throw an error if input kernel range has even sides.
We can simplify it by sampling from any interval, and if picking one that close to samped, larger and valid | 1medium
|
Title: UserWarning: NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation
Body: ### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Upgraded to a 5090 and got this error. Following the link and installing via the pip command didn't solve the issue; it still uses the older CUDA and gives this error.
### Steps to reproduce the problem
Run webui.bat
### What should have happened?
Load with no errors and allow me to generate images.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2025-02-02-17-05.json](https://github.com/user-attachments/files/18633156/sysinfo-2025-02-02-17-05.json)
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments:
C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\cuda\__init__.py:215: UserWarning:
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
Loading weights [39590fcd2d] from C:\Users\andy9\Downloads\sd.webui\webui\models\Stable-diffusion\symPonyWorld_v10.safetensors
Running on local URL: http://127.0.0.1:7861
Creating model from config: C:\Users\andy9\Downloads\sd.webui\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
To create a public link, set `share=True` in `launch()`.
Startup time: 8.4s (prepare environment: 2.1s, import torch: 3.1s, import gradio: 0.8s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 0.4s, create ui: 0.3s, gradio launch: 0.5s).
C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\huggingface_hub\file_download.py:795: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "threading.py", line 973, in _bootstrap
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 845, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 440, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 223, in <lambda>
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
original(module, state_dict, strict=strict)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
load(self, state_dict)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
[Previous line repeated 1 more time]
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
module._load_from_state_dict(
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 225, in <lambda>
linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.
Loading weights [39590fcd2d] from C:\Users\andy9\Downloads\sd.webui\webui\models\Stable-diffusion\symPonyWorld_v10.safetensors
Exception in thread Thread-18 (load_model):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\initialize.py", line 154, in load_model
devices.first_time_calculation()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\devices.py", line 281, in first_time_calculation
conv2d(x)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\andy9\Downloads\sd.webui\webui\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
Creating model from config: C:\Users\andy9\Downloads\sd.webui\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "threading.py", line 973, in _bootstrap
File "threading.py", line 1016, in _bootstrap_inner
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\ui.py", line 1165, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 845, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 440, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 223, in <lambda>
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
original(module, state_dict, strict=strict)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
load(self, state_dict)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
[Previous line repeated 1 more time]
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
module._load_from_state_dict(
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 225, in <lambda>
linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Stable diffusion model failed to load
Loading weights [39590fcd2d] from C:\Users\andy9\Downloads\sd.webui\webui\models\Stable-diffusion\symPonyWorld_v10.safetensors
Creating model from config: C:\Users\andy9\Downloads\sd.webui\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "threading.py", line 973, in _bootstrap
File "threading.py", line 1016, in _bootstrap_inner
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\ui.py", line 1165, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 845, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_models.py", line 440, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 223, in <lambda>
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
original(module, state_dict, strict=strict)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
load(self, state_dict)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
load(child, child_state_dict, child_prefix)
[Previous line repeated 1 more time]
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
module._load_from_state_dict(
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 225, in <lambda>
linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
File "C:\Users\andy9\Downloads\sd.webui\webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
File "C:\Users\andy9\Downloads\sd.webui\system\python\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Stable diffusion model failed to load
```
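The repeated `no kernel image is available for execution on the device` errors typically mean this PyTorch build ships no compiled kernels for the GPU's compute capability (common with very old or very new cards). A minimal diagnostic sketch, assuming only a working PyTorch install, that compares the GPU architecture against the architectures the build was compiled for:

```python
import torch

# Compute capability of the first GPU, e.g. (8, 6) -> "sm_86".
major, minor = torch.cuda.get_device_capability(0)
device_arch = f"sm_{major}{minor}"

# Architectures this PyTorch build ships kernels for.
built_archs = torch.cuda.get_arch_list()

print(f"GPU architecture: {device_arch}")
print(f"PyTorch built for: {built_archs}")

if device_arch not in built_archs:
    print("Mismatch: this PyTorch build has no kernels for this GPU; "
          "a build targeting this architecture is likely needed.")
```

If the two do not match, installing a PyTorch/CUDA build that targets the GPU (rather than changing webui code) is the usual remedy.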
### Additional information
_No response_ | 2hard
|
Title: BUG: failed to install doc requirements on Apple M1
Body: ### Describe the bug
The doc requirements include `lightgbm>=3.0.0`, but installing lightgbm on an Apple M1 machine can fail with a compilation error.
### To Reproduce
To help us reproduce this bug, please provide the information below:
1. Your Python version
2. The version of Xorbits you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
```python
Traceback (most recent call last):
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 95, in silent_call
subprocess.check_call(cmd, stderr=log, stdout=log)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['make', '_lightgbm', '-I/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/build_cpp', '-j4']' returned non-zero exit status 2.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 334, in <module>
setup(name='lightgbm',
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 360, in run
self.run_command("install")
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 248, in run
compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi,
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 199, in compile_cpp
silent_call(["make", "_lightgbm", f"-I{build_dir}", "-j4"], raise_error=True,
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 99, in silent_call
raise Exception("\n".join((error_msg, LOG_NOTICE)))
Exception: An error has occurred while building lightgbm library file
The full version of error log was saved into /Users/jon/LightGBM_compilation.log
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for lightgbm
error: subprocess-exited-with-error
× Running setup.py install for lightgbm did not run successfully.
│ exit code: 1
╰─> [39 lines of output]
INFO:root:running install
/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
INFO:LightGBM:Starting to compile the library.
INFO:LightGBM:Starting to compile with CMake.
Traceback (most recent call last):
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 95, in silent_call
subprocess.check_call(cmd, stderr=log, stdout=log)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['make', '_lightgbm', '-I/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/build_cpp', '-j4']' returned non-zero exit status 2.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 334, in <module>
setup(name='lightgbm',
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/Users/jon/Documents/miniconda3/envs/dev/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 248, in run
compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi,
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 199, in compile_cpp
silent_call(["make", "_lightgbm", f"-I{build_dir}", "-j4"], raise_error=True,
File "/private/var/folders/2p/lb2j30s116zgq01tny9srvbw0000gn/T/pip-install-6kmldwi9/lightgbm_ea8fe0e250774541b5bfaba97a9cc4c3/setup.py", line 99, in silent_call
raise Exception("\n".join((error_msg, LOG_NOTICE)))
Exception: An error has occurred while building lightgbm library file
The full version of error log was saved into /Users/jon/LightGBM_compilation.log
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lightgbm
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
5. Minimized code to reproduce the error.
```bash
pip install "lightgbm>=3.0.0"
```
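A common workaround on Apple Silicon, assuming Homebrew is available, is to install the native build dependencies before retrying — a sketch, not a guaranteed fix for every setup:

```bash
# CMake drives lightgbm's build; libomp provides the OpenMP runtime it links against.
brew install cmake libomp

# Retry the install; newer lightgbm releases may also ship prebuilt macOS arm64 wheels.
pip install "lightgbm>=3.0.0"
```

Installing from conda-forge (`conda install -c conda-forge lightgbm`) is another way to sidestep the local compilation.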
### Expected behavior
The doc requirements, including lightgbm, should install successfully on Apple M1.
### Additional context
[LightGBM_compilation.log](https://github.com/xprobe-inc/xorbits/files/11541385/LightGBM_compilation.log)
| 2hard
|
Title: Our tests fail to test workflows
Body: ...except the Qt6 tests.
#6181 had a compatibility issue that we almost did not catch because only the Qt6 tests were failing.
The cause is that `tox.ini` includes the following for `testenv`:
```
# Skip loading of example workflows as that inflates coverage
SKIP_EXAMPLE_WORKFLOWS=True
```
These workflows should be skipped only for the run that is actually used for coverage. So we should do the opposite: run example workflows everywhere except in the one specific (coverage) run.
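A minimal sketch of the inverted configuration — the dedicated `coverage` environment name is hypothetical:
```
[testenv]
# No SKIP_EXAMPLE_WORKFLOWS here: regular test envs load the example workflows.

# Only the env actually used for coverage skips them.
[testenv:coverage]
setenv =
    SKIP_EXAMPLE_WORKFLOWS=True
```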
| 1medium
|
Title: [FEATURE] Add tests in SAR for remove seen feature
Body: ### Description
<!--- Describe your expected feature in detail -->
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
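A sketch of what such a test could look like; the `SAR` import and the `remove_seen` keyword follow the public API in this repo, but the column names and fixture data are illustrative:
```python
import pandas as pd
from recommenders.models.sar import SAR

def test_recommend_k_items_remove_seen():
    # Tiny interaction log: each user has already seen two items.
    train = pd.DataFrame({
        "userID": [1, 1, 2, 2],
        "itemID": [1, 2, 2, 3],
        "rating": [5.0, 4.0, 4.0, 5.0],
        "timestamp": [1, 2, 3, 4],
    })

    model = SAR(col_user="userID", col_item="itemID",
                col_rating="rating", col_timestamp="timestamp")
    model.fit(train)

    # With remove_seen=True, items from the training set must not come back.
    top_k = model.recommend_k_items(train, top_k=1, remove_seen=True)
    seen = set(zip(train["userID"], train["itemID"]))
    recommended = set(zip(top_k["userID"], top_k["itemID"]))
    assert seen.isdisjoint(recommended)
```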
| 1medium
|