text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: Can't set CONN_MAX_AGE to None
Body: When setting DATABASE_URL to something like `postgres://postgres@db:5432/postgres?conn_max_age=None` the end result is:
``` python
{
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'HOST': 'db',
'PORT': 5432,
'USER': 'postgres',
'PASSWORD': None,
'NAME': 'postgres',
'CONN_MAX_AGE': 'None',
'OPTIONS': {},
}
```
Where CONN_MAX_AGE ends up with an invalid value: Django expects it to be either a float or None, but django-environ returns the str 'None'.
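For reference, here is a minimal workaround sketch, assuming the dict returned by django-environ is post-processed before being put into DATABASES (`env.db()` is the usual entry point; the coercion logic is my own, not part of the library):
```python
import environ

env = environ.Env()
db_config = env.db()  # parses DATABASE_URL into the dict shown above

# Hypothetical workaround: coerce the string back to what Django expects.
raw = db_config.get("CONN_MAX_AGE")
if raw == "None":
    db_config["CONN_MAX_AGE"] = None
elif isinstance(raw, str):
    db_config["CONN_MAX_AGE"] = float(raw)
```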
| 1medium
|
Title: Error starting container - failed to write to cpu.cfs_quota_us
Body: <!-- This form is for bug reports and feature requests ONLY!
If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind bug
**What happened**:
Create a Pod with a single container with CPU requests and limits of `51m`. When the container is run, it generates the following error:
```
'invalid header field value "oci runtime error: container_linux.go:247:
starting container process caused \"process_linux.go:327: setting cgroup
config for procHooks process caused \\\"failed to write 5100 to cpu.cfs_quota_us:
write /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-pod5383ae7e_22e2_11e8_9323_005056888407.slice/docker-154ab5a8cd23771c42db005a12cef3791ca672b8a1a9181e2939e6ea0e205d2b.scope/cpu.cfs_quota_us:
invalid argument\\\"\"\n"'
```
**What you expected to happen**:
The container runs properly.
**How to reproduce it (as minimally and precisely as possible)**:
docker and kubelet must be configured to use the systemd cgroup driver.
Create a pod with a cpu limit that contains a non-zero millicore digit (e.g. 51m).
**Anything else we need to know?**:
The bug lies in [runc](https://github.com/opencontainers/runc). `systemctl set-property` rounds the cpu limit on the parent cgroup to the nearest percent (e.g. 5%). The child cgroup is then created by docker with a cpu.cfs_quota_us greater than the parent cgroup's, which causes an error.
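To make the rounding concrete, a back-of-the-envelope sketch (assuming the kernel's default 100 ms CFS period):
```python
cfs_period_us = 100000  # default CFS period in microseconds
millicores = 51

# What docker writes to the child cgroup: 51m of one CPU.
child_quota = millicores * cfs_period_us // 1000                # 5100

# systemd rounds the parent cgroup's CPUQuota to the nearest percent (5%).
parent_quota = round(millicores / 10) * (cfs_period_us // 100)  # 5000

assert child_quota > parent_quota  # child exceeds parent -> EINVAL on write
```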
The bug seems to have been fixed in [this commit](https://github.com/opencontainers/runc/commit/bca53e7b49208db35fcda418a45655fd2f8ea5bb#diff-e99e0d455975094fba0f58dcd6a84522) and included in [runc 1.0.0-rc5](https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc5) which was released Feb. 27, 2018.
**Environment**:
- Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
- Kernel (e.g. `uname -a`):
3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others:
the systemd cgroup driver must be used for kubelet.
@kubernetes/sig-node-bugs | 2hard
|
Title: Add support for combined unique constraints
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma supports declaring multiple fields as a combined unique constraint, e.g.
```prisma
model User {
firstName String
lastName String
@@id([firstName, lastName])
}
```
| 1medium
|
Title: Add Vertex AI Integration
Body: **Is your feature request related to a problem? Please describe.**
No Vertex AI support
**Describe the solution you'd like**
Add a new LLM class that integrates Vertex AI via LangChain; see the LangChain [docs](https://python.langchain.com/v0.2/docs/integrations/chat/google_vertex_ai_palm/)
**Describe alternatives you've considered**
Directly pass the model_instance in the graph_config, like [here](https://github.com/VinciGit00/Scrapegraph-ai/blob/main/examples/azure/smart_scraper_azure.py)
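For illustration, a rough sketch of that alternative, passing a LangChain Vertex AI chat model through `graph_config` the way the linked Azure example does (class names and the `model_instance` key are assumptions based on those docs, not a tested integration):
```python
from langchain_google_vertexai import ChatVertexAI
from scrapegraphai.graphs import SmartScraperGraph

# Assumes Google Cloud credentials are already configured in the environment.
llm = ChatVertexAI(model_name="gemini-pro")

graph_config = {
    "llm": {"model_instance": llm},
}

smart_scraper = SmartScraperGraph(
    prompt="List me all the articles",
    source="https://example.com",
    config=graph_config,
)
print(smart_scraper.run())
```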
| 1medium
|
Title: A few questions about new-word extraction
Body: <!--
Notes and the version number are required; otherwise there will be no reply. If you want a prompt reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer in any of them:
- [Home documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer there either.
* I understand that this open-source community is a voluntary community of enthusiasts and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in the brackets to confirm the items above.
## Version
<!-- For release versions, give the jar file name without the extension; for the GitHub repository version, state master or portable branch -->
The current latest version is: master
The version I am using is: master
1. When com.hankcs.hanlp.mining.word.WordInfo.computeProbabilityEntropy() computes left/right entropy, a word located at the beginning or the end of the text takes the minimum value, so its entropy comes out as 0; once a default entropy threshold is set, words at the beginning or end can never be extracted.
2. Mutual information:
the algorithm given in the code
*(two screenshots comparing the code and the formula are omitted)*
is not completely consistent with the formula.
| 1medium
|
Title: Instapy does not follow the correct amount of people
Body: ```
session = InstaPy(username="Username", password="Password")
session.login()
tags = ['makeup','eyeliner']
session.follow_by_tags(tags=tags, amount=10)
```
as you can see, I set the amount to 10 but for whatever reason it follows 19.
It first follows 19 people from the hashtag 'makeup', then 19 from the hashtag 'eyeliner'. | 1medium
|
Title: Unhandled Exception (7b158f29f)
Body: Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `expected a string or other character buffer object`
Error traceback:
```
Traceback (most recent call last):
File "/root/AutoSploit/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/root/AutoSploit/lib/term/terminal.py", line 553, in terminal_main_display
self.do_token_reset(api, token, username)
File "/root/AutoSploit/lib/term/terminal.py", line 165, in do_token_reset
username_.write(username)
TypeError: expected a string or other character buffer object
```
Metasploit launched: `False`
| 1medium
|
Title: DNS labels incorrectly require lower case characters
Body: **What happened**:
The DNS-1035 checks (RFC 1035) will result in an error if the name is not lower case.
**What you expected to happen**:
The RFC does not prohibit upper-case letters in labels; labels just must be evaluated in a case-insensitive way.
**How to reproduce it (as minimally and precisely as possible)**:
The fields that are evaluated directly vary; some fields that are "fixed" via a workaround before evaluation in CRDs are shown below.
**Anything else we need to know?**:
See RFC spec.
I'm seeking to understand why the DNS-1035 check enforces case sensitivity. Is this a design decision in Kubernetes?
https://github.com/kubernetes/kubernetes/blob/939f8ffa87dfd25309d3350ba798498e522a85e2/staging/src/k8s.io/apimachinery/pkg/util/validation/validation.go#L152-L153
From the RFC:
> \<label\> ::= \<letter\> [ [ \<ldh-str\> ] \<let-dig\> ]
>
> \<ldh-str\> ::= \<let-dig-hyp\> | \<let-dig-hyp\> \<ldh-str\>
>
> \<let-dig-hyp\> ::= \<let-dig\> | "-"
>
> \<let-dig\> ::= \<letter\> | \<digit\>
>
> \<letter\> ::= any one of the 52 alphabetic characters **A through Z in**
> **upper case and a through z in lower case**
>
> \<digit\> ::= any one of the ten digits 0 through 9
>
> Note that **while upper and lower case letters are allowed in domain**
> **names, no significance is attached to the case**. That is, two names with
> the same spelling but different case are to be treated as if identical.
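For illustration, a case-insensitive variant of the label check could be as simple as adding a flag to the same pattern (a sketch mirroring the regex in validation.go, not the actual Kubernetes code):
```python
import re

# Same grammar as dns1035LabelFmt, evaluated case-insensitively.
dns1035_label = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$", re.IGNORECASE)

assert dns1035_label.match("my-service")   # accepted today
assert dns1035_label.match("My-Service")   # allowed by RFC 1035, rejected by Kubernetes
assert not dns1035_label.match("-bad")     # must start with a letter either way
```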
There are some checks in the CRDs that will lowercase the names before the check. The history here is not quite clear to me, but assuming the fields are expected to be DNS labels, it looks like a workaround for the actual check.
https://github.com/kubernetes/kubernetes/blob/e4782435429ae6518ad2f5e1713f1f99b30f0fb6/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation.go#L404-L406 | 1medium
|
Title: Automated cherry pick of #70696: Filter out spammy audit logs from cluster autoscaler.
Body: Cherry pick of #70696 on release-1.12.
#70696: Filter out spammy audit logs from cluster autoscaler. | 0easy
|
Title: How do I reuse weights from one network in another?
Body: lets say i have an autoencoder network
```
encoder = tflearn.input_data(shape=[None, samples])
encoder2 = tflearn.fully_connected(encoder, 30)
decoder = tflearn.fully_connected(encoder2, samples, activation='tanh')
```
How do I input my own values into layer encoder2 once the network is trained?
In other words turn it into this network:
```
encoder2 = tflearn.input_data(shape=[None, 30])
decoder = tflearn.fully_connected(encoder2, samples)
```
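One possible approach, sketched below: train the full autoencoder, then build the decoder-only network and copy the trained weights across with `DNN.get_weights`/`DNN.set_weights` (the exact wiring is my assumption and untested):
```python
import tflearn

samples = 784  # assumed input dimension for the sketch

# Trained autoencoder (as above).
encoder = tflearn.input_data(shape=[None, samples])
encoder2 = tflearn.fully_connected(encoder, 30)
decoder = tflearn.fully_connected(encoder2, samples, activation='tanh')
model = tflearn.DNN(decoder)
# model.fit(X, X, ...)

# Decoder-only network that takes 30-dim codes as input.
dec_input = tflearn.input_data(shape=[None, 30])
dec_output = tflearn.fully_connected(dec_input, samples, activation='tanh')
model2 = tflearn.DNN(dec_output)

# Copy the trained decoder weights and biases into the new network.
model2.set_weights(dec_output.W, model.get_weights(decoder.W))
model2.set_weights(dec_output.b, model.get_weights(decoder.b))
```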
| 1medium
|
Title: The First Simple Example Has a Question
Body: ### Description
From README.md: "Executing a coroutine on a child process is as simple as:"
```python
async def put(url, params):
    ...

async def main():
    p = Process(target=put, args=("https://jreese.sh", ))
    await p
```
However, the `args` tuple doesn't include `params`, so running the example fails with:
```
TypeError: put() missing 1 required positional argument: 'params'
```
### Details
* OS: Centos 7.4
* Python version: 3.7.2
* aiomultiprocess version: 0.5.0
* Can you repro on master?
* Can you repro in a clean virtualenv?
| 0easy
|
Title: face_recognition.face_encodings(...) doesn't work on multiprocessing
Body: * face_recognition version:lastest
* Python version:3.6
* Operating System:Ubuntu16.04
### Description
I was trying to run face_recognition.face_encodings under multiprocessing because it's time-consuming.
But it seems face_recognition doesn't do anything in the sub-processes.
### What I Did
```python
import cv2
import face_recognition as fr
import os
import time
import numpy as np
from multiprocessing import Process, Pool

img_list = []

def multi_process(img):
    cnt = img_list.index(img) + 1
    print(cnt)
    locations = fr.face_locations(img, number_of_times_to_upsample=2, model="cnn")
    print(cnt, locations)
    out_path = "./test/faces"
    encs = fr.face_encodings(img, locations, 500)
    for index, location in enumerate(locations):
        print(out_path + "/%d_%d_%d_%d_%d_%d.jpg" % (cnt, index, location[1], location[0],
                                                     location[3], location[2]))
        img = cv2.rectangle(img, (location[1], location[0]), (location[3], location[2]), (0, 255, 0), 1)
        if not os.path.exists(out_path):
            os.makedirs(out_path)
        cv2.imwrite(out_path + "/%d_%d_%d_%d_%d_%d.jpg" % (cnt, index, location[1], location[0],
                                                           location[3], location[2]), img)
        np.save(out_path + "/%d_%d" % (cnt, index), encs[index])

def preprocess(video_path):
    cap = cv2.VideoCapture(video_path)
    cnt = 0
    out_path = "./test/faces"
    while True:
        _, img = cap.read()
        cnt += 1
        if img is None:
            break
        img_list.append(img)
    pro_cnt = 32
    print(len(img_list))
    locations = fr.face_locations(img_list[0], number_of_times_to_upsample=2, model="cnn")
    print(locations)
    for i in range(len(img_list) // pro_cnt):
        p = Pool(pro_cnt)
        for img_i in img_list[i * pro_cnt : i * pro_cnt + pro_cnt]:
            p.apply_async(multi_process, (img_i,))
        p.close()
        p.join()

if __name__ == "__main__":
    video_path = "/path/to/test.avi"
    data_base = "./test"
    start = time.time()  # note: `import time` added above; the original snippet used time without importing it
    preprocess(video_path)
    end = time.time()
    print(end - start)
```
### Thanks
Thank you all for taking the time to look at my issue. | 2hard
|
Title: PyTorch With distributed.sh train.py: error: unrecognized arguments: --local-rank=0
Body: **Describe the bug**
PyTorch With distributed.sh train.py: error: unrecognized arguments: --local-rank=0
FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun
**To Reproduce**
Steps to reproduce the behavior:
1. ./distributed_train.sh
2.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
I solved it by using `torchrun --nproc_per_node=$NUM_PROC train.py` instead.
| 1medium
|
Title: [Open-source self-recommendation] Automatically update the sponsor list with a GitHub Action
Body: ## Project recommendation
- Project URL: only GitHub open-source projects are collected; please give the GitHub project URL
> <https://github.com/yiyungent/afdian-action>
- Category: please choose one of (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, other, books, machine learning)
> C#
- Planned follow-up updates for the project:
> - Added: common utility classes
- Project description:
- Required: what the project is, what it can be used for, what its features are or what pain point it solves
- Optional: what scenarios it is suitable for, what beginners can learn from it
- Description length (not including sample code): 10 - 256 characters
> Uses a `GitHub Action` to automatically update the sponsor list from `Afdian (爱发电)`, sparing you the hassle of updating the sponsor list by hand; custom styles and content are supported
- Reasons for recommending it: what makes it stand out? What pain point does it solve?
> - Many open-source projects use `Afdian (爱发电)` as a sponsorship channel and show a sponsor list in `README.md`, but updating the list manually is tedious; this project updates the sponsor list automatically via a `GitHub Action`
> - Supports custom template files, through which the style and content can be customized
> - Since it is a step in a `GitHub Action`, it can also be used to add a sponsors page when generating static sites with `Hexo`, `Hugo`, etc.
- Sample code: (optional) length: 1-20 lines
- Screenshot: (optional) gif/png/jpg
| 1medium
|
Title: What happened to the facebook repo on HuggingFace?
Body: Could I ask about that? I couldn't find any models from Facebook at this time (just today). | 3misc
|
Title: CoreDictionary contains the word "机收", causing "手机收邮件" to be segmented as "手 机收 邮件"
Body: <!--
This is HanLP's issue template, used to standardize the format of questions. There was no intention to constrain everyone with a rigid format, but the issue area is honestly a bit chaotic. Sometimes only after a long exchange does it turn out that the reporter was using an old version, had modified the code, and so on, wasting both sides' valuable time. So a standard template is used here to unify things; apologies for any inconvenience. Apart from the notes, the other parts may be adjusted to fit the actual situation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer in any of them:
- [Home documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer there either.
* I understand that this open-source community is a voluntary community of enthusiasts and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in the brackets to confirm the items above.
## Version
<!-- For release versions, give the jar file name without the extension; for the GitHub repository version, state master or portable branch -->
The current latest version is: 1.3.4
The version I am using is: 1.3.2
## My question
<!-- Please describe the question in detail; the more detail, the more likely it is to be solved -->
When segmenting the sentence "手机收邮件的问题", the result is "手 机收 邮件 的 问题"; even after adding "手机" to CustomDictionary the result is still the same. I tried the segmenter classes NotionalTokenizer, HanLP.segment(), and HanLP.newSegment(); all of them show this problem.
Debugging shows that CoreDictionary contains the word "机收", which causes "手机收邮件" to be segmented as "手 机收 邮件".
## Reproducing the problem
<!-- How did you trigger the problem? For example, did you modify the code? The dictionary or the model? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
static void testSeg(){
    Segment segment = HanLP.newSegment().enableCustomDictionary(true);
    String str = "手机收邮件的问题";
    List<Term> res = segment.seg(str);
    StringBuilder sb = new StringBuilder();
    for(Term term : res){
        sb.append(term.word).append("\t");
    }
    System.out.println(sb.toString());
}
```
### Expected output
<!-- What correct output do you expect? -->
```
手机 收 邮件 的 问题
```
### Actual output
<!-- What did HanLP actually output? What was the effect? Where is it wrong? -->
```
手 机收 邮件 的 问题
```
## Other information
<!-- Any potentially useful information: screenshots, logs, configuration files, related issues, etc. -->
| 1medium
|
Title: Speed up processing of VCFs when opening the contacts menu
Body: Opening the contacts menu results, for me, in 627 calls to `Sabre\VObject\Parser\MimeDir::readLine`, which alone take around 0.75 seconds.
Ref https://blackfire.io/profiles/2a140454-bfeb-4241-90a3-a9c8012f81e2/graph | 1medium
|
Title: Ensure /etc/hosts has a header always - Fix conformance test
Body: We have 2 scenarios where we copy /etc/hosts
- with host network (we just copy the /etc/hosts from node)
- without host network (create a fresh /etc/hosts from pod info)
We are having trouble figuring out whether a /etc/hosts in a
pod/container has been "fixed-up" or not. And whether we used
host network or a fresh /etc/hosts in the various ways we start
up the tests which are:
- VM/box against a remote cluster
- As a container inside the k8s cluster
- DIND scenario in CI where test runs inside a managed container
Please see the previous misguided attempt to fix this problem at
ba20e63446815f8bcab4fc4c6e07eab2e2a9e121. In this commit we revert
the code from there as well.
So we should make sure:
- we always add a header if we touched the file
- we add slightly different headers so we can figure out if we used the
host network or not.
Update the test case to inject /etc/hosts from node to another path
(/etc/hosts-original) as well and use that to compare.
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
2. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
3. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
4. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
-->
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
<!-- Write your release note:
1. Enter your extended release note in the below block. If the PR requires additional action from users switching to the new release, include the string "action required".
2. If no release note is required, just write "NONE".
-->
```release-note
Rework Kubelet set `/etc/hosts` behavior to fix conformance testability
```
| 1medium
|
Title: Enhance netpol E2E tests connectivity information on startup
Body: <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Improve logs and connectivity debugging information on Netpol E2E tests suite bootstrap, so it will give us fine-grained info:
https://github.com/kubernetes/kubernetes/blob/73d4c245ef870390b052a070134f7c4751744037/test/e2e/network/netpol/kubemanager.go#L253
It can be a quick check-in of the health of the components, for example.
**Why is this needed**:
This can help diagnose failed infrastructure before starting the actual pod probing (which would otherwise fail),
XRef https://github.com/kubernetes/kubernetes/issues/98102
| 1medium
|
Title: Inline format definition issue
Body: Hi,
I am using XlsxWriter to put some JSON files into tabular form. I think there is a bug with inline format declaration, which can be seen below.
I am using Python version 3.7.3 and XlsxWriter 1.1.8 and Excel version 12.26 with macOS Mojave 10.14.5.
Here is some code that demonstrates the problem:
```python
sheet.write(row_index - 1, column_index, merchant_string, {'bold': True})
workbook.close()
```
if I change that particular line to this it is all ok.
```python
format = workbook.add_format({'bold': True})
sheet.write(row_index - 1, column_index, merchant_string, format)
```
Here is the error message that first case creates
```
Traceback (most recent call last):
File "/Users/xxx/Desktop/gg/xxxx/main.py", line 91, in <module>
workbook.close()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/workbook.py", line 304, in close
self._store_workbook()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/workbook.py", line 646, in _store_workbook
xml_files = packager._create_package()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/packager.py", line 135, in _create_package
self._write_worksheet_files()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/packager.py", line 190, in _write_worksheet_files
worksheet._assemble_xml_file()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/worksheet.py", line 3745, in _assemble_xml_file
self._write_sheet_data()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/worksheet.py", line 5242, in _write_sheet_data
self._write_rows()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/worksheet.py", line 5435, in _write_rows
self._write_cell(row_num, col_num, col_ref)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlsxwriter/worksheet.py", line 5600, in _write_cell
xf_index = cell.format._get_xf_index()
AttributeError: 'dict' object has no attribute '_get_xf_index'
```
This can be handled easily, but I wanted this issue to be known.
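As a stopgap, a small wrapper sketch for the behaviour I originally expected (this is my own helper, not part of the XlsxWriter API; note that creating a new Format per call can bloat the workbook, so caching them would be wise in practice):
```python
def write_formatted(workbook, sheet, row, col, value, fmt_dict=None):
    # Convert a plain dict into the Format object that write() requires.
    fmt = workbook.add_format(fmt_dict) if fmt_dict else None
    return sheet.write(row, col, value, fmt)

# Usage, matching the failing line above:
# write_formatted(workbook, sheet, row_index - 1, column_index, merchant_string, {'bold': True})
```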
-----------------------------------
Edit:
Just realized that by using the .add_format() method, the initialized object is an instance of 'xlsxwriter.format.Format'. Even though this makes my post somewhat pointless, the inline declaration is also very intuitive to me. Maybe consider it a feature request? | 1medium
|
Title: Problem installing Marzban
Body:
The server is on Hetzner. When I install it and the installation finishes, the panel doesn't come up. The difference from others that I noticed was in the logs:
marzban-1 | INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
That's what I get, but for others it is:
marzban-1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) | 1medium
|
Title: Feature request: support readOnlyRecommended
Body: Hi,
While not in the specifications, Excel (online) has a "Protect Workbook" feature that changes the default open behaviour for a spreadsheet to readonly.
This is done by adding a readOnlyRecommended attribute to a fileSharing node on the workbook root node like so
`<fileSharing readOnlyRecommended="1"/>`
(in excel just after the fileVersion node)
I've done this locally by adding
`self._xml_empty_tag('fileSharing', [('readOnlyRecommended', "1")])`
to Workbook._assemble_xml_file()
| 1medium
|
Title: No package python36-devel available.
Body: I am following the example of how to build TensorFlow.
When I input the command below
docker exec -i -t lambdapackgen /bin/bash /outputs/buildPack_py3.sh
error is:
...
No package python36-devel available.
No package python36-virtualenv available.
No package python36-pip available.
...
...
rm: cannot remove 'pip': No such file or directory
rm: cannot remove 'pip-*': No such file or directory
rm: cannot remove 'wheel': No such file or directory
rm: cannot remove 'wheel-*': No such file or directory
rm: cannot remove 'easy_install.py': No such file or directory
...
I think this shell script doesn't install python36, and then it also fails to install pip...
How can I fix this?
Thank you in advance!!
| 1medium
|
Title: oc cluster up --service-catalog=true fails to register template broker
Body: For me, cluster up consistently times out trying to install the template broker when using the service catalog.
```
$ oc cluster up --version=latest --service-catalog=true
```
Service catalog is running without errors, but the template broker is not registered. Increasing the log level, I see
```
-- Installing service catalog ...
I0703 08:12:49.666835 32682 servicecatalog.go:85] instantiating service catalog template
I0703 08:12:55.799747 32682 servicecatalog.go:95] polling for service catalog api server endpoint availability
I0703 08:13:00.800325 32682 servicecatalog.go:95] polling for service catalog api server endpoint availability
I0703 08:13:00.804488 32682 servicecatalog.go:110] setting up the api aggregator
I0703 08:13:00.810042 32682 servicecatalog.go:142] registering the template broker with the service catalog
I0703 08:13:01.815732 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:02.815544 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:03.816358 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:04.815602 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:05.813794 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:06.844970 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:07.835231 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:08.815566 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:09.820726 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:10.812604 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:11.816320 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:12.815285 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:13.815608 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:14.816297 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:15.812245 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:16.820990 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:17.815509 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:18.815179 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:19.815231 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:20.813902 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:21.814974 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:22.815265 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:23.815524 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:24.815143 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:25.813003 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:26.816452 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:27.815056 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:28.815034 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:29.815683 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:30.812381 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
I0703 08:13:30.814036 32682 servicecatalog.go:178] retrying registration after error the server could not find the requested resource (post brokers.servicecatalog.k8s.io)
FAIL
Error: failed to register broker with service catalog: timed out waiting for the condition
```
##### Version
Current master.
```
$ oc version
oc v3.6.0-alpha.2+fc34104-740
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server https://127.0.0.1:8443
kubernetes v1.6.1+5115d708d7
```
cc @bparees @csrwng | 1medium
|
Title: cli: cookiecutter project
Body: Add cookiecutter template rendering on init
https://github.com/rochacbruno/quokka_ng/issues/57 | 1medium
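A minimal sketch of what the `init` command could call, using cookiecutter's Python API (the template URL and context keys are placeholders, not a decided design):
```python
from cookiecutter.main import cookiecutter

def init_project(project_name: str) -> None:
    # Render the project skeleton from a cookiecutter template repository.
    cookiecutter(
        "https://github.com/rochacbruno/quokka_ng",  # placeholder template URL
        no_input=True,
        extra_context={"project_name": project_name},
    )
```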
|
Title: [archival placeholder]
Body: This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved. | 3misc
|
Title: spam false positive
Body: Hello!
SimpleLogin rejects emails from fwd@dropmail.me. This address is NOT malicious and is used by dropmail.me (a one-time mailbox generation service with forwarding to a real email address). Please fix this problem.
However, emails are sent perfectly to all addresses (including one-time emails (tested on 10minutemail.net and temp-mail.org)), which tells us that this is a problem on the SimpleLogin side.
Links:
Issue website: https://dropmail.me
Issue email receiving from: fwd@dropmail.me
Important notes:
The problem with forwarding from fwd@dropmail.me occurs ONLY with SimpleLogin addresses. Forwarding to ProtonMail/Gmail/Yahoo and so on works OK
SimpleLogin rejects the mail from fwd@dropmail.me with “554 5.7.1 Spam message rejected”.
Yes, I know about surbl (https://www.surbl.org/) blocking. I had contacted them, but their tech support response was that "this is a problem on the recipient side, and temp-mail services providing forwarding functions often get on such lists. You need to contact the support team of your email provider."
Please fix the issue | 1medium
|
Title: Type hints: re-add `py.typed` marker [regression]
Body: ## Description of the problem, including code/CLI snippet
```
(.venv) C:\Build\project>echo import gitlab > bug.py
(.venv) C:\Build\project>mypy bug.py
```
## Expected Behavior
No `mypy` error
## Actual Behavior
```
bug.py:1: error: Skipping analyzing "gitlab": module is installed, but missing library stubs or py.typed marker [import-untyped]
bug.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
bug.py:1: note: See https://mypy.rtfd.io/en/stable/_refs.html#code-import-untyped for more info
Found 1 error in 1 file (checked 1 source file)
```
## Specifications
- python-gitlab version: 4.1.0 and 4.0.0 (not before 4)
- API version you are using (v3/v4): n/a
- Gitlab server version (or gitlab.com): n/a
## Relevant info
> Mypy will not try inferring the types of any 3rd party libraries you have installed unless they either have declared themselves to be [PEP 561 compliant stub package](https://mypy.readthedocs.io/en/stable/installed_packages.html#installed-packages) (e.g. with a py.typed file) or have registered themselves on [typeshed](https://github.com/python/typeshed), the repository of types for the standard library and some 3rd party libraries.
https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-library-stubs-or-py-typed-marker
And in fact, `echo > .venv\Lib\site-packages\gitlab\py.typed` fixes the problem for me. | 1medium
|
Title: TikTok API Internal Server Error 500
Body: The TikTok API returns a 500 internal server error.
example here: https://api.douyin.wtf/api?url=https://www.tiktok.com/@dard..e..dill/video/7225981135069760773 | 1medium
|
Title: Add a Chinese version of README
Body: ### 📚 Documentation
These are the reasons why I want to add a Chinese version of the README:
1. Reduce language barriers and expand the user base: Chinese is one of the most widely spoken languages in the world, and providing a Chinese version of the README will help a large number of Chinese developers and researchers get up to speed with PyTorch Lightning, especially those who are not familiar with English, thereby attracting more people to participate in and use the project.
2. Increase the international reach of open source projects: Adding multilingual support, especially Chinese, will help PyTorch Lightning spread globally, especially in academia and industry in China and other Chinese-speaking regions. This will greatly enhance the project's user base and number of contributors.
3. Accelerate community contributions: By providing Chinese documentation, Chinese developers can better understand the project, which makes it easier to participate in the project's development and contribute to it, and promotes the activity and growth of the open source community.
4. Improve learning efficiency: Providing Chinese users with native-language versions of documents can significantly shorten their learning curve, allowing them to focus on the technology itself instead of spending extra time on language understanding. This will improve the efficiency of learning and research.
cc @borda | 0easy
|
Title: ORM alias on CTE doesn't separate out things correctly
Body:
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/11164
```py
from sqlalchemy import literal
from sqlalchemy import select
from sqlalchemy import union_all
from sqlalchemy.orm import aliased
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
pass
class A(Base):
__tablename__ = "a"
i: Mapped[int] = mapped_column(primary_key=True)
j: Mapped[int] = mapped_column()
class B(Base):
__tablename__ = "b"
i: Mapped[int] = mapped_column(primary_key=True)
j: Mapped[int | None] = mapped_column()
# fails
a = aliased(A, select(A).where(A.i > A.j).cte("filtered_a"))
ac = a
# works
#a = select(A).where(A.i > A.j).cte("filtered_a")
#ac = a.c
a1 = select(ac.i, literal(1, literal_execute=True).label("j"))
b = select(B).join(a, ac.i == B.i).where(B.j.is_not(None))
query = union_all(a1, b)
print(query)
``` | 1medium
|
Title: DOC: ceil, floor, etc. changes in 2.0 not represented in docs
Body: ### Describe the issue:
As [ceil's doc](https://numpy.org/doc/2.1/reference/generated/numpy.ceil.html) claims:
> The return is the ceiling of each element in x, with float dtype. This is a scalar if x is a scalar.
However, I found that NumPy 2 breaks this contract.
### Reproduce the code example:
```python
import numpy as np
a = np.array([1], dtype="int64")
b = np.ceil(a)
print(b.dtype)
```
### Error message:
```shell
>>> import numpy as np
>>> a = np.array([1], dtype="int64")
>>> b = np.ceil(a)
>>> print(b.dtype)
int64
I test it in NumPy 1.20.3:
>>> import numpy as np
>>> a = np.array([1], dtype="int64")
>>> b = np.ceil(a)
>>> print(b.dtype)
float64
```
### Python and NumPy Versions:
2.1.0
3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
### Runtime Environment:
[{'numpy_version': '2.1.0',
'python': '3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]',
'uname': uname_result(system='Linux', node='42e7fc5f4d8e', release='6.8.0-49-generic', version='#49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}},
{'architecture': 'Haswell',
'filepath': '/usr/local/lib/python3.10/dist-packages/numpy.libs/libscipy_openblas64_-ff651d7f.so',
'internal_api': 'openblas',
'num_threads': 24,
'prefix': 'libscipy_openblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.27'}]
### Context for the issue:
_No response_ | 1medium
|
Title: throw exception when calling history() on a vietnamese stock which contains dividend record
Body: ### Describe bug
When calling history() for Vietnamese stocks, if the history contains dividend records, those records include a currency field, causing an exception. For example:
```
tk = yf.Ticker('PNJ.VN')
hist = tk.history(start='2024-11-01')
```
will cause:
```
Traceback (most recent call last):
File "/Users/alai04/projects/python/yfinance/history.py", line 8, in <module>
hist = tk.history(start='2024-11-01')
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper
result = func(*args, **kwargs)
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/base.py", line 80, in history
return self._lazy_load_price_history().history(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper
result = func(*args, **kwargs)
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/scrapers/history.py", line 318, in history
dividends, splits, capital_gains = utils.parse_actions(data["chart"]["result"][0])
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 536, in parse_actions
dividends.columns = ["Dividends"]
^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 6313, in __setattr__
return object.__setattr__(self, name, value)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "properties.pyx", line 69, in pandas._libs.properties.AxisProperty.__set__
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 814, in _set_axis
self._mgr.set_axis(axis, labels)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/managers.py", line 238, in set_axis
self._validate_set_axis(axis, new_labels)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/base.py", line 98, in _validate_set_axis
raise ValueError(
...<2 lines>...
)
ValueError: Length mismatch: Expected axis has 2 elements, new values have 1 elements
```
### Simple code that reproduces your problem
```python
tk = yf.Ticker('PNJ.VN')
hist = tk.history(start='2024-11-01')
```
### Debug log
```
DEBUG Entering history()
DEBUG Entering history()
DEBUG PNJ.VN: Yahoo GET parameters: {'period1': '2024-11-01 00:00:00+07:00', 'period2': '2025-02-17 14:10:03+07:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/PNJ.VN
DEBUG params={'period1': 1730394000, 'period2': 1739776203, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = 'vsEB7v4e5NI'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG PNJ.VN: yfinance received OHLC data: 2024-11-01 02:00:00 -> 2025-02-17 06:54:51
DEBUG PNJ.VN: OHLC after cleaning: 2024-11-01 09:00:00+07:00 -> 2025-02-17 13:54:51+07:00
Traceback (most recent call last):
File "/Users/alai04/projects/python/yfinance/history.py", line 6, in <module>
hist = tk.history(start='2024-11-01')
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper
result = func(*args, **kwargs)
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/base.py", line 80, in history
return self._lazy_load_price_history().history(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper
result = func(*args, **kwargs)
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/scrapers/history.py", line 318, in history
dividends, splits, capital_gains = utils.parse_actions(data["chart"]["result"][0])
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 535, in parse_actions
dividends.columns = ["Dividends"]
^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 6313, in __setattr__
return object.__setattr__(self, name, value)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "properties.pyx", line 69, in pandas._libs.properties.AxisProperty.__set__
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 814, in _set_axis
self._mgr.set_axis(axis, labels)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/managers.py", line 238, in set_axis
self._validate_set_axis(axis, new_labels)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/base.py", line 98, in _validate_set_axis
raise ValueError(
...<2 lines>...
)
ValueError: Length mismatch: Expected axis has 2 elements, new values have 1 elements
```
### Bad data proof
_No response_
### `yfinance` version
0.2.53
### Python version
3.13.2
### Operating system
macOS 15.3 | 2hard
|
Title: Merging corpora requires converting itertools chain object to list object
Body: When merging corpora, it is essential to convert the itertools.chain object to a list; otherwise the serialization will not save the whole merged corpus.
```python
# now we can merge corpora from the two incompatible dictionaries into one
merged_corpus = itertools.chain(some_corpus_from_dict1, dict2_to_dict1[some_corpus_from_dict2])
```
should be
```python
merged_corpus = list(itertools.chain(some_corpus_from_dict1, dict2_to_dict1[some_corpus_from_dict2]))
```
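For context, a minimal demonstration of why the `list()` call matters (`itertools.chain` is a single-pass iterator, so anything that iterates it before serialization leaves it empty):
```python
import itertools

merged = itertools.chain([1, 2], [3, 4])
print(list(merged))  # [1, 2, 3, 4]
print(list(merged))  # [] -- the iterator is exhausted on the second pass
```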
Then the merged_corpus can be serialized using the standard
`MmCorpus.serialize(merged_corpus_output_fname, merged_corpus)`. | 0easy
|
Title: Changing a value of a widget placed below other widgets triggers their re-rendering if another one used of the same type
Body: Place two widgets of the same type (dropdown) in two different cells separated by a cell which prints a message with a counter.
If the upper one's value is changed, all of them are re-rendered (OK), but if the lower one is changed, they are all still re-rendered, even though they should not be. I suspect it has to do with the fact that these two widgets are of the same type, and there is no unique identifier or something. | 1medium
|
Title: Run FLUX-controlnet zero3 training failed: 'weight' must be 2-D
Body: ### Describe the bug
I am attempting to use ZeRO-3 for Flux ControlNet training on 8 GPUs, following the guidance of the [README](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_flux.md#apply-deepspeed-zero3). The error below occurred:
```
[rank0]: RuntimeError: 'weight' must be 2-D
```
### Reproduction
accelerate config:
```
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
gradient_accumulation_steps: 8
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
training command:
```
accelerate launch --config_file "./accelerate_config_zero3.yaml" train_controlnet_flux_zero3.py --pretrained_model_name_or_path=/srv/mindone/wty/flux.1-dev/ --jsonl_for_train=/srv/mindone/wty/diffusers/examples/controlnet/train_1000.jsonl --conditioning_image_column=conditioning_image --image_column=image --caption_column=text --output_dir=/srv/mindone/wty/diffusers/examples/controlnet/single_layer --mixed_precision="bf16" --resolution=512 --learning_rate=1e-5 --max_train_steps=100 --train_batch_size=1 --gradient_accumulation_steps=8 --num_double_layers=4 --num_single_layers=0 --seed=42 --gradient_checkpointing --cache_dir=/srv/mindone/wty/diffusers/examples/controlnet/cache --dataloader_num_workers=8 --resume_from_checkpoint="latest"
```
### Logs
```shell
Map: 0%| | 0/1000 [00:00<?, ? examples/s]
[rank0]: Traceback (most recent call last):
[rank0]: File "/srv/mindone/wty/diffusers/examples/controlnet/train_controlnet_flux_zero3.py", line 1481, in <module>
[rank0]: main(args)
[rank0]: File "/srv/mindone/wty/diffusers/examples/controlnet/train_controlnet_flux_zero3.py", line 1182, in main
[rank0]: train_dataset = train_dataset.map(
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
[rank0]: out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3079, in map
[rank0]: for rank, done, content in Dataset._map_single(**dataset_kwargs):
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3519, in _map_single
[rank0]: for i, batch in iter_outputs(shard_iterable):
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3469, in iter_outputs
[rank0]: yield i, apply_function(example, i, offset=offset)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3392, in apply_function
[rank0]: processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
[rank0]: File "/srv/mindone/wty/diffusers/examples/controlnet/train_controlnet_flux_zero3.py", line 1094, in compute_embeddings
[rank0]: prompt_embeds, pooled_prompt_embeds, text_ids = flux_controlnet_pipeline.encode_prompt(
[rank0]: File "/srv/mindone/wty/diffusers/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 396, in encode_prompt
[rank0]: pooled_prompt_embeds = self._get_clip_prompt_embeds(
[rank0]: File "/srv/mindone/wty/diffusers/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 328, in _get_clip_prompt_embeds
[rank0]: prompt_embeds = self.text_encoder(text_input_ids.to(device), output_hidden_states=False)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 1056, in forward
[rank0]: return self.text_model(
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 947, in forward
[rank0]: hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 292, in forward
[rank0]: inputs_embeds = self.token_embedding(input_ids)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 190, in forward
[rank0]: return F.embedding(
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/functional.py", line 2551, in embedding
[rank0]: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
[rank0]: RuntimeError: 'weight' must be 2-D
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0(HEAD on #10945)
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.27
- Running on Google Colab?: No
- Python version: 3.9.21
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
### Who can help?
@yiyixuxu @sayakpaul | 2hard
|
Title: add a timestamp token that can be part of the file_template
Body: Wouldn't it be nice to have the migration files (the ones in the `migration/versions` folder) sorted by creation date? Their order is hard to retrieve without the help of the command `alembic history`.
I propose a new variable to use in the `file_template` option, that could be placed at first, so that the file order could be maintained.
Is that reasonable, or is it somehow a shallow request?
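For illustration, the kind of rendering I have in mind (the `%(epoch)s` token name is an invented example, not an existing alembic option):
```python
import time

file_template = "%(epoch)s_%(rev)s_%(slug)s"
filename = file_template % {
    "epoch": int(time.time()),      # the proposed timestamp token
    "rev": "1975ea83b712",          # example revision id
    "slug": "add_account_table",    # example slug
}
# e.g. "1739776203_1975ea83b712_add_account_table" -- sorts by creation time
```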
| 1medium
|
Title: [tubitv] An extractor error has occurred (caused by KeyError(`video_id`)) tubi
Body: ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
I have found references to extractor errors, but not for this site. Downloads were working yesterday (1/13/2025). Have tried multiple URLs, including one that I successfully downloaded yesterday.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--retries', 'infinite', '--socket-timeout', '10', '-o', 'E:\\vidl\\uncategorized\\%(title)s.%(ext)s', 'https://tubitv.com/movies/100025513/seven-days']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds [dade5e35c] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.12.232754 from yt-dlp/yt-dlp-nightly-builds)
[tubitv] Extracting URL: https://tubitv.com/movies/100025513/seven-days
[tubitv] 100025513: Downloading webpage
ERROR: 100025513: An extractor error has occurred. (caused by KeyError('100025513')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\tubitv.py", line 100, in _real_extract
KeyError: '100025513'
```
| 1medium
|
Title: Secret ended with: too old resource version
Body: I've read a lot about this issue on Google and noticed people saying these logs are expected and nothing to worry about, but I often see the same logs in the pods and I am curious how I can get rid of them. Why are we seeing these logs exactly? Please shed some light on that.
Secret ended with: too old resource version
```
W0528 08:24:35.191059 1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 136329240 (136330521)
W0528 08:39:29.197834 1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 136335511 (136336795)
W0528 08:55:35.204496 1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 136341573 (136343365)
W0528 09:09:04.216732 1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 136348121 (136349202)
W0528 09:24:00.224786 1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 136353600 (136355423)
W0528 09:41:36.231222 1 reflector.go:289] pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 136359660 (136362039)
```
#### Environment:
- Kubernetes version (use `kubectl version`): 1.18
- Cloud provider or hardware configuration: AWS
/sig api-machinery
| 1medium
|
Title: Linux Agent Wayland
Body: Currently the Linux agent works only with X. More and more distros are switching to Wayland as the default display server.
Supporting Wayland would be nice; having an option to choose between X and Wayland when connecting would be perfect.
| 1medium
|
Title: maxvol adjustment
Body: #### Is your feature request related to a problem? Please describe.
I think there might be an issue with the maxvol function. <https://github.com/tensorly/tensorly/blob/main/tensorly/contrib/decomposition/_tt_cross.py>
You referred to "The Greedy approximation algorithm for MAX-VOL" in the paper "Ali Çivril, Malik Magdon-Ismail: On selecting a maximum volume sub-matrix of a matrix and related problems."
In that paper, they mention "Remove the projection of v from every element of A." When I read this part, I interpreted it as calculating an orthogonal vector to the vector 𝑣 with the largest norm.
However, in your code, it doesn't seem to calculate the orthogonal vector. Instead, it uses something that looks strange to me.
Maybe I am misunderstanding your code or the paper.
Could you clarify whether I am wrong or if there is indeed an issue with the code?
#### Describe alternatives you've considered

This is your code
```python
#Find the row of max norm"
max_row_idx = np.argmax(rows_norms, axis=0)
max_row = A[rest_row_idx[max_row_idx], :]
projection = np.dot(A_new, max_row.T)
projection = projection / np.sum(max_row**2)
#calculate orthogonal vector"
A_new = A_new - np.outer(projection, max_row)
```
This is my code
#### Additional context
When I compared the performance between my code and yours, here are the results:

- The **blue line** represents my code, while the **orange line** represents your code.
- You can see that the volume achieved by my code is almost always larger than yours.
- Occasionally, your code outperforms mine, but this happens in only **11% of the iterations**.
The values on the graph were calculated using a randomly generated matrix.
Could you clarify the cause of this discrepancy? Is there an issue with the logic in your code, or is it something I might be missing?
Let me know if this works! | 1medium
|
Title: [Question] Does backtest method automatically apply inverse transformation for mappings?
Body: I'm currently studying Darts and it is clear to me how the backtest method works when I pass a mapped series to it as an argument.
For example, if I write
`best_model.backtest(series[0].map(np.log1p), forecast_horizon=1, metric=mae)`
where `best_model` is a Theta model object and mae was imported from Darts, will it infer the inverse transformation (in this case, np.expm1) and apply it to the forecasted data in order to calculate the metrics or not?
Thank you in advance. | 3misc
|
Title: بهم ریختگی تاریخ کاربران
Body: با سلام
دیشب ساعت 12 بعلت ری استارت یهویی سرور ها وضعیت تمام کاربرانی که حتی تاریخ هم داشتن یهویی شد expire .
مورد 1 : چرا اینطوری شده ؟
مورد 2 : دستوری هست که بصورت bulk همه رو یکبار فعال کنه ؟
سپاس | 1medium
|
Title: fix exact year inconsistencies in human readable duration
Body: <!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
-->
**What type of PR is this?**
/kind bug
**What this PR does / why we need it**:
Human readable duration formats for exact years (e.g. 2 years, 3 years, etc.) are not consistent. They are currently displayed as 2y0d, 3y0d, ..., 7y0d. 8 years and above showed only years. Other exact units of time did not display 0, e.g. 2h and not 2h0m. This PR address this inconsistency by making exact years display only the year, and no day(s).
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
_If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`_*
-->
Fixes #89960
**Special notes for your reviewer**:
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
For more information on release notes see: https://git.k8s.io/community/contributors/guide/release-notes.md
-->
```release-note
NONE
```
**Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.**:
<!--
This section can be blank if this pull request does not require a release note.
When adding links which point to resources within git repositories, like
KEPs or supporting documentation, please reference a specific commit and avoid
linking directly to the master branch. This ensures that links reference a
specific point in time, rather than a document that may change over time.
See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files
Please use the following format for linking documentation:
- [KEP]: <link>
- [Usage]: <link>
- [Other doc]: <link>
-->
```docs
```
| 1medium
|
Title: Ability to choose file from UI
Body: Hey @tfranzel,
First of all, thanks for the great library!
I'd like to get a file picker to show up on the Swagger UI (I believe this is possible, but if not then let me know). My understanding was that I should be able to do that using the `FileUploadParser` and `extend_schema`. However, when I try to get it it just seems to think the field is a string field.
Here's the relevant part of my code:
```
class SamplesheetSerializer(Serializer):
file = serializers.FileField()
class SamplesheetView(APIView):
parser_classes = [FileUploadParser]
@extend_schema(request=SamplesheetSerializer)
@action(detail=False, methods=['post'])
def post(self, request: Request, filename: Optional[str] = None):
...
```
And here's the Swagger it generates:
<img width="1362" alt="Screen Shot 2021-07-12 at 1 57 49 PM" src="https://user-images.githubusercontent.com/7347808/125354855-276bdd00-e319-11eb-8007-c7da2ae931e3.png">
Any help/hints would be greatly appreciated! | 1medium
|
Title: TokenRegex rules file
Body: How to add a prop for tokenregex using python ?
[https://nlp.stanford.edu/software/tokensregex.html](url) mentions how to use a basic_ner.rules for rule based annotations, i have tried adding to prop but it didnt work. Something similar to [https://stackoverflow.com/questions/61235575/stanford-corenlp-tokensregex-error-while-parsing-the-rules-file-in-python](url) . Am i missing something ?
Or can you provide an example of how to pass on a rule file for tokenregex on the corenlp client | 1medium
|
Title: mac上点击ide左边的poco assistant里面poco inspector 按钮,ide直接崩溃
Body: 问题描述:mac上点击ide左边的poco assistan里面poco inspector 按钮,ide直接崩溃
mac版本:12.4
连接手机版本:Android 11
airtest版本:1.2.14
问题复线路径:连接手机-点击device id显示截图-点击左边的poco assistant选择android-点击poco inspector按钮,ide直接崩溃
log:
Thread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread
0 PyQt5.QtWebEngineWidgets.so 0x1072d6051 sipSubClass_QWebEngineView(void**) + 49
1 PyQt5.sip.so 0x1062d66a3 convertSubClass + 152
2 PyQt5.sip.so 0x1062d6563 sip_api_convert_from_type + 225
3 PyQt5.QtCore.so 0x106e6664d meth_QObject_sender(_object*, _object*) + 173
4 .Python 0x10639229b _PyCFunction_FastCallDict + 491
5 .Python 0x106414e27 call_function + 439
6 .Python 0x106411597 _PyEval_EvalFrameDefault + 27511
7 .Python 0x10641588f _PyEval_EvalCodeWithName + 2447
8 .Python 0x106416141 fast_function + 545
9 .Python 0x106414e01 call_function + 401
10 .Python 0x106411636 _PyEval_EvalFrameDefault + 27670
11 .Python 0x10641609d fast_function + 381
12 .Python 0x106414e01 call_function + 401
13 .Python 0x106411597 _PyEval_EvalFrameDefault + 27511
14 .Python 0x10641609d fast_function + 381
15 .Python 0x106414e01 call_function + 401
16 .Python 0x106411597 _PyEval_EvalFrameDefault + 27511
17 .Python 0x1064162bc _PyFunction_FastCallDict + 348
18 .Python 0x1063493e7 _PyObject_FastCallDict + 247
19 .Python 0x106349505 _PyObject_Call_Prepend + 149
20 .Python 0x106349220 PyObject_Call + 96
21 PyQt5.QtCore.so 0x106e7f9b8 PyQtSlot::call(_object*, _object*) const + 40
22 PyQt5.QtCore.so 0x106e7f8b7 PyQtSlot::invoke(void**, _object*, void*, bool) const + 375
23 PyQt5.QtCore.so 0x106e80418 PyQtSlotProxy::unislot(void**) + 88
24 PyQt5.QtCore.so 0x106e8038a PyQtSlotProxy::qt_metacall(QMetaObject::Call, int, void**) + 58
25 QtCore 0x107de2964 QObject::event(QEvent*) + 788
26 QtWidgets 0x109aecf12 QApplicationPrivate::notify_helper(QObject*, QEvent*) + 306
27 QtWidgets 0x109aee2ed QApplication::notify(QObject*, QEvent*) + 573
28 PyQt5.QtWidgets.so 0x1090fe50a sipQApplication::notify(QObject*, QEvent*) + 234
29 QtCore 0x107db954f QCoreApplication::notifyInternal2(QObject*, QEvent*) + 159
30 QtCore 0x107dba722 QCoreApplicationPrivate::sendPostedEvents(QObject*, int, QThreadData*) + 850
31 libqcocoa.dylib 0x1186343de 0x11860a000 + 173022
32 libqcocoa.dylib 0x118634c91 0x11860a000 + 175249
33 CoreFoundation 0x7ff8014d419b __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17
34 CoreFoundation 0x7ff8014d4103 __CFRunLoopDoSource0 + 180
35 CoreFoundation 0x7ff8014d3e7d __CFRunLoopDoSources0 + 242
36 CoreFoundation 0x7ff8014d2898 __CFRunLoopRun + 892
37 CoreFoundation 0x7ff8014d1e5c CFRunLoopRunSpecific + 562
38 HIToolbox 0x7ff80a1795e6 RunCurrentEventLoopInMode + 292
39 HIToolbox 0x7ff80a17934a ReceiveNextEventCommon + 594
40 HIToolbox 0x7ff80a1790e5 _BlockUntilNextEventMatchingListInModeWithFilter + 70
41 AppKit 0x7ff803f111fd _DPSNextEvent + 927
42 AppKit 0x7ff803f0f8ba -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1394
43 AppKit 0x7ff803f01f69 -[NSApplication run] + 586
44 libqcocoa.dylib 0x118633a8d 0x11860a000 + 170637
45 QtCore 0x107db50a2 QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 418
46 QtCore 0x107db9c62 QCoreApplication::exec() + 402
47 PyQt5.QtWidgets.so 0x1091ed6f2 meth_QApplication_exec_(_object*, _object*) + 82
48 .Python 0x10639229b _PyCFunction_FastCallDict + 491
49 .Python 0x106414e27 call_function + 439
50 .Python 0x106411597 _PyEval_EvalFrameDefault + 27511
51 .Python 0x10641609d fast_function + 381
52 .Python 0x106414e01 call_function + 401
53 .Python 0x106411597 _PyEval_EvalFrameDefault + 27511
54 .Python 0x10641609d fast_function + 381
55 .Python 0x106414e01 call_function + 401
56 .Python 0x106411597 _PyEval_EvalFrameDefault + 27511
57 .Python 0x10641588f _PyEval_EvalCodeWithName + 2447
58 .Python 0x10640a954 PyEval_EvalCode + 100
59 AirtestIDE 0x10325e0c9 0x10325c000 + 8393
60 AirtestIDE 0x10325e69a 0x10325c000 + 9882
61 AirtestIDE 0x10325cec4 0x10325c000 + 3780
| 2hard
|
Title: Need help on customize relay.Connection
Body: I have types like this and I want the `CollectionNode` to show photos with filter `is_public=1` not all photos
**types:**
```py
from strawberry_django_plus import gql
from strawberry_django_plus.gql import relay
@gql.django.type(Photo, filters=PhotoFilter, order=PhotoOrder)
class PhotoNode(relay.Node):
id: gql.auto
title: gql.auto
@classmethod
def get_queryset(cls, queryset: QuerySet[Photo], _) -> QuerySet[Photo]:
return queryset.filter(is_public=True)
@gql.django.type(Collection, filters=CollectionFilter, order=CollectionOrder)
class CollectionNode(relay.Node):
id: gql.auto
photos: relay.Connection["PhotoNode"] = relay.connection()
```
**schema :**
```py
@gql.type
class Query:
"""All available queries for this schema."""
# Photos
photo_list: relay.Connection[PhotoNode] = gql.django.connection(description='Return Photo connection with pagination information.')
# Collection
collection_list: relay.Connection[CollectionNode] = gql.django.connection(description='Return Collection connection with pagination information.')
schema = gql.Schema(
query=Query,
extensions=[
SchemaDirectiveExtension,
DjangoOptimizerExtension,
],
)
```
`photo_list` query is fine and shows public photos but `collection_list -> photos` not working and show all photos | 1medium
|
Title: Fix dynamic discovery error in e2e
Body: Actually fixes #51910 (I blame the reviewer of #51915, definitely not the author)
The helper function never identified dynamic discovery errors | 0easy
|
Title: [Feature Request] do not display multiple timezones as an option if they are identical
Body: I had a situation where the check's time zone was "Etc/UTC", and the browser's time zone was also UTC, but I had all 3 displayed:
UTC, Etc/UTC & Browser's time zone. This was very confusing. In reality, clicking any of these did not change anything because in fact they were representing the same time zone.

| 1medium
|
Title: Treating OpenAI responses exclusively as either content or function calls, but they can be both
Body: `OpenAIAgentModel.request_stream` makes an assumption about the response being exclusively text or tool calls, but a single OpenAI response can be both. A response that contains both triggers an uncaught exception in `OpenAIStreamTextResponse` (see below for examples).
Here we're assuming that a response exclusively contains `content` or `tool_calls`:
https://github.com/pydantic/pydantic-ai/blob/d595c084b2dbaa9e1b433bcfb0d7ba4af1be42c2/pydantic_ai_slim/pydantic_ai/models/openai.py#L199-L217
An OpenAI response can contain both. Here's a demonstration using OpenAI's client:
```python
from devtools import debug
from openai import OpenAI
client = OpenAI()
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"},
"unit": {"type": "string", "enum": ["c", "f"]},
},
"required": ["location", "unit"],
"additionalProperties": False,
},
},
}
]
completion = client.chat.completions.create(
model="gpt-4o",
messages=[
{
"role": "system",
"content": "Always tell the user what you are about to do",
},
{
"role": "user",
"content": "What 1+1 and what's the weather like in Paris today?",
},
],
tools=tools,
tool_choice="auto",
)
debug(completion)
```
Output:
```sh
completion: ChatCompletion(
id='chatcmpl-Ab71YFy1O6SNnnV6uDIBov3Rn8Sk0',
choices=[
Choice(
finish_reason='tool_calls',
index=0,
logprobs=None,
message=ChatCompletionMessage(
content=(
'1 + 1 equals 2. \n'
'\n'
'Now, I will look up the current weather in Paris.'
),
refusal=None,
role='assistant',
audio=None,
function_call=None,
tool_calls=[
ChatCompletionMessageToolCall(
id='call_Ubvr33St36ChbOUbLMQNy2Ot',
function=Function(
arguments='{"location":"Paris","unit":"c"}',
name='get_weather',
),
type='function',
),
],
),
),
],
created=1733408500,
model='gpt-4o-2024-08-06',
object='chat.completion',
service_tier=None,
system_fingerprint='fp_7f6be3efb0',
usage=CompletionUsage(
completion_tokens=40,
prompt_tokens=72,
total_tokens=112,
completion_tokens_details=CompletionTokensDetails(
accepted_prediction_tokens=0,
audio_tokens=0,
reasoning_tokens=0,
rejected_prediction_tokens=0,
),
prompt_tokens_details=PromptTokensDetails(
audio_tokens=0,
cached_tokens=0,
),
),
) (ChatCompletion)
```
The equivalent Pydantic AI example will raise an exception:
```python
from __future__ import annotations as _annotations
import asyncio
from typing import Any
from devtools import debug
from pydantic_ai import Agent
weather_agent = Agent(
"openai:gpt-4o",
system_prompt="Always tell the user what you are about to do",
)
@weather_agent.tool_plain
async def get_weather(location: str, unit: str) -> dict[str, Any]:
debug(location, unit)
return {"result": "123"}
async def main():
prompt = "What 1+1 and what's the weather like in Paris today?"
async with weather_agent.run_stream(prompt) as result:
async for text in result.stream(debounce_by=0.01):
print(text)
debug(result)
if __name__ == "__main__":
asyncio.run(main())
```
Output:
```sh
1
1 +
1 + 1 equals
1 + 1 equals 2
1 + 1 equals 2.
Now,
1 + 1 equals 2.
Now, I will
1 + 1 equals 2.
Now, I will get the
1 + 1 equals 2.
Now, I will get the weather information
1 + 1 equals 2.
Now, I will get the weather information for Paris
1 + 1 equals 2.
Now, I will get the weather information for Paris today.
Traceback (most recent call last):
File "/tmp/pydantic-ai/weather.py", line 30, in <module>
asyncio.run(main())
File "/Users/siavash/.local/share/uv/python/cpython-3.11.6-macos-aarch64-none/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/siavash/.local/share/uv/python/cpython-3.11.6-macos-aarch64-none/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/siavash/.local/share/uv/python/cpython-3.11.6-macos-aarch64-none/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/tmp/pydantic-ai/weather.py", line 24, in main
async for text in result.stream(debounce_by=0.01):
File "/tmp/pydantic-ai/.venv/lib/python3.11/site-packages/pydantic_ai/result.py", line 152, in stream
async for text in self.stream_text(debounce_by=debounce_by):
File "/tmp/pydantic-ai/.venv/lib/python3.11/site-packages/pydantic_ai/result.py", line 191, in stream_text
async for _ in group_iter:
File "/tmp/pydantic-ai/.venv/lib/python3.11/site-packages/pydantic_ai/_utils.py", line 198, in async_iter_groups
item = done.pop().result()
^^^^^^^^^^^^^^^^^^^
File "/tmp/pydantic-ai/.venv/lib/python3.11/site-packages/pydantic_ai/models/openai.py", line 286, in __anext__
assert choice.delta.content is not None, f'Expected delta with content, invalid chunk: {chunk!r}'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Expected delta with content, invalid chunk: ChatCompletionChunk(id='chatcmpl-Ab7Am8lOPLxVN1pySDvqpH16EHO38', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, refusal=None, role=None, tool_calls=[ChoiceDeltaToolCall(index=0, id='call_SObzzp2KWfVzNUruHE5f7e1T', function=ChoiceDeltaToolCallFunction(arguments='', name='get_weather'), type='function')]), finish_reason=None, index=0, logprobs=None)], created=1733409072, model='gpt-4o-2024-08-06', object='chat.completion.chunk', service_tier=None, system_fingerprint='fp_7f6be3efb0', usage=None)
``` | 2hard
|
Title: How to remove a component from nlp pipeline? or should I create(maybe load) nlp object with same statistical model for every different pipeline?
Body: I am a newbie for spacy...
| 0easy
|
Title: Deleting RoI
Body: <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: 0.8.18
- Python version: 3.9
- Operating System: Debian
### Description
If the user has drawn many ROIs, it works perfectly when you apply any kinda processing on the last drawn RoI, However, if you delete the last drawn RoI it will keep applying the processing on the RoI you deleted instead of using the previous RoI.
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
| 1medium
|
Title: Problem with multiple where conditions
Body: The following code always return empty result, but the table has records:
```
users = await User.query.where(or_(User.id==10,User.id==15)).gino.all()
users = await User.query.where(User.id>=10).where(User.id<=15)).gino.all()
users = await User.query.where(User.id.between(10,15)).gino.all()
``` | 1medium
|
Title: ValueError on first epoch
Body: - PyTorch-Forecasting version: 0.10.3
- PyTorch version: 1.13.0
- Python version: 3.9.12
- Operating System: Mac OS 12.6
### Expected behavior
I'm trying to do good ol' stock price prediction using TFT. I've followed the stallion example, changing the dataset configuration as required. I'm expecting the call to `trainer.fit` to work as it would normally...
### Actual behaviour
...but it fails on the first training iteration with the traceback included below. I'm almost certain this is a trivial configuration mistake. Please let me know what I'm forgetting to include.
### Code to reproduce the problem
**Example dataset: MSFT, GOOG, AMZN, AAPL, META- 512 day ticker history (excl. weekends)**

**Colab notebook here:**
[https://colab.research.google.com/drive/1lF6DWoje_qD0PZysNu6UHaoqK9nDstBH?usp=sharing](https://colab.research.google.com/drive/1lF6DWoje_qD0PZysNu6UHaoqK9nDstBH?usp=sharing)
**Traceback:**
```
ValueError Traceback (most recent call last)
Cell In [94], line 117
102 tft = TemporalFusionTransformer.from_dataset(
103 training,
104 learning_rate = 0.03,
(...)
112 reduce_on_plateau_patience = 4
113 )
115 tft.size()
--> 117 trainer.fit(
118 tft,
119 train_dataloaders = [train_dataloader],
120 val_dataloaders = [val_dataloader]
121 )
[SNIPPED]
File ~/opt/miniconda3/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py:378, in Strategy.training_step(self, *args, **kwargs)
376 with self.precision_plugin.train_step_context():
377 assert isinstance(self.model, TrainingStep)
--> 378 return self.model.training_step(*args, **kwargs)
File ~/opt/miniconda3/lib/python3.9/site-packages/pytorch_forecasting/models/base_model.py:410, in BaseModel.training_step(self, batch, batch_idx)
406 def training_step(self, batch, batch_idx):
407 """
408 Train on batch.
409 """
--> 410 x, y = batch
411 log, out = self.step(x, y, batch_idx)
412 return log
ValueError: not enough values to unpack (expected 2, got 1)
```
| 1medium
|
Title: [Bug]: 4.0下如果数据集过多生成F0会导致爆显存
Body: ### 系统平台版本号
Windows11 22H2
### GPU 型号
NVIDIA GeForce RTX 3060
### Python版本
3.8.17
### PyTorch版本
2.0.1+cu118
### sovits分支
4.0(默认)
### 数据集来源(用于判断数据集质量)
UVR处理过的人物干声
### 出现问题的环节或执行的命令
python preprocess_hubert_f0.py
### 情况描述
在4.0中,如果数据集过多时(约1.6GB,1178条音频)生成f0就会爆显存,并且貌似没有什么解决方法
### 日志
```python
https://pastebin.com/1gh6SaUf
```
### 补充说明
_No response_ | 2hard
|
Title: Notebooks with only markdown cell edits don't get republished
Body: ### Describe the bug
Using `jupyter-cache`, if I edit only markdown cells in a notebook, the notebook is not re-executed (which is fine), but nor is the target HTML document updated with the edited markdown content.
### Reproduce the bug
Edit just the markdown cells in a notebook and then rebuild with `jupyter book build .`. The content is not updated in the final Jupyter Book.
### List your environment
jupyter book --version
Jupyter Book : 0.12.2
External ToC : 0.2.3
MyST-Parser : 0.15.2
MyST-NB : 0.13.2
Sphinx Book Theme : 0.1.5
Jupyter-Cache : 0.4.3
NbClient : 0.5.1 | 1medium
|
Title: Understanding Paragraph Extraction
Body: I am trying to understand how I would use Trafilatura to extract text, but by generically preserving paragraph information. Is this possible through the library?
I currently do this.
```
parsed = tra.bare_extraction(
downloaded,
include_comments=False,
)
```
Which does a very decent job of the extraction... the text is returned but it is all in one string. Is there a way to return chunks of sentences that return paragraphs?
Thank you for your time | 1medium
|
Title: How to set the window size...I'm a novice
Body: | 0easy
|
Title: different implementations of face_align.py (mode=arcface) results in different results
Body: In this repo, there are two implementations of face_align.py
https://github.com/deepinsight/insightface/blob/master/python-package/insightface/utils/face_align.py
https://github.com/deepinsight/insightface/blob/master/web-demos/src_recognition/face_align.py
By modifying this example,
https://github.com/deepinsight/insightface/blob/master/examples/demo_analysis.py
I got the following results using buffalo_l model pack.
det_size was set to 320 in both cases.
and image_size (used in estimate_norm of face_align.py) was set to 112, 224 and 256 respectively.
(after alignment, the results were resized to 256x256 for comparison)
Upper row : the results of python-package face_align.py
Lower row : that of web-demos face_align.py

In mode='arcface'
When image_size was 112 and 224, there were no differences.
However, if 256 is used, face_align.py of python-package yields a different result.
It seems that whether "diff_x" is used or not affects the result.
In 256 case, face_align.py of web-demos has the correct result... right?
In mode='None'
All of the results are same.

One more question,
In which configuration, recognition Model of antelopev2 and buffalo_l were trained?
I mean the hyperparameters to get the aligned training set (image_size of estimate_norm, det_size .. etc)
| 1medium
|
Title: Configure codecov to not comment on PRs until all coverage reports are in
Body: This will stop codecov prematurely reporting a drop in coverage. Should be doable via the `after_n_builds` config option ([doc](https://docs.codecov.com/docs/notifications#preventing-notifications-until-after-n-builds)) | 0easy
|
Title: fix node does not register
Body:
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
-->
**What type of PR is this?**
/kind bug
**What this PR does / why we need it**:
fix node does not register
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
_If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`_*
-->
related issue: https://github.com/kubernetes/kubernetes/issues/9085
Fixes #71665
**Special notes for your reviewer**:
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
-->
```release-note
NONE
```
**Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.**:
<!--
This section can be blank if this pull request does not require a release note.
Please use the following format for linking documentation or pass the
section below:
- [KEP]: <link>
- [Usage]: <link>
- [Other doc]: <link>
-->
```docs
```
| 1medium
|
Title: [Feature]: Add option to translate warning `Tried to log to step %d that is less than the current step` to an error
Body: ### Description
We introduced a bug that logged one step into the future, and ended up losing quite a bit of useful logging data, e.g.
```
wandb: WARNING Tried to log to step 3 that is less than the current step 4. Steps must be monotonically increasing, so this data will be ignored. See https://wandb.me/define-metric to log data out of order.
```
which happens here:
https://github.com/wandb/wandb/blob/9153d0ecfa5e82a706618242a567500e7fe68e25/core/internal/stream/handler.go#L1009-L1016
### Suggested Solution
Is there a way to make configure this into a fatal error, such that we fail fast rather than needing to monitor stdout?
Or is there a way to "listen" to the log messages for WandB so we can fail fast ourselves? | 1medium
|
Title: How to retrain the GLIP model on the Object365 dataset
Body: How to retrain the GLIP model on the Object365 dataset?
Since I made some modifications to the GLIP model, I need to perform some pre-training again to improve performance. I replaced `_base_ = [../_base_/datasets/coco_detection.py]` with `_base_ = [../_base_/datasets/objects365v1_detection.py]` in `glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py` to train on Object365. Is this correct? Are any additional steps required? | 1medium
|
Title: 如何发送本地音乐文件
Body: 我用
```
itchat.send(msg='@fil@{}'.format('童话.mp3'), toUserName='filehelper')
```
没有成功发送,也没有报错信息。
ps. 文本文档可以正常发送 | 1medium
|
Title: wtf this error
Body: Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
AssertionError: ""
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 475, in seperate
File "separate.py", line 619, in demix_demucs
File "demucs\apply.py", line 185, in apply_model
File "demucs\apply.py", line 211, in apply_model
File "demucs\apply.py", line 245, in apply_model
File "demucs\utils.py", line 490, in result
File "demucs\apply.py", line 260, in apply_model
File "torch\nn\modules\module.py", line 1194, in _call_impl
File "demucs\hdemucs.py", line 691, in forward
File "demucs\hdemucs.py", line 602, in _spec
File "demucs\hdemucs.py", line 36, in pad1d
"
Error Time Stamp [2023-09-23 01:16:02]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: 2
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | 2hard
|
Title: Doc2vec corpus_file mode skips some documents during training
Body: ### Problem description
During training of Doc2Vec on corpusfile, some documents are skipped. I think it is because of the way how corpusfile is partitioned. Some lines are processed by two or more workers while some are not processed at all. This behavior could be acceptable for Word2Vec and FasText as the same word occurs several times in different lines. But that is not the case with Doc2Vec where each document corresponds to exactly one line and if that line is skipped, corresponding document vector will not be trained.
### Steps to reproduce
documents.txt
```
a very long document with huge number of words in it
several
short
documents
```
script.py
```
from gensim.models import Doc2Vec
import copy
offsets, start_lines = Doc2Vec._get_offsets_and_start_doctags_for_corpusfile('documents.txt', 2)
print("Offsets for workers: ", offsets)
model = Doc2Vec(sample=0, workers=2, min_count=1, vector_size=5, seed=1)
model.build_vocab(corpus_file='documents.txt')
old_vectors = copy.copy(model.docvecs.vectors_docs)
model.train(corpus_file='documents.txt', total_examples=model.corpus_count,
total_words=model.corpus_total_words, epochs=10)
new_vectors = copy.copy(model.docvecs.vectors_docs)
for i in range(len(old_vectors)):
if all(old_vectors[i] == new_vectors[i]):
print("vector {} did not change".format(i))
else:
print("vector {} changed".format(i))
```
output
```
Offsets for workers: [0, 0]
vector 0 changed
vector 1 did not change
vector 2 did not change
vector 3 did not change
```
| 1medium
|
Title: Users can see and assign automatisch Policies from Users/Sites/Clients they do not belong to.
Body: I would expect that Automation-Policies are limited the same way as Policy-Overview is?
You should not be able to see Policies created from Users who belong only to specific Clients or Sites.
Also it should not be possible to assign them.
 | 1medium
|
Title: dashboard cpu usage too high using dev branch
Body: try switching to dev branch and rebuilding the dashboard,
cpu usage is too high, It's even worse after opening xray config settings or other dialogs. | 1medium
|
Title: Device Plugin failure handling in kubelet is racy
Body: As of now Kubelet removes a resource owned by device plugins as soon as it notices that a plugin has failed (plugin socket is removed, or ListWatch fails). This behavior is undesirable because trivial plugin restarts would result in Node Capacity changes which is most likely going to be monitored in production clusters resulting in alerts.
As discussed in https://github.com/kubernetes/kubernetes/issues/53395, we should have kubelet reduce the capacity and allocatable to `0` if a device plugin has failed and then after a timeout remove the resource completely from the Node object.
This timeout ensures that random restarts do not raise alerts for production cluster admins, while continue failure does raise an alert. It also let's us track scenarios where all devices fail while the device plugin stays healthy.
Similarly, during startup, kubelet should wait to known plugins to re-register prior to declaring a plugin to be unavailable.
I'd ideally expect this bug to be fixed as part of beta.
@vikaschoudhary16 @RenaudWasTaken @jiayingz @kubernetes/sig-node-bugs @ConnorDoyle | 2hard
|
Title: 只有CPU,并且无法正常使用的看这里
Body: 本人使用的预训练文件: @miven 也就是这位up的 https://www.bilibili.com/video/BV1uh411B7AD/
使用main分支下载的zip文件,解压配置好预训练文件后运行出现的问题包括 : 2秒杂音, 无法编码(在98%后卡住)
如果你出现的问题和我一样,可以尝试和我一样步骤
首先是这个 https://github.com/babysor/MockingBird/issues/209
将文件 `synthesizer/hparams.py` 中的: `use_gst` 和 ` use_ser_for_gst` 均设置为 `False`
然后是这个 https://github.com/babysor/MockingBird/issues/37
将文件`synthesizer/utils/symbols.py` 中的
```python
_characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890!\'(),-.:;? '
```
改为
```python
_characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz12340!\'(),-.:;? '
```
由于我是直接使用 `python .\demo_toolbox.py --cpu ` 命令直接运行的
所以我没有测试 https://github.com/babysor/MockingBird/issues/242 这个是否有效 | 0easy
|
Title: Docs - porting from flask-jwt
Body: Thanks for the hard work.
Looking though the docs it appears this extension is similar to flask-jwt but not identical. Would be nice to have a section in the docs on how to port to the newer library and any associated concerns there might be. So far I've noticed:
- JWT --> JWTManager(app)
- Need to create an auth view method
- Config:
- JWT_AUTH_HEADER_PREFIX --> JWT_HEADER_TYPE
- JWT_EXPIRATION_DELTA --> JWT_ACCESS_TOKEN_EXPIRES
- No leeway
Is that a good start? | 1medium
|
Title: Update after terminal state
Body: I think there's a little bug in many of your scripts in that you update the returns for the last step with a post-terminal step. Thus, your value (policy) functions wind up growing (unbounded?) near the terminal state. For example, in rl2/mountaincar you have a "train" boolean but it is never set to false for the last step. | 1medium
|
Title: htx exchange and VolatilityFilter
Body: <!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: Windows Server 2022
* Python Version: current miniconda (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: 2024.10 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
With the htx exchange (dryrun), using a simple VolumePairList of 250 assets (or a different number), I do not have any pairs in the whitelist, as I add the VolatilityFilter, even when I set it to max_volatility 0.99, with no minimum. Does htx not send any volatility information, or can I set a volatility range in a different way? If you think, you need to test it, just do it as simple, as I have described it: A VolumePairlist with 250 assets, and the above very simple volatility rule. No other restrictions. I look forward to your feedback.
*Ask the question you have not been able to find an answer in the [Documentation](https://www.freqtrade.io/en/latest/)*
| 1medium
|
Title: 请教一下,怎么做Self-instruction?非常感谢
Body: | 3misc
|
Title: ABOUT SRC_TOKENS
Body: In TransformerEncoderBase class, it's forward() function has a parameter 'src_tokens': tokens in the source language of shape `(batch, src_len)`.
It's a tensor of indexes, suppoes that:
[ [10, 52, 138, ....],
[53, 108, 52, ....],
...............
[28, 82, 106, ....] ]
How can i get the word in the raw input text that corresponds to each index?
Suppose that:
[ ['I', 'want', 'to', ...],
['Today', 'I', 'have',...],
...........................
['this', 'movie', 'is', ...] ]
Thank you very much!
<img width="511" alt="image" src="https://github.com/facebookresearch/fairseq/assets/105722903/fd2db0d8-a565-4711-9af4-c6ac7ba0473a">
| 0easy
|
Title: Automated cherry pick of #101084: Updating EndpointSlice validation to match Endpoints
Body: Cherry pick of #101084 on release-1.18.
#101084: Updating EndpointSlice validation to match Endpoints
For details on the cherry pick process, see the [cherry pick requests](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md) page. | 1medium
|
Title: [FEATURE-REQUEST] Getting dtype of columns as they are when rendered in a pandas dataframe?
Body: **Description**
Hello, I would like to get the dtype of the columns as they are when the vaex dataframe is turned into a pandas dataframe.
Basically, vaex is using its own dtype.
```python
import vaex
df=vaex.from_items(("a",[1,2,3]),("b",[1.1, 2.1, 3.1]))
type(df.dtypes['a'])
Out[64]: vaex.datatype.DataType
```
But,
```python
type(vf[:1].to_pandas_df().dtypes.to_dict()['a'])
Out[65]: numpy.dtype[int64]
```
Please, is there any way to get the result of the 2nd method without having vaex to compute a row? (in example above, I am making vaex computing the 1st row) If it is possible, I would like to prevent it, because I am using this information in a setup step.
Thanks for your help! | 1medium
|
Title: [Bug] Why is the model not stopping and not producing any output? pls check screenshot.
Body: ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
here is my starting log :
```
python -m sglang.launch_server --model-path /home/drc-whlab/james/Qwen2___5-32B-Instruct-GPTQ-Int4 --host 0.0.0.0 --tp 2 --host 0.0.0.0 --port 7777 --max-running-requests 5 --dtype half --trust-remote-code --context-length 8192
INFO 03-22 08:17:44 __init__.py:190] Automatically detected platform cuda.
[2025-03-22 08:17:46] server_args=ServerArgs(model_path='/home/drc-whlab/james/Qwen2___5-32B-Instruct-GPTQ-Int4', tokenizer_path='/home/drc-whlab/james/Qwen2___5-32B-Instruct-GPTQ-Int4', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=True, dtype='half', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=8192, device='cuda', served_model_name='/home/drc-whlab/james/Qwen2___5-32B-Instruct-GPTQ-Int4', chat_template=None, is_embedding=False, revision=None, host='0.0.0.0', port=7777, mem_fraction_static=0.87, max_running_requests=5, max_total_tokens=None, chunked_prefill_size=2048, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, tp_size=2, stream_interval=1, stream_output=False, random_seed=678263484, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=False, decode_log_interval=40, api_key=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend='flashinfer', sampling_backend='flashinfer', grammar_backend='outlines', speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=5, speculative_eagle_topk=4, speculative_num_draft_tokens=8, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_cuda_graph=False, disable_cuda_graph_padding=False, enable_nccl_nvls=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=8, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False, enable_flashinfer_mla=False, flashinfer_mla_disable_ragged=False, warmups=None, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False)
[2025-03-22 08:17:46] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 03-22 08:17:49 __init__.py:190] Automatically detected platform cuda.
INFO 03-22 08:17:49 __init__.py:190] Automatically detected platform cuda.
INFO 03-22 08:17:49 __init__.py:190] Automatically detected platform cuda.
[2025-03-22 08:17:51 TP1] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
[2025-03-22 08:17:51 TP0] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
[2025-03-22 08:17:51 TP1] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
[2025-03-22 08:17:51 TP1] Init torch distributed begin.
[2025-03-22 08:17:51 TP0] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
[2025-03-22 08:17:51 TP0] Init torch distributed begin.
[2025-03-22 08:17:52 TP0] sglang is using nccl==2.21.5
[2025-03-22 08:17:52 TP1] sglang is using nccl==2.21.5
[2025-03-22 08:17:52 TP0] Init torch distributed ends. mem usage=0.13 GB
[2025-03-22 08:17:52 TP1] Init torch distributed ends. mem usage=0.13 GB
[2025-03-22 08:17:52 TP0] Load weight begin. avail mem=15.31 GB
[2025-03-22 08:17:52 TP1] Load weight begin. avail mem=15.29 GB
[2025-03-22 08:17:52 TP0] Compute capability below sm80. Use float16 due to lack of bfloat16 support.
[2025-03-22 08:17:52 TP1] Compute capability below sm80. Use float16 due to lack of bfloat16 support.
[2025-03-22 08:17:52 TP0] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-22 08:17:52 TP1] The following error message 'operation scheduled before its operands' can be ignored.
Loading safetensors checkpoint shards: 0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 20% Completed | 1/5 [00:00<00:02, 1.77it/s]
Loading safetensors checkpoint shards: 40% Completed | 2/5 [00:01<00:01, 1.65it/s]
Loading safetensors checkpoint shards: 60% Completed | 3/5 [00:02<00:01, 1.41it/s]
Loading safetensors checkpoint shards: 80% Completed | 4/5 [00:02<00:00, 1.30it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:03<00:00, 1.25it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:03<00:00, 1.33it/s]
[2025-03-22 08:17:57 TP0] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=6.12 GB, mem usage=9.19 GB.
[2025-03-22 08:17:57 TP1] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=6.10 GB, mem usage=9.19 GB.
[2025-03-22 08:17:57 TP0] KV Cache is allocated. #tokens: 33676, K size: 2.06 GB, V size: 2.06 GB
[2025-03-22 08:17:57 TP1] KV Cache is allocated. #tokens: 33676, K size: 2.06 GB, V size: 2.06 GB
[2025-03-22 08:17:57 TP0] Memory pool end. avail mem=1.87 GB
[2025-03-22 08:17:57 TP1] Memory pool end. avail mem=1.85 GB
[2025-03-22 08:17:57 TP0] Capture cuda graph begin. This can take up to several minutes. avail mem=1.25 GB
[2025-03-22 08:17:57 TP1] Capture cuda graph begin. This can take up to several minutes. avail mem=1.24 GB
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00, 1.54it/s]
[2025-03-22 08:18:00 TP1] Registering 645 cuda graph addresses
[2025-03-22 08:18:00 TP0] Registering 645 cuda graph addresses
[2025-03-22 08:18:00 TP1] Capture cuda graph end. Time elapsed: 3.25 s. avail mem=0.82 GB. mem usage=0.42 GB.
[2025-03-22 08:18:00 TP0] Capture cuda graph end. Time elapsed: 3.25 s. avail mem=0.84 GB. mem usage=0.42 GB.
[2025-03-22 08:18:01 TP0] max_total_num_tokens=33676, chunked_prefill_size=2048, max_prefill_tokens=16384, max_running_requests=5, context_len=8192
[2025-03-22 08:18:01 TP1] max_total_num_tokens=33676, chunked_prefill_size=2048, max_prefill_tokens=16384, max_running_requests=5, context_len=8192
[2025-03-22 08:18:01] INFO: Started server process [2560610]
[2025-03-22 08:18:01] INFO: Waiting for application startup.
[2025-03-22 08:18:01] INFO: Application startup complete.
[2025-03-22 08:18:01] INFO: Uvicorn running on http://0.0.0.0:7777 (Press CTRL+C to quit)
[2025-03-22 08:18:02] INFO: 127.0.0.1:36748 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-03-22 08:18:02 TP0] Prefill batch. #new-seq: 1, #new-token: 6, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-22 08:18:05] INFO: 127.0.0.1:36764 - "POST /generate HTTP/1.1" 200 OK
[2025-03-22 08:18:05] The server is fired up and ready to roll!
```
### Reproduction

### Environment
```
pip list
Package Version
--------------------------------- -------------------
aiohappyeyeballs 2.6.1
aiohttp 3.11.14
aiohttp-cors 0.8.0
aiosignal 1.3.2
airportsdata 20250224
annotated-types 0.7.0
anthropic 0.49.0
anyio 4.9.0
astor 0.8.1
asttokens 3.0.0
attrs 25.3.0
blake3 1.0.4
cachetools 5.5.2
certifi 2025.1.31
charset-normalizer 3.4.1
click 8.1.8
cloudpickle 3.1.1
colorful 0.5.6
compressed-tensors 0.9.1
cuda-bindings 12.8.0
cuda-python 12.8.0
datasets 3.4.1
decorator 5.2.1
decord 0.6.0
depyf 0.18.0
dill 0.3.8
diskcache 5.6.3
distlib 0.3.9
distro 1.9.0
einops 0.8.1
executing 2.2.0
fastapi 0.115.11
filelock 3.18.0
flashinfer-python 0.2.3+cu124torch2.5
frozenlist 1.5.0
fsspec 2024.12.0
gguf 0.10.0
google-api-core 2.24.2
google-auth 2.38.0
googleapis-common-protos 1.69.2
grpcio 1.71.0
h11 0.14.0
hf_transfer 0.1.9
httpcore 1.0.7
httptools 0.6.4
httpx 0.28.1
huggingface-hub 0.29.3
idna 3.10
importlib_metadata 8.6.1
interegular 0.3.3
ipython 9.0.2
ipython_pygments_lexers 1.1.1
jedi 0.19.2
Jinja2 3.1.6
jiter 0.9.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
lark 1.2.2
litellm 1.63.12
llguidance 0.7.5
lm-format-enforcer 0.10.11
MarkupSafe 3.0.2
matplotlib-inline 0.1.7
mistral_common 1.5.4
modelscope 1.24.0
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.19.0
multidict 6.2.0
multiprocess 0.70.16
nest-asyncio 1.6.0
networkx 3.4.2
ninja 1.11.1.3
numpy 1.26.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-ml-py 12.570.86
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
openai 1.67.0
opencensus 0.11.4
opencensus-context 0.1.3
opencv-python-headless 4.11.0.86
orjson 3.10.15
outlines 0.1.11
outlines_core 0.1.26
packaging 24.2
pandas 2.2.3
parso 0.8.4
partial-json-parser 0.2.1.1.post5
pexpect 4.9.0
pillow 11.1.0
pip 25.0.1
platformdirs 4.3.7
prometheus_client 0.21.1
prometheus-fastapi-instrumentator 7.1.0
prompt_toolkit 3.0.50
propcache 0.3.0
proto-plus 1.26.1
protobuf 6.30.1
psutil 7.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
py-cpuinfo 9.0.0
py-spy 0.4.0
pyarrow 19.0.1
pyasn1 0.6.1
pyasn1_modules 0.4.1
pycountry 24.6.1
pydantic 2.10.6
pydantic_core 2.27.2
Pygments 2.19.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.20
pytz 2025.1
PyYAML 6.0.2
pyzmq 26.3.0
ray 2.44.0
referencing 0.36.2
regex 2024.11.6
requests 2.32.3
rpds-py 0.23.1
rsa 4.9
safetensors 0.5.3
sentencepiece 0.2.0
setproctitle 1.3.5
setuptools 75.8.0
sgl-kernel 0.0.5
sglang 0.4.4
six 1.17.0
smart-open 7.1.0
sniffio 1.3.1
stack-data 0.6.3
starlette 0.46.1
sympy 1.13.1
tiktoken 0.9.0
tokenizers 0.21.1
torch 2.5.1
torchao 0.9.0
torchaudio 2.5.1
torchvision 0.20.1
tqdm 4.67.1
traitlets 5.14.3
transformers 4.48.3
triton 3.1.0
typing_extensions 4.12.2
tzdata 2025.1
urllib3 2.3.0
uv 0.6.9
uvicorn 0.34.0
uvloop 0.21.0
virtualenv 20.29.3
vllm 0.7.2
watchfiles 1.0.4
wcwidth 0.2.13
websockets 15.0.1
wheel 0.45.1
wrapt 1.17.2
xformers 0.0.28.post3
xgrammar 0.1.15
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0
``` | 2hard
|
Title: Running argument in a callback breaks when provided with a dictionary as an component_id instead of a string
Body: Dash version - 2.6.1
This breaks:
running=[(Output({"type": "dwn_panel_download_btn", "page": "dp_wiki"}, "disabled"), True, False)]
This works:
running=[(Output("dwn_panel_download_btn", "disabled"), True, False)]
**Screenshots**

| 1medium
|
Title: ONNX format is running too slowly on both GPU and CPU
Body: ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
I installed torch version 2.2.1+cu121, onnx 1.16, and onnxruntime-gpu
Then, I exported the model using this command

and loaded it into a C++ code using OpenCV, but the inference is too slow. When I printed the time taken in inference it was the same for CPU and GPU.
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | 2hard
|
Title: Updating config at runtime
Body: I have a use case where I need to be able to dynamically "install/uninstall" apps at runtime. I guess this would entail closing the connection and re-init with the new config. However, the issue with this approach is that this would cause connections to all databases to close? Is there a way to update the config/apps for just a particular connection without affecting other database connections? | 1medium
|
Title: [REQUEST] Add support for 3-digits HEX colors
Body: Consider posting in https://github.com/textualize/rich/discussions for feedback before raising a feature request.
> Have you checked the issues for a similar suggestions?
Yes
> **How would you improve Rich?**
Adding support for 3 digit colors.
> Give as much detail as you can. Example code of how you would like it to work would help.
<img src="https://github.com/Textualize/rich/assets/30776937/35f5b20c-59a0-4bc7-92b4-ac4dca7da735" width=37.5%>
---
> **What problem does it solve for you?**
Writing all the letters. 🙂
> What problem do you have that this feature would solve? I may be able to suggest an existing way of solving it.
If color contains 3 digits, repeat each letter, thus expanding the color code.
```py
def color_expand(color: str) -> str:
if color.startswith("#"):
if len(color) == 4: # "#xyz"
return "#{0}".format("".join(c * 2 for c in list(color)[1:])) # skip "#"
else:
return color
else:
return color
``` | 0easy
|
Title: Include source distribution
Body: According to https://pypi.org/project/flask-rest-api/#files this library currently only provides wheels to PyPI. Best practice is to upload source distributions as well. Fixing should just be a matter of running `python setup.py sdist upload`.
Thanks for maintaining flask-rest-api, hope to try it soon! (Would have tried it already if not for this issue, which makes it harder for me to use it on some internal infrastructure.) | 0easy
|
Title: clean_figure() error for plot() line outside the box
Body: ```python
import matplotlib.pyplot as plt
import tikzplotlib
plt.xlim(0, 1)
plt.plot([5, 6], [2, 3])
tikzplotlib.clean_figure()
```
Error:
```
IndexError: index 0 is out of bounds for axis 0 with size 0
```
At least for `scatter` instead of `plot` it doesn't give the error. | 1medium
|
Title: Automated cherry pick of #96716: Bump node-problem-detector to v0.8.5
Body: Cherry pick of #96716 on release-1.18.
#96716: Bump node-problem-detector to v0.8.5
For details on the cherry pick process, see the [cherry pick requests](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md) page. | 0easy
|
Title: AddMetaPaths may cause memory leak issue
Body: ### 🐛 Describe the bug
Hi! I am studying the adversarial attack on heterogeneous graph neural networks (e.g. HAN).
Normally, we call the AddMetaPaths function once to train HAN.
```
import torch_geometric.transforms as T
from torch_geometric.datasets import DBLP
path = osp.join(osp.dirname(osp.realpath(__file__)), './data/DBLP')
# APA, APCPA, APTPA
metapaths = [[('author', 'paper'), ('paper', 'author')],
[('author', 'paper'), ('paper', 'conference'), ('conference', 'paper'), ('paper', 'author')],
[('author', 'paper'), ('paper', 'term'), ('term', 'paper'), ('paper', 'author')]]
transform = T.AddMetaPaths(metapaths=metapaths, drop_orig_edge_types=True, drop_unconnected_node_types=True)
dataset = DBLP(path, transform=transform)
data = dataset[0]
```
The training process and model are similar to the example in examples/hetero/han_imdb.py, and it is no problem.
Then, I want to change the graph and perform targeted attacks on HAN. Hence, we need to call AddMetaPaths many times.
```
for id in target_ids:
mod_data = attack(ori_data)
transform = T.AddMetaPaths(metapaths=metapaths, drop_orig_edge_types=True, drop_unconnected_node_types=True)
new_metadata = transform(mod_data)
gnn(new_metadata.x_dict, new_metadata.edge_index_dict)
eval()
```
The process gets killed. Using top and memory_profiler, I found that as the number of iterations increases, `new_metadata = transform(mod_data)` consumes a significant amount of memory. I tried `del mod_data` or `del new_metadata` at the end of each iteration, but the problem still exists. The following code reproduces the issue without the attack:
```
import os.path as osp
import torch_geometric.transforms as T
from torch_geometric.datasets import DBLP
path = osp.join(osp.dirname(osp.realpath(__file__)), './data/DBLP')
# APA, APCPA, APTPA
metapaths = [[('author', 'paper'), ('paper', 'author')],
[('author', 'paper'), ('paper', 'conference'), ('conference', 'paper'), ('paper', 'author')],
[('author', 'paper'), ('paper', 'term'), ('term', 'paper'), ('paper', 'author')]]
transform = T.AddMetaPaths(metapaths=metapaths, drop_orig_edge_types=True, drop_unconnected_node_types=True)
dataset = DBLP(path, transform=None)
oridata = dataset[0]
for idx in range(2000):
data = transform(oridata)
```
It seems that `mod_data` itself is not the cause. If I need to call AddMetaPaths in each iteration, what is the right way to avoid the memory leak? The environment is torch_geometric 2.5.2 and torch 2.1.2.
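For reference, a mitigation sketch I am experimenting with (the shallow copy and the explicit `gc.collect()` are guesses, not a verified fix):
```
import copy
import gc

transform = T.AddMetaPaths(metapaths=metapaths, drop_orig_edge_types=True,
                           drop_unconnected_node_types=True)
for idx in range(2000):
    # Transform a throwaway shallow copy so nothing extra stays attached
    # to the original graph across iterations.
    data = transform(copy.copy(oridata))
    # ... use `data` here ...
    del data
    if idx % 100 == 0:
        gc.collect()  # break possible reference cycles
```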
### Versions
Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.0.76
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.125.06
cuDNN version: Probably one of the following:
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 877.503
CPU max MHz: 3300.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 1.1 MiB
L1i cache: 768 KiB
L2 cache: 30 MiB
L3 cache: 36 MiB
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.1.2
[pip3] torch-cluster==1.6.3+pt21cu118
[pip3] torch_geometric==2.5.2
[pip3] torch-scatter==2.1.2+pt21cu118
[pip3] torch-sparse==0.6.18+pt21cu118
[pip3] torch-spline-conv==1.2.2+pt21cu118
[pip3] torchaudio==2.1.2
[pip3] torchvision==0.16.2
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py38h5eee18b_1
[conda] mkl_fft 1.3.8 py38h5eee18b_0
[conda] mkl_random 1.2.4 py38hdb19cb5_0
[conda] numpy 1.24.3 py38hf6e8229_1
[conda] numpy-base 1.24.3 py38h060ed82_1
[conda] pyg 2.5.2 py38_torch_2.1.0_cu118 pyg
[conda] pytorch 2.1.2 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-cluster 1.6.3+pt21cu118 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt21cu118 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt21cu118 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt21cu118 pypi_0 pypi
[conda] torchaudio 2.1.2 py38_cu118 pytorch
[conda] torchtriton 2.1.0 py38 pytorch
[conda] torchvision 0.16.2 py38_cu118 pytorch | 2hard
|
Title: Pandas DataFrame to Hadoop using Hive and Impala uses incorrect data type
Body: # In short, the issue is that SQLAlchemy seems to be trying to create the following, and the error is that type TEXT should be type STRING or VARCHAR
CREATE TABLE test_pandas_to_hive (
index BIGINT,
name TEXT
)
# Code starts here
import pandas as pd
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *
# Connection to Impala Hadoop
engine = create_engine(
'impala://<my_org_url>',
connect_args={'port': <my_org_port>,
'auth_mechanism': 'GSSAPI',
'database': 'my_db_scratch_space'},
)
# Simple dataset
df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']})
# Now write to Hadoop
df.to_sql('test_pandas_to_hive', con=engine)
# I get the following error; SQLAlchemy seems to be trying to create a table in Hadoop, but the data type is wrong.
# It is trying to create a column of data type 'text', which doesn't exist in Impala;
# it should be of data type 'string' or 'varchar' instead.
---------------------------------------------------------------------------
HiveServer2Error Traceback (most recent call last)
~/.conda/envs/Python3venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args)
1243 self.dialect.do_execute(
-> 1244 cursor, statement, parameters, context
1245 )
...
~/.conda/envs/Python3venv/lib/python3.6/site-packages/impala/hiveserver2.py in err_if_rpc_not_ok(resp)
702 resp.status.statusCode != TStatusCode.SUCCESS_WITH_INFO_STATUS and
703 resp.status.statusCode != TStatusCode.STILL_EXECUTING_STATUS):
--> 704 raise HiveServer2Error(resp.status.errorMessage)
705
706
DBAPIError: (impala.error.HiveServer2Error) AnalysisException: Syntax error in line 4:
name TEXT
^
Encountered: IDENTIFIER
Expected: ARRAY, BIGINT, BINARY, BOOLEAN, CHAR, DATE, DATETIME, DECIMAL, REAL, FLOAT, INTEGER, MAP, SMALLINT, STRING, STRUCT, TIMESTAMP, TINYINT, VARCHAR
CAUSED BY: Exception: Syntax error
[SQL:
CREATE TABLE test_pandas_to_hive (
index BIGINT,
name TEXT
)
]
(Background on this error at: http://sqlalche.me/e/dbapi)
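# A possible workaround until the dialect is fixed: pass an explicit SQLAlchemy
# type for string columns via to_sql's dtype argument, so VARCHAR is emitted
# instead of TEXT (a sketch; untested against Impala)
import sqlalchemy
df.to_sql('test_pandas_to_hive', con=engine, index=False,
          dtype={'name': sqlalchemy.types.String(64)})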
| 1medium
|
Title: Bug when calling GLMInfluence(mdl).cooks_distance with V0.14.4
Body: #### Describe the bug
When calculating Cook's distance on GLMs, statsmodels throws an error with this message:
`'GLMResults' object has no attribute 'get_hat_matrix'`
The problem is in line 1358 of `statsmodels/stats/outliers_influence.py`. The line reads
```
return self.results.get_hat_matrix()
```
But it should read
```
return self.results.get_hat_matrix_diag()
```
I updated that line, and everything now works smoothly.
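In the meantime, a monkey-patch avoids editing site-packages (a sketch; it simply aliases the method that the buggy line expects):
```
from statsmodels.genmod.generalized_linear_model import GLMResults

# Make the buggy call to results.get_hat_matrix() resolve to the
# diagonal version that GLMResults actually provides.
if not hasattr(GLMResults, "get_hat_matrix"):
    GLMResults.get_hat_matrix = GLMResults.get_hat_matrix_diag
```
After either fix, `GLMInfluence(mdl).cooks_distance` returns normally.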
#### Expected Output
The Cook's distance instead of an error message.
#### Output of ``import statsmodels.api as sm; sm.show_versions()``
<details>
INSTALLED VERSIONS
------------------
Python: 3.11.7.final.0
statsmodels
===========
Installed: 0.14.4
Required Dependencies
=====================
cython: Not installed
numpy: 2.1.2
scipy: 1.14.1
pandas: 2.2.3
dateutil: 2.9.0.post0
patsy: 0.5.6
Optional Dependencies
=====================
matplotlib: 3.9.2
backend: module://matplotlib_inline.backend_inline
cvxopt: Not installed
joblib: Not installed
Developer Tools
================
IPython: 8.28.0
jinja2: 3.1.4
sphinx: Not installed
pygments: 2.18.0
pytest: Not installed
virtualenv: Not installed
</details> | 0easy
|
Title: pure keyword_only_args without var_position_args is not handled correctly in libdoc
Body: an interface like this:
```python
@keyword(tags=("Getter", "BrowserControl", "Assertion"))
def get_console_log(
self,
assertion_operator: Optional[AssertionOperator] = None,
assertion_expected: Optional[Any] = None,
message: Optional[str] = None,
*,
full: bool = False,
last: Union[int, timedelta, None] = None,
) -> Dict:
...
```
The keyword-only arguments are not recognised as named-only; `full` and `last` come out with `"kind": "POSITIONAL_OR_NAMED"`:
```json
{
"name": "Get Console Log",
"args": [
{
"name": "assertion_operator",
"types": [
"AssertionOperator",
"None"
],
"typedocs": {
"AssertionOperator": "AssertionOperator",
"None": "None"
},
"defaultValue": "None",
"kind": "POSITIONAL_OR_NAMED",
"required": false,
"repr": "assertion_operator: AssertionOperator | None = None"
},
{
"name": "assertion_expected",
"types": [
"Any",
"None"
],
"typedocs": {
"None": "None"
},
"defaultValue": "None",
"kind": "POSITIONAL_OR_NAMED",
"required": false,
"repr": "assertion_expected: Any | None = None"
},
{
"name": "message",
"types": [
"str",
"None"
],
"typedocs": {
"str": "string",
"None": "None"
},
"defaultValue": "None",
"kind": "POSITIONAL_OR_NAMED",
"required": false,
"repr": "message: str | None = None"
},
{
"name": "full",
"types": [
"bool"
],
"typedocs": {
"bool": "boolean"
},
"defaultValue": "False",
"kind": "POSITIONAL_OR_NAMED",
"required": false,
"repr": "full: bool = False"
},
{
"name": "last",
"types": [
"int",
"timedelta",
"None"
],
"typedocs": {
"int": "integer",
"timedelta": "timedelta",
"None": "None"
},
"defaultValue": "None",
"kind": "POSITIONAL_OR_NAMED",
"required": false,
"repr": "last: int | timedelta | None = None"
}
],
"doc": "",
"shortdoc": "",
"tags": [
"Assertion",
"BrowserControl",
"Getter"
],
"source": "",
"lineno": 981
}
```
I would consider this a critical bug that should be fixed ASAP, because it will in any case take a long time until we (Browser) can require at least Robot Framework 6.1 as the minimum version.
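For now I am working around it in Browser by replacing the bare `*` with a throwaway `*_` varargs argument, roughly like this (a sketch of the idea, not the actual Browser code):
```python
@keyword(tags=("Getter", "BrowserControl", "Assertion"))
def get_console_log(
    self,
    assertion_operator: Optional[AssertionOperator] = None,
    assertion_expected: Optional[Any] = None,
    message: Optional[str] = None,
    *_,  # throwaway varargs so libdoc treats the rest as named-only
    full: bool = False,
    last: Union[int, timedelta, None] = None,
) -> Dict:
    ...
```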
That keeps the generated documentation usable for now, but it is not good at all. | 2hard
|
Title: nbval not working with parallel/distributed pytest
Body: Parallel/distributed testing with pytest (pytest-xdist) fails: it seems that all cells are executed in parallel instead of sequentially, which will obviously not work for most notebooks.
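For instance, a notebook like this (an illustrative sketch) only works when its cells run in order:
```python
# Cell 1
import math

# Cell 2 -- fails if cell 1 has not run first
print(math.sqrt(4))
```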
To reproduce, create a notebook doing dependent things in different cells, as in the sketch above: import a package in the first cell and then use it in later cells. Install pytest-xdist. Then, try for example:
```shell
pytest -n auto --nbval <notebook file>
```
(the `-n auto` option asking pytest to use all available CPU cores on the machine) | 2hard
|
Title: Error when executing the shell method: Argument list too long: '/usr/local/lib/python3.9/site-packages/airtest/core/android/static/adb/linux/adb'
Body: **Describe the bug**
After airtest connects to the phone, executing the shell("am force-stop " + _app) method raises an error.

**Python version:** `python3.9`
**airtest version:** `1.2.7`
> The airtest version can be found with the `pip freeze` command
**Device:**
- Model: [Huawei phone]
- OS: [e.g. Android 8.1]
- (other info)
**Other relevant environment info**
This problem occurs intermittently.
| 1medium
|
Title: Disable cookies
Body: How can I disable cookies? | 0easy
|
Title: Unclear information in Explained variance
Body: ### Describe the issue linked to the documentation
Hi, the text on the Explained variance page is somewhat unclear, so I want to propose a clearer text. On line 1005, the details say this:
> "The Explained Variance score is similar to the R^2 score, with the notable difference that it does not account for systematic offsets in the prediction. Most often the R^2 score should be preferred."
### Suggest a potential alternative/fix
I propose to change it like this:
> "The Explained Variance score is similar to the R^2 score, with the notable difference that **R^2 score also accounts for systematic offsets in the prediction (i.e., the intercept of the linear function). This means that R^2 score changes with different systematic offsets, whereas Explained Variance does not.** Most often the R^2 score should be preferred." | 0easy
|
Title: [Migrated] Slim handler and python decouple problem
Body: Originally from: https://github.com/Miserlou/Zappa/issues/1513 by [giovannicimolin](https://github.com/giovannicimolin)
When using slim_handler and python-decouple, the zappa app fails to deploy because the .env file is not found by python-decouple.
Python 2.7
## Expected Behavior
python-decouple working when using slim_handler.
## Actual Behavior
Python-decouple doesn't find the .env file.
`UndefinedValueError: EMAIL_DEFAULT_FROM not found. Declare it as envvar or define a default value.`
## Possible Fix
Change the location of the .env file when using slim_handler so that it is included in the base directory from which Python is called.
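A possible interim workaround (untested; based on python-decouple's documented `Config`/`RepositoryEnv` API) is to point decouple at an absolute path instead of relying on its search from the working directory:
```
import os
from decouple import Config, RepositoryEnv

# Resolve .env relative to this settings file so it is found regardless
# of the directory the Lambda handler executes from.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
config = Config(RepositoryEnv(os.path.join(BASE_DIR, '.env')))

EMAIL_DEFAULT_FROM = config('EMAIL_DEFAULT_FROM')
```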
## Steps to Reproduce
1. Use slim_handler and python-decouple
2. Zappa tail outputs
`UndefinedValueError: EMAIL_DEFAULT_FROM not found. Declare it as envvar or define a default value.`
## Your Environment
* Zappa version used: Zappa 0.45.1
* Operating System and Python version: Arch Linux (latest) - Python 2.7
* The output of `pip freeze`:
```
amqp==2.2.2
anyjson==0.3.3
argcomplete==1.9.2
autopep8==1.3.5
awsebcli==3.12.4
base58==0.2.4
billiard==3.5.0.3
blessed==1.14.2
boto3==1.6.19
botocore==1.9.19
celery==4.1.0
cement==2.8.2
certifi==2018.1.18
cffi==1.11.5
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
colorama==0.3.7
dj-database-url==0.5.0
dj-static==0.0.6
Django==1.11.11
django-celery-beat==1.1.1
django-celery-results==1.0.1
django-cors-headers==2.2.0
django-extra-fields==0.10
django-filter==1.1.0
django-queryinspect==1.0.0
django-rest-auth==0.9.2
django-rest-framework==0.1.0
django-silk==3.0.0
django-simple-history==1.9.0
django-storages==1.6.6
django-tables2==1.21.1
django-toolbelt==0.0.1
djangorestframework==3.7.7
djangorestframework-jsonapi==2.4.0
djangoutils==0.1.1
dockerpty==0.4.1
docopt==0.6.2
docutils==0.14
durationpy==0.5
et-xmlfile==1.0.1
future==0.16.0
futures==3.1.1
geopy==1.12.0
gprof2dot==2016.10.13
hjson==3.0.1
idna==2.6
inflection==0.3.1
jdcal==1.3
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
kombu==4.1.0
lambda-packages==0.19.0
MarkupSafe==1.0
odfpy==1.3.6
openpyxl==2.5.1
pathspec==0.5.5
placebo==0.8.1
psycopg2-binary==2.7.4
pyasn1==0.4.2
pycodestyle==2.4.0
pycparser==2.18
Pygments==2.2.0
python-dateutil==2.6.1
python-decouple==3.1
python-slugify==1.2.4
pytz==2018.3
PyYAML==3.12
redis==2.10.6
requests==2.18.4
s3transfer==0.1.13
semantic-version==2.5.0
singledispatch==3.4.0.3
six==1.11.0
smarturls==0.1.7
sqlparse==0.2.4
static3==0.7.0
tablib==0.12.1
tabulate==0.7.5
termcolor==1.1.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.2.1
unicodecsv==0.14.1
Unidecode==1.0.22
urllib3==1.22
wcwidth==0.1.7
websocket-client==0.47.0
Werkzeug==0.12
whitenoise==3.3.1
wsgi-request-logger==0.4.6
xlrd==1.1.0
xlwt==1.3.0
zappa==0.45.1
```
* Your `zappa_settings.py`:
```
{
"dev": {
"aws_region": "us-east-1",
"django_settings": "appname.settings.production",
"profile_name": "default",
"project_name": "appname",
"runtime": "python2.7",
"s3_bucket": "zappa-er0hnd366",
"slim_handler": true
}
}
``` | 1medium
|
Title: Endpoints, EndpointSlice, and EndpointSliceMirroring controllers do not clean up resources when Service selector is removed
Body: **What happened**:
When a Service selector is removed, the corresponding Endpoints and EndpointSlice resources are not removed by their controllers.
**What you expected to happen**:
The corresponding Endpoints and EndpointSlice resources to be removed by their controllers.
**How to reproduce it (as minimally and precisely as possible)**:
Remove a selector from a Service that has matching Pods.
For more context, this was discovered in https://github.com/kubernetes/enhancements/pull/1713.
/sig network
/priority important-longterm
/assign | 1medium
|
Title: Is `v = np.array(v.dt.to_pydatetime())` still necessary?
Body: As far as I can tell, the lines
https://github.com/plotly/plotly.py/blob/960adb9b9a89387d05343497de1df5d3df592698/packages/python/plotly/_plotly_utils/basevalidators.py#L101-L108
were introduced in https://github.com/plotly/plotly.py/pull/1163 to fix issues with displaying numpy datetime64 arrays
However, have the numpy datetime64 issues since been fixed? Having built Polars from source, here's what I see on the master branch:

Looks like it displays fine
If I apply the diff
```diff
--- a/packages/python/plotly/_plotly_utils/basevalidators.py
+++ b/packages/python/plotly/_plotly_utils/basevalidators.py
@@ -95,20 +95,7 @@ def copy_to_readonly_numpy_array(v, kind=None, force_numeric=False):
# Handle pandas Series and Index objects
if pd and isinstance(v, (pd.Series, pd.Index)):
- if v.dtype.kind in numeric_kinds:
- # Get the numeric numpy array so we use fast path below
- v = v.values
- elif v.dtype.kind == "M":
- # Convert datetime Series/Index to numpy array of datetimes
- if isinstance(v, pd.Series):
- with warnings.catch_warnings():
- warnings.simplefilter("ignore", FutureWarning)
- # Series.dt.to_pydatetime will return Index[object]
- # https://github.com/pandas-dev/pandas/pull/52459
- v = np.array(v.dt.to_pydatetime())
- else:
- # DatetimeIndex
- v = v.to_pydatetime()
+ v = v.values
```
then it looks like pandas datetime Series still display fine

---
Asking in the context of https://github.com/plotly/plotly.py/pull/4790, as `copy_to_readonly_numpy_array` would need to handle other kinds of inputs (not just pandas series / index)
A plain conversion to numpy would be a lot faster than going via stdlib datetime objects
```
In [23]: %time np.array(s.dt.to_pydatetime())
CPU times: user 325 ms, sys: 8.34 ms, total: 333 ms
Wall time: 360 ms
Out[23]:
array([datetime.datetime(2000, 1, 1, 0, 0),
datetime.datetime(2000, 1, 1, 1, 0),
datetime.datetime(2000, 1, 1, 2, 0), ...,
datetime.datetime(2114, 1, 29, 13, 0),
datetime.datetime(2114, 1, 29, 14, 0),
datetime.datetime(2114, 1, 29, 15, 0)], dtype=object)
In [24]: %time s.to_numpy()
CPU times: user 46 μs, sys: 0 ns, total: 46 μs
Wall time: 51.5 μs
Out[24]:
array(['2000-01-01T00:00:00.000000000', '2000-01-01T01:00:00.000000000',
'2000-01-01T02:00:00.000000000', ...,
'2114-01-29T13:00:00.000000000', '2114-01-29T14:00:00.000000000',
'2114-01-29T15:00:00.000000000'], dtype='datetime64[ns]')
``` | 1medium
|
Title: add GET /orgs/:org/outside_collaborators
Body: Hi,
Could you please add GET /orgs/:org/outside_collaborators ? https://developer.github.com/v3/orgs/outside_collaborators/
It's something like this in orgs.py:
```
def outside_collaborators(self, filter=None, number=-1, etag=None):
    """Iterate over outside collaborators of this organization.

    :param str filter: (optional), filter the collaborators returned by
        this method. Can be one of: ``"2fa_disabled"``, ``"all"``.
        Default: ``"all"``. Filtering by ``"2fa_disabled"`` is only
        available for organization owners with private repositories.
    :param int number: (optional), number of collaborators to return.
        Default: -1 will return all available.
    :param str etag: (optional), ETag from a previous request to the same
        endpoint
    :returns: generator of :class:`User <github3.users.User>`
    """
    headers = {}
    params = {}
    if filter in self.members_filters:
        params['filter'] = filter
    url = self._build_url('outside_collaborators', base_url=self._api)
    return self._iter(int(number), url, users.ShortUser, params=params,
                      etag=etag, headers=headers)
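
# Hypothetical usage once merged (names are illustrative):
#   org = github3.login(token=token).organization('myorg')
#   for user in org.outside_collaborators():
#       print(user.login)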
``` | 1medium
|
Title: When should the `NapariApplication` be instantiated and its actions registered?
Body: ## 🧰 Task
### When is `app` created?
When we initialize a `napari.Viewer`, we must also instantiate the `NapariApplication` [object](https://github.com/napari/napari/blob/main/napari/_app_model/_app.py#L18) (which is our `app-model` app). The application is created when the `get_app` [class method](https://github.com/napari/napari/blob/main/napari/_app_model/_app.py#L40) is first called.
Currently, this first call happens during execution of the `initialize_plugins` [function](https://github.com/napari/napari/blob/main/napari/plugins/__init__.py#L25), which is called first thing on `Viewer.__init__` and [registers npe2 plugin actions](https://github.com/napari/napari/blob/main/napari/plugins/_npe2.py#L346).
Is it strange that the `NapariApplication`, which is so critical to the `Viewer` and its function, is created in this little function rather than explicitly before we register manifest actions etc.? It seemed to @lucyleeow and me that it was a little hidden and could potentially lead to issues, so we wanted to open this for discussion.
### When are `actions` registered?*
\* I say actions here for brevity but these steps also include registering providers and processors.
- [Internal non-qt actions](https://github.com/napari/napari/blob/main/napari/_app_model/_app.py#L35): as soon as we build the app, on `NapariApplication.__init__`. This seems normal to me.
- [Internal qt actions](https://github.com/napari/napari/blob/main/napari/_qt/qt_main_window.py#L180): on `QtMainWindow.__init__`. Also seems normal to me I guess - not sure if there's meaning to moving it up or down in the function.
- [Non-qt plugin actions](https://github.com/napari/napari/blob/main/napari/plugins/_npe2.py#L334): on `initialize_plugins` as above (which is up first on `Viewer.__init__`). This means it happens **after** internal non-qt actions and **before** internal/plugin qt actions. I think this is also ok.
- [Qt plugin actions](https://github.com/napari/napari/blob/main/napari/plugins/_npe2.py#L335): immediately after non-qt plugin actions, if qt is available. This means it's **before** qt actions, providers and processors are registered with the viewer, even though some plugin actions (if executed) may request the `Viewer` provider. @lucyleeow and I thought maybe this was kinda weird, and we should potentially consider splitting this out so that qt plugin actions are only registered after `init_qactions`?
Just a final note: in practice none of this seems to have caused any issues, and it may never. Everything is registered and available by the time the `Viewer` is returned. We just wanted to explicitly consider whether this is the right order of operations and whether there might be any potentially unintended consequences of the current setup down the line. | 1medium
|
Title: HeatMapWithTime can not work
Body: #### Please add a code sample or a nbviewer link, copy-pastable if possible
```python
# Your code here
# HeatMapWithTime does not work
# Python version: 3.7.4
```
#### Problem description
https://nbviewer.jupyter.org/github/python-visualization/folium/blob/master/examples/HeatMapWithTime.ipynb
this demo does not work
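For reference, a minimal sketch of the usage I am trying (assumed to match the notebook; coordinates are made up):
```python
import folium
from folium.plugins import HeatMapWithTime

m = folium.Map(location=[48.0, 5.0], zoom_start=6)
# One list of [lat, lng, weight] points per time step.
data = [[[48.0, 5.0, 0.5]], [[48.5, 5.5, 0.8]]]
HeatMapWithTime(data).add_to(m)
m.save('heatmap_with_time.html')
```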
#### Expected Output
#### Output of ``folium.__version__``
'0.10.0' | 1medium
|