date | nb_tokens | text_size | content
---|---|---|---
2018/03/17
| 214 | 755 |
<issue_start>username_0: Hi, I have a table containing records of the status of each member of an organization. I would like to find out who among the members are still active based on the latest status provided in the table. Here is a sample of the records in the table:
[](https://i.stack.imgur.com/a7lCI.jpg)<issue_comment>username_1: You can get the last status for each name using a subquery in the `where` clause:
```
select t.*
from t
where t.date = (select max(t2.date) from t t2 where t2.name = t.name);
```
You can use an additional conditional expression to determine who is active.
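For illustration only (this sketch is not from the thread, and the field names are assumptions), the same "keep the latest row per member, then check its status" logic in Python:
```python
# Hypothetical records mirroring the table: (name, date, status)
records = [
    ("alice", "2018-03-01", "Inactive"),
    ("alice", "2018-03-15", "Active"),
    ("bob",   "2018-03-10", "Inactive"),
]

latest = {}  # name -> (date, status) of the most recent record seen so far
for name, date, status in records:
    if name not in latest or date > latest[name][0]:
        latest[name] = (date, status)

active = [name for name, (_, status) in latest.items() if status == "Active"]
print(active)  # ['alice']
```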
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
SELECT * FROM YourTable WHERE Status='Active';
```
Upvotes: 0
|
2018/03/17
| 336 | 1,106 |
<issue_start>username_0: The React Native documentation says Python 2 is required, but what if Python 3 is installed?
Will I get errors in a React Native app if Python 3 is installed?
I am using Windows 10 and I am going to run it on Android.
Also, I read that Python is necessary from [this question](https://stackoverflow.com/questions/42442732/is-it-necessary-to-install-python-for-react-native-on-windows-for-react-native).<issue_comment>username_1: Yes. You can work with `react native` and `python 3`.
Upvotes: 5 [selected_answer]<issue_comment>username_2: **Edit**
Now it is mentioned,
>
> Python v2.7, v3.5, v3.6, v3.7, or v3.8
>
>
>
So I believe Python 3 should be good to go.
**Old Answer**
I would say if they have specifically mentioned Python 2 then there must be some reason. If you go to [this link](https://github.com/nodejs/node-gyp#on-windows), then you can see they have mentioned
>
> python (v2.7 recommended, v3.x.x is not supported)
>
>
>
So there might be some compatibility issue in some module or something. Just check everything you require is working as expected.
Upvotes: 1
|
2018/03/17
| 411 | 1,114 |
<issue_start>username_0: I have the following list:
```
abclist = ['a', 'b', 'c', 'd', 'e']
```
Using the above list, how do I create the following one?
```
Reversed_part = ['c', 'b', 'a', 'd', 'e']
```
Only the first 3 items are reversed and the last two stay in the same order.<issue_comment>username_1: This is one way.
```
lst = ['a', 'b', 'c', 'd', 'e']
def partial_reverse(lst, start, end):
"""Indexing (start/end) inputs begins at 0 and are inclusive."""
return lst[:start] + lst[start:end+1][::-1] + lst[end+1:]
partial_reverse(lst, 0, 2) # ['c', 'b', 'a', 'd', 'e']
```
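As a side note that is not from the thread: Python slice assignment can also reverse just the first three items in place, which avoids rebuilding the list:
```python
abclist = ['a', 'b', 'c', 'd', 'e']
abclist[:3] = abclist[2::-1]   # items at indices 2, 1, 0
print(abclist)                 # ['c', 'b', 'a', 'd', 'e']
```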
Upvotes: 2 <issue_comment>username_2: You can do it using a combination of the `reversed` function and list slicing.
**Ex:**
```
abclist = ['a', 'b', 'c', 'd', 'e']
print(list(reversed(abclist[:3]))+abclist[-2:])
```
**Output:**
```
['c', 'b', 'a', 'd', 'e']
```
Upvotes: 0 <issue_comment>username_3: ```
abclist = ['a', 'b', 'c', 'd', 'e']
quantityToReverse = 3
remainder = len(abclist) - quantityToReverse
reverseArray = list(reversed(abclist[:quantityToReverse]))+abclist[-remainder:]
print(reverseArray)
```
Upvotes: 0
|
2018/03/17
| 675 | 2,568 |
<issue_start>username_0: I have a class named "Ciudad". When I want to add a migration to create the database table, EF uses the name "dbo.Ciudads". I actually wanted the name to be "Ciudades" (with an additional 'e') so I changed it manually.
After updating the database, the table dbo.Ciudades was created successfully. I even created a small script to populate it and it ran OK.
However, when I want to query "Ciudades" from the context, I get an exception because it tries to query the table "dbo.Ciudads" (without the additional 'e') which doesn't actually exist. It is an **InvalidOperationException: "The model backing the 'ApplicationDbContext' context has changed since the database was created"**
So I ran "add-migration foo" and it generates the following migration:
```
public partial class foo : DbMigration
{
public override void Up()
{
RenameTable(name: "dbo.Ciudads", newName: "Ciudades");
}
public override void Down()
{
RenameTable(name: "dbo.Ciudades", newName: "Ciudads");
}
}
```
It seems strange because in my database I DO have the table exactly as I wanted with the name "Ciudades". Nevertheless, when I try to update the database with this migration, I get the following exception:
>
> System.Data.SqlClient.SqlException (0x80131904): Either the parameter @objname is ambiguous or the claimed @objtype (OBJECT) is wrong.
>
>
>
I imagine I get this exception because the table "Ciudads" doesn't exist. Am I correct?
If so, where is it getting that table from? I did a search on the entire solution for the word "ciudads" and nothing came up.<issue_comment>username_1: It's probably worth downloading Microsoft SQL Server Management Studio dev edition.
Upvotes: -1 <issue_comment>username_2: You can use this approach:
```
public override void Up()
{
RenameTable("dbo.Ciudads", "Ciudades");
}
public override void Down()
{
RenameTable("dbo.Ciudades", "Ciudads");
}
```
Or
If you want to specify your table name, you can use the fluent API.
```
modelBuilder.Entity().ToTable("t_Department");
```
Upvotes: 0 <issue_comment>username_3: Thanks to @IvanStoev for the solution.
The problem was that I manually changed the table name in the generated migration class. Later I used FluentAPI to specify the name "Ciudades", but it was too late.
As per @IvanStoev's suggestion, I deleted everything and created the migrations again, but this time using FluentAPI BEFORE creating the migrations. The table was then generated with the name I intended and everything is working fine now.
Upvotes: 1
|
2018/03/17
| 645 | 2,110 |
<issue_start>username_0: in Controller
```
public function index(Request $request)
{
$search_cons = $request->all();
$search_con = $search_cons->name; //error place
return $search_cons.$search_con;
}
```
`->name` is the place that has the error:
Trying to get property of non-object
Or in the Blade view:
```
{{$search_cons->name}}
has the error
```
Trying to get property of non-object
But if I use
```
$search_cons=$request->input('name');
```
in the controller, then in the blade view
`{{$search_cons}}` will work OK!
I want to use `$search = $request->all()` so I can freely use `$search->name` in my blade view.
How can I fix this?
PS: I tried `$request('name')` but it still does not work.
`Request::all()` ->tell me the<issue_comment>username_1: `$search_cons` is an array, not an object:
```
$search_con = $search_cons['name']
```
Upvotes: 1 <issue_comment>username_2: When you do `$request->all()` it returns all the inputs submitted in `array` format. So in your case, you can do
```
$search_cons = $request->all(); // dd($search_cons) so you can see its structure
$search_con = $search_cons['name']; // instead of ->name since it's not an object anymore
```
And anyway, you can skip the `$request->all()` thing - you can actually just do this directly:
`$request->name`.
EDIT:
You can cast the array as an object using `(object)`
```
$search_cons = (object) $request->all();
```
this will still let you use `$search_cons->name`
Upvotes: 2 [selected_answer]<issue_comment>username_3: ```
public function index(Request $request)
{
$search_cons = $request->all(); //returns array
$search_con = $search_cons['name']; //error place
return $search_cons.$search_con;
}
```
Or you can do it like this:
```
request()->name // request() is a global helper function
```
Upvotes: 1 <issue_comment>username_4: Try
In Controller
```
public function index(Request $request)
{
$search = $request->all();
return ['search' => $search];
}
```
In Blade
>
> Name : {{$search['name'] ? $search['name'] : ''}}
>
>
>
>
>
Upvotes: 0
|
2018/03/17
| 580 | 2,297 |
<issue_start>username_0: I am able to successfully retrieve a document from Firebase Firestore in JavaScript, but I would like to validate that document against a user's entry in order to authenticate that individual into the page.
Using console.log I see that I have retrieved the record, but it is not being matched against the text entered by the end user.
How is this accomplished?
I am using Vue.js and want to password-protect one page.
In jsfiddle: <https://jsfiddle.net/oL4bondy/4/>
HTML
```
Please log in first.
--------------------
Submit
```
JS
```
export default {
name: "add-post",
data() {
return {
password: "",
isLoggedIn: ""
};
},
methods: {
validatePin() {
const docRef = db.collection("passwords").doc("user");
docRef
.get()
.then(function(doc) {
if (doc.exists) {
console.log("Document data:", doc.data());
} else {
console.log("No such document!");
}
})
.catch(function(error) {
console.log("Error getting document:", error);
});
let password = this.doc.data;
if (this.userInput === password) {
this.isLoggedIn = true;
}
}
}
};
```<issue_comment>username_1: There are a few problems, but mostly it's related to the fact that the document is loaded from Firestore asynchronously. The solution is to move all the code that needs the document **into** the `then()` that is called once the document is loaded. So:
```
validatePin() {
const docRef = db.collection("passwords").doc("user");
docRef
.get()
.then(function(doc) {
if (doc.exists) {
console.log("Document data:", doc.data());
let password = this.doc.data;
if (this.userInput === password) {
this.isLoggedIn = true;
// TODO: do whatever needs to happen once the user is logged in
}
} else {
console.log("No such document!");
}
})
.catch(function(error) {
console.log("Error getting document:", error);
});
}
```
Upvotes: 0 <issue_comment>username_2: Use `Firebase.auth()` for this. In particular, the `.onAuthStateChanged` method/listener will fire after page load. That's where you want to process user/UI access.
Upvotes: 1
|
2018/03/17
| 1,152 | 4,630 |
<issue_start>username_0: I am using the Forms provided by Laravel Collective and it is correctly installed.
What I'm trying to do is to validate the Form in `PictureController.php`
Just like this:
```html
php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Picture;
use DB;
class PictureController extends Controller
{
/**
* Create a new controller instance.
*
* @return void
*/
public function __construct()
{
$this->middleware('auth');
}
/**
* Display a listing of the resource.
*
* @return \Illuminate\Http\Response
*/
public function index()
{
$pictures = Picture::all();
return view('welcome')->with('pictures', $pictures);
}
/**
* Show the form for creating a new resource.
*
* @return \Illuminate\Http\Response
*/
public function create()
{
return view('pictures.publicize');
}
/**
* Store a newly created resource in storage.
*
* @param \Illuminate\Http\Request $request
* @return \Illuminate\Http\Response
*/
public function store(Request $request)
{
$this->validate($request, [
'hashtag' => 'required',
]);
return 123;
}
/**
* Display the specified resource.
*
* @param int $id
* @return \Illuminate\Http\Response
*/
public function show($id)
{
//
}
/**
* Show the form for editing the specified resource.
*
* @param int $id
* @return \Illuminate\Http\Response
*/
public function edit($id)
{
//
}
/**
* Update the specified resource in storage.
*
* @param \Illuminate\Http\Request $request
* @param int $id
* @return \Illuminate\Http\Response
*/
public function update(Request $request, $id)
{
//
}
/**
* Remove the specified resource from storage.
*
* @param int $id
* @return \Illuminate\Http\Response
*/
public function destroy($id)
{
//
}
}
```
And this is the code snippet of `publicize.blade.php` which contains this form.
```html
{!! Form::open(array(
'action' => 'PictureController@store',
'method' => 'POST',
'files' => true
))
!!}
{{ Form::label('hashtag', 'HashTag') }}
{{ Form::text('hashtag', '', ['placeholder' => 'Eg. #Happiness', 'class' => 'form-control', 'name' => 'hashtag' /*'required'*/]) }}
{{ Form::submit('Publicize', ['class' => 'btn btn-primary']) }}
{!! Form::close() !!}
```
And in the `routes/web.php`, this is the code written...
```html
Route::resource('/', 'PictureController');
```
I have also submitted whole code on GitHub. Just look at [this commit](https://github.com/KumarAbhirup/myTaswir/tree/e3209c3983addb501d9d291f4560560b42850c77) to take a look at all the files.
**The problem is..**
When I look into the source code in the browser, I see that the `action` attribute in the form equals the home URL, i.e. `/public/`. It looks like the form is not connected to the controller and method I specified in `publicize.blade.php`. How can I make that form work properly with validations?
**Thanks for help in advance.**
|
2018/03/17
| 830 | 2,532 |
<issue_start>username_0: I have the following file that contains 2 columns :
```
A:B:IP:80 apples
C:D:IP2:82 oranges
E:F:IP3:84 grapes
```
How is it possible to split the file into 2 other files, each column in a separate file, like this:
File1
```
A:B:IP:80
C:D:IP2:82
E:F:IP3:84
```
File2
```
apples
oranges
grapes
```<issue_comment>username_1: Perl 1-liner using (abusing) the fact that `print` goes to `STDOUT`, i.e. file descriptor `1`, and `warn` goes to `STDERR`, i.e. file descriptor `2`:
```
# perl -n means loop over the lines of input automatically
# perl -e means execute the following code
# chomp means remove the trailing newline from the expression
perl -ne 'chomp(my @cols = split /\s+/); # Split each line on whitespace
print $cols[0] . "\n";
warn $cols[1] . "\n"' col1 2>col2
```
You could, of course, just use `cut -b` with the appropriate columns, but then you would need to read the file twice.
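For comparison only (not part of the thread), a hypothetical single-pass version of the same split in Python; it assumes whitespace-separated columns and the output names `file1`/`file2`:
```python
# Split a two-column, whitespace-separated file into two files in one pass.
with open("input.txt") as src, open("file1", "w") as col1, open("file2", "w") as col2:
    for line in src:
        first, second = line.split(None, 1)  # split on the first run of whitespace
        col1.write(first + "\n")
        col2.write(second.strip() + "\n")
```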
Upvotes: 2 [selected_answer]<issue_comment>username_2: Try:
```
awk '{print $1>"file1"; print $2>"file2"}' file
```
After running that command, we can verify that the desired files have been created:
```
$ cat file1
A:B:IP:80
C:D:IP2:82
E:F:IP3:84
```
And:
```
$ cat file2
apples
oranges
grapes
```
### How it works
* `print $1>"file1"`
This tells awk to write the first column to `file1`.
* `print $2>"file2"`
This tells awk to write the second column to `file2`.
Upvotes: 2 <issue_comment>username_3: Here's an awk solution that'll work with any number of columns:
```
awk '{for(n=1;n<=NF;n++)print $n>"File"n}' input.txt
```
This steps through each field on the line and prints the field to a different output file based on the column number.
Note that blank fields -- or rather, lines with fewer fields than other lines, will cause line numbers to mismatch. That is, if your input is:
```
A 1
B
C 3
```
Then `File2` will contain:
```
1
3
```
If this is a concern, mention it in an update to your question.
---
You could of course do this in bash alone, in a number of ways. Here's one:
```
while read -r line; do
a=($line)
for m in "${!a[@]}"; do
printf '%s\n' "${a[$m]}" >> File$((m+1))
done
done < input.txt
```
This reads each line of input into `$line`, then word-splits `$line` into values in the `$a[]` array. It then steps through that array, printing each item to the appropriate file, named for the index of the array (plus one, since bash arrays start at zero).
Upvotes: 1
|
2018/03/17
| 895 | 2,990 |
<issue_start>username_0: I have a directory on my server that I need to clear out using PHP. I need to delete all files and folders inside this directory; however, I do not want to delete the main directory itself. Everything I have read and every example I have found seems to delete everything inside the given directory, but afterwards it also deletes the given directory. One function I tried is below, but again it also deletes the main directory, and I can't have that.
```
<?php
function rrmdir($dir) {
if (is_dir($dir)) {
$objects = scandir($dir);
foreach ($objects as $object) {
if ($object != "." && $object != "..") {
if (filetype($dir."/".$object) == "dir")
rrmdir($dir."/".$object);
else unlink ($dir."/".$object);
}
}
reset($objects);
rmdir($dir);
}
}
?>
```
|
2018/03/17
| 916 | 3,497 |
<issue_start>username_0: I've tried this many different ways... don't know why this is redirecting still. I suppose in the past I've always used a button instead of a submit input and as such I never ran into this issue. However, I think it's time to get to the bottom of this!
HTML FORM
```
Contact
-------
Use the form below to send to contact me via email. I will be in touch soon after receiving your message.
Select Service
Group Walking
Private Walking
Pet Sitting
```
JAVASCRIPT CODE
```
function formSubmit(e) {
e.preventDefault();
return false;
console.log("Ajax Init");
var form = e.target,
data = new FormData(),
xhr = new XMLHttpRequest();
for (var i = 0, ii = form.length - 1; i < ii; ++i) {
var input = form[i];
data.append(input.name, input.value);
if (input.getAttribute("name") !== "_gotcha") {
if (input.value === "" || input.value === null || input.value === "undefined") {
alert("Please fill out all form fields before submitting");
break;
}
}
}
xhr.open(form.method.toUpperCase(), form.action, true);
if (document.getElementById("_gotcha").value.length == 0){
xhr.send(data);
} else {
break;
}
xhr.onloadend = function () {
// done
for (var i = 0, ii = form.length - 1; i < ii; ++i) {
var input = form[i];
input.value = "";
}
alert("Message Sent - Thank You");
};
};
```<issue_comment>username_1: It seems a better option is to use [onsubmit attribute](https://stackoverflow.com/a/5196138/1022914).
```js
function formSubmit(form) {
console.log("Ajax Init");
var data = new FormData(form), // simpler
xhr = new XMLHttpRequest();
for (var i = 0, ii = form.length - 1; i < ii; ++i) {
var input = form[i];
//data.append(input.name, input.value);
if (input.getAttribute("name") !== "_gotcha") {
if (input.value === "" || input.value === null || input.value === "undefined") {
alert("Please fill out all form fields before submitting");
// something went wrong, prevent form from submitting
return false;
}
}
}
xhr.open(form.method.toUpperCase(), form.action, true);
if (document.getElementById("_gotcha").value.length == 0) {
xhr.send(data);
} else {
// something went wrong, prevent form from submitting
return false;
}
xhr.onloadend = function() {
// done
for (var i = 0, ii = form.length - 1; i < ii; ++i) {
var input = form[i];
input.value = "";
}
alert("Message Sent - Thank You");
};
// everything went ok, submit form
return true;
};
```
```html
Contact
-------
Use the form below to send to contact me via email. I will be in touch soon after receiving your message.
Select Service
Group Walking
Private Walking
Pet Sitting
```
Upvotes: 2 <issue_comment>username_2: I suggest using jQuery instead of plain JavaScript, because in plain JavaScript you have to write much more code. I have written it for you in jQuery.
**Step 1:** give the form tag an id: `id="myForm"`
**Step 2:** write a script like this:
```
$('#myForm').submit(function(e){
 e.preventDefault(); // stop the default form submission
 data = $(this).serialize(); // collect the form fields
});
```
Upvotes: -1
|
2018/03/17
| 955 | 3,250 |
<issue_start>username_0: Please help me with the question below. I'm a beginner at programming trying to get my head around it.
```
public class LandVille {
private int[][] land;
private boolean hasHouse;
public static void main(String args[]) {
LandVille landVille = new LandVille(3, 4);
landVille.displayLand();
}
// Task A - constructor
LandVille(int numRows, int numColumns) {
land = new int[numRows][numColumns];
for (int i = 0; i < numRows; ++i) {
for (int j = 0; j < numColumns; ++j) {
land[i][j] = 0;
}
}
hasHouse = false;
}
// Task B
public void displayLand() {
for (int i = 0; i < land.length; ++i) {
for (int j = 0; j < land[i].length; ++j) {
System.out.print(land[i][j] + " ");
}
System.out.println();
}
}
// Task C
public void clearLand() {
for (int i = 0; i < land.length; ++i) {
for (int j = 0; j < land[i].length; ++j) {
land[i][j] = 0;
}
}
hasHouse = false;
}
}
```
From the main method, ask the player for the row and column of the land. The number of rows
and the number of columns should be greater than 0 and less than or equal to 10. If any input is
not correct, show an error message and ask for that input again.
If all inputs are correct, create an object of LandVille class from main method. The row and column
values are passed as the parameter of its constructor.
|
2018/03/17
| 467 | 1,736 |
<issue_start>username_0: I am changing HTML near the top of my website using JS and it's not working. I was having a similar problem earlier because the quotation marks were messing things up, and I tried to fix it with that method, but it still isn't working.
```js
document.getElementById("accTabs").innerHTML = "SIGN IN";
```<issue_comment>username_1: ```js
document.getElementById("accTabs").innerHTML = "SIGN IN";
```
Please try this snippet.
Your error is raised because you used quotes incorrectly.
```
document.getElementById("accTabs").innerHTML = "";
```
If you write the code like this, then a double quote inside a double quote will cause a problem.
You need to change it as shown below to avoid the problem:
```
document.getElementById("accTbs").innerHTML = "";
```
>
> You could also use ES6 template literal. then you can use double quote and single quote inside string without any problem.
>
>
>
```
document.getElementById("accTabs").innerHTML = ``;
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```js
document.getElementById("accTabs").innerHTML = "SIGN IN";
```
Upvotes: 0 <issue_comment>username_3: try using **template strings** as shown below
```
document.getElementById("accTabs").innerHTML = `SIGN IN`;
```
This would work.
Learn more about Template strings [Here](http://wesbos.com/template-strings-html/)
Upvotes: 0 <issue_comment>username_4: Just properly escape your `"` with `\"` and use `'` to wrap the string, reducing escapes
```js
document.getElementById("accTabs").innerHTML = 'SIGN IN';
```
```html
id01
```
Moreover, you could also try ES6's backtick ` to even completely avoid escaping.
```js
document.getElementById("accTabs").innerHTML = `SIGN IN`;
```
```html
id01
```
Upvotes: 0
|
2018/03/17
| 675 | 2,661 |
<issue_start>username_0: I have enabled `CTRL`+`ALT`+`DELETE` secure attention sequence (SAS) for windows logon using local security policy. (secpol.msc , Security Settings->Local Policies->Security Options->Interactive Logon: Do not require `CTRL`+`ALT`+`DEL` -> Disabled )
Currently the machine is using a facial based custom credential provider for login in Windows 10. In the current setup if the custom credential provider fails during authentication, it falls back to normal windows based logon (Password / Pin).
I have disabled the password, pin based mechanism through the group policy ( gpedit.msc, Computer Configuration ->Administrative Templates->System->Logon , Exclude Credential Providers ). This works fine as password and pin cannot be used for authentication. But the login page is still displayed.
How can I always go back to the `Ctrl`+`Alt`+`Del` logon page if the custom credential provider fails to do any authentication, so that the user can retry?
Is it possible to Control through group policy? Do I have to manage through the credential provider source so the fallback always goes back `Ctrl`+`Alt`+`Del` page.
Additional Info: <https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc780332(v=ws.10)>
Ref section - Winlogon Desktop Dialog Boxes:
In other words, it is about switching from the "Log On to Windows" desktop to the "Welcome to Windows" desktop automatically.
Additional Info on the flow:
When Winlogon.exe detects the SAS (Ctrl+Alt+Del), it launches this LogonUI.exe process,which initializes our custom credential provider.
In the normal use case, when our credential provider succeeds, the user enters his credentials and the LogonUI.exe process terminates.
Now in the second case, when our custom credential provider fails, the desktop becomes blank or, if fast user switching is enabled, it displays the switch user button.
In the correct use case, I have to fall back to SAS (Ctrl+Alt+Del)<issue_comment>username_1: ```
*pcpgsr = CPGSR_RETURN_NO_CREDENTIAL_FINISHED;
return hr; // return to LogonUI
```
**CPGSR\_RETURN\_NO\_CREDENTIAL\_FINISHED** will return from your module to windows system without accepting your security structure. Also use unadvise to cleanup while returning from Serialization call.
Upvotes: 1 <issue_comment>username_2: Did you solve your issue?
I think in the new scenario of credential providers (versus GINA) it is impossible to control this behaviour.
If `ctrl`+`alt`+`del` is enabled there is no **legal** way to eliminate and/or simulate this secure attention sequence. Have a look at this [article](https://stackoverflow.com/q/51747455/3868464).
Upvotes: 0
|
2018/03/17
| 1,553 | 5,679 |
<issue_start>username_0: I am processing a large directory every night. It accumulates around 1 million files each night, half of which are `.txt` files that I need to move to a different directory according to their contents.
Each `.txt` file is pipe-delimited and contains only 20 records. Record 6 is the one that contains the information I need to determine which directory to move the file to.
Example Record:
```
A|CHNL_ID|4
```
In this case the file would be moved to `/out/4`.
This script is processing at a rate of 80,000 files per hour.
Are there any recommendations on how I could speed this up?
```
opendir(DIR, $dir) or die "$!\n";
while ( defined( my $txtFile = readdir DIR ) ) {
next if( $txtFile !~ /.txt$/ );
$cnt++;
local $/;
open my $fh, '<', $txtFile or die $!, $/;
my $data = <$fh>;
my ($channel) = $data =~ /A\|CHNL_ID\|(\d+)/i;
close($fh);
move ($txtFile, "$outDir/$channel") or die $!, $/;
}
closedir(DIR);
```<issue_comment>username_1: Try something like:
```
print localtime()."\n"; #to find where time is spent
opendir(DIR, $dir) or die "$!\n";
my @txtFiles = map "$dir/$_", grep /\.txt$/, readdir DIR;
closedir(DIR);
print localtime()."\n";
my %fileGroup;
for my $txtFile (@txtFiles){
# local $/ = "\n"; #\n or other record separator
open my $fh, '<', $txtFile or die $!;
local $_ = join("", map {<$fh>} 1..6); #read 6 records, not whole file
close($fh);
push @{ $fileGroup{$1} }, $txtFile
if /A\|CHNL_ID\|(\d+)/i or die "No channel found in $_";
}
for my $channel (sort keys %fileGroup){
moveGroup( @{ $fileGroup{$channel} }, "$outDir/$channel" );
}
print localtime()." finito\n";
sub moveGroup {
my $dir=pop@_;
print localtime()." <- start $dir\n";
move($_, $dir) for @_; #or something else if each move spawns sub process
#rename($_,$dir) for @_;
}
```
This splits the job into three main parts where you can time each part to find where most time is spent.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You are being hurt by the sheer number of files in a single directory.
I created `80_000` files and ran your script, which completed in 5.2 seconds. This is on an older laptop with CentOS 7 and v5.16. But with half a million files† it takes nearly 7 minutes. Thus the problem is not about the performance of your code per se (though that can also be tightened).
Then one solution is simple: run the script out of a cron, say every hour, as files are coming. While you move the `.txt` files also move the others elsewhere and there will never be too many files; the script will always run in seconds. In the end you can move those other files back, if needed.
Another option is to store these files on a partition with a different filesystem, say ReiserFS. However, this doesn't at all address the main problem of having way too many files in a directory.
Another partial fix is to replace
```
while ( defined( my $txtFile = readdir DIR ) )
```
with
```
while ( my $path = <"$dir/*txt"> )
```
which results in a 1m:12s run (as opposed to near 7 minutes). Don't forget to adjust file-naming since `<>` above returns the full path to the file. Again, this doesn't really deal with the problem.
If you had control over how the files are distributed, you would want a 3-level (or so) deep directory structure, which can be named using the files' MD5, which would result in a very balanced distribution.
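To make that idea concrete, here is an illustrative Python sketch (not from the thread; the depth and directory names are assumptions) that derives a three-level path from the MD5 of a file name:
```python
import hashlib
import os

def sharded_path(root, filename):
    """Map a file name to root/ab/cd/ef/filename using its MD5 digest."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return os.path.join(root, digest[0:2], digest[2:4], digest[4:6], filename)

print(sharded_path("out", "s12345.txt"))  # the three hex-pair directories depend on the name
```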
---
† File names and their content were created as
```
perl -MPath::Tiny -wE'
path("dir/s".$_.".txt")->spew("A|some_id|$_\n") for 1..500_000
'
```
Upvotes: 3 <issue_comment>username_3: This is the sort of task that I often perform. Some of these were already mentioned in various comments. None of these are special to Perl and the biggest wins will come from changing the environment rather than the language.
* Segment files into separate directories to keep the directories small. Larger directories take longer to read (sometimes exponentially). This happens in whatever produces the files. The file path would be something like *.../ab/cd/ef/filename.txt* where the *ab/cd/ef* come from some function that has unlikely collisions. Or maybe it's like *.../2018/04/01/filename.txt*.
* You probably don't have much control over the producer. I'd investigate making it add lines to a single file. Something else makes separate files out of that later.
* Run more often and move processed files somewhere else (again, possibly with hashing).
* Run continually and poll the directory periodically to check for new files.
* Run the program in parallel. If you have a lot of idle cores, get them working on it too. You'd need something to decide who gets to work on what.
* Instead of creating files, shove them into a lightweight data store, such as Redis. Or maybe a heavyweight data store.
* Don't actually read the file contents. Use File::Mmap instead. This is often a win for very large files but I haven't played with it much on large collections of small files.
* Get faster spinning disks or maybe an SSD. I had the misfortune where I accidentally created millions of files in a single directory on a slow disk.
Upvotes: 2 <issue_comment>username_4: I don't think anyone brought it up, but have you considered running a long-term process that uses filesystem notifications as near-realtime events, instead of processing in batch? I'm sure CPAN will have something for Perl 5; there is a built-in class in Perl 6 for this that illustrates what I mean: <https://docs.perl6.org/type/IO::Notification>. Perhaps someone else can chime in on what a good module to use in P5 is?
Upvotes: 2
|
2018/03/17
| 2,394 | 9,757 |
<issue_start>username_0: I'm unit testing to see if a method is called.
```
[Fact]
public void Can_Save_Project_Changes()
{
//Arrange
var user = new AppUser() { UserName = "JohnDoe", Id = "1" };
Mock mockRepo = new Mock();
Mock> userMgr = GetMockUserManager();
userMgr.Setup(x => x.FindByNameAsync(It.IsAny())).ReturnsAsync(new AppUser() { UserName = "JohnDoe", Id = "1" });
var contextUser = new ClaimsPrincipal(new ClaimsIdentity(new Claim[]
{
new Claim(ClaimTypes.Name, user.UserName),
new Claim(ClaimTypes.NameIdentifier, user.Id),
}));
Mock tempData = new Mock();
ProjectController controller = new ProjectController(mockRepo.Object, userMgr.Object)
{
TempData = tempData.Object,
ControllerContext = new ControllerContext
{
HttpContext = new DefaultHttpContext() { User = contextUser }
}
};
Project project = new Project()
{
Name = "Test",
UserID = "1",
};
//Act
Task result = controller.EditProject(project);
//Assert
mockRepo.Setup(m => m.SaveProject(It.IsAny(), user));
//This line still throws an error
mockRepo.Verify(m => m.SaveProject(It.IsAny(), user));
Assert.IsType>(result);
var view = result.Result as ViewResult;
Assert.Equal("ProjectCharts", view.ViewName);
Assert.Equal("Project", view.Model.ToString());
}
```
While debugging, I can verify that the method is actually called in the controller,
```
//This controller line is touched walking through the code
repository.SaveProject(project, user);
//but this repo line is not touched
public void SaveProject(Project project, AppUser user)
```
Debugging doesn't actually show entrance into the repository method. The exact error is below
>
> Expected invocation on the mock at least once, but was never performed: m => m.SaveProject(, JohnDoe)
>
>
> No setups configured.
> Performed invocations:
> IRepository.ProjectClass
> IRepository.SaveProjects(ProjectClass, JohnDoe)'
>
>
>
When I do an actual integration test, the `SaveProject` method is touched in the repository and seems to work properly. I've also tried assigning every `Project` property within the unit test but got the same error result<issue_comment>username_1: I'm going to go a step further than Yoshi's comment.
The `Performed invocations` message tells you the method was called but not with the parameters that you were verifying. My guess based on the messages is that there's something wrong with the first parameter.
You would need to post the test for me to be able to be more specific.
**Update (after the Test was added)**
Change `userMgr.Setup` to return your 'user' variable, not a duplicate. Despite what I said earlier, this was the cause of your failure - the code being tested was being given a duplicate, and Moq was correctly saying that your method had not been called with `user` because it had been called with the duplicate. So changing it to this fixes the problem:
```
userMgr.Setup(x => x.FindByNameAsync(It.IsAny())).ReturnsAsync(user);
```
This could be made even stronger if the use of `It.IsAny()` can be avoided: if the specific string that is expected as a parameter is set up as part of the test setup, then give the value instead.
I suspect both of the "1" strings need to be identical to make this work, so rather than duplicate the string declare a local variable and use that instead of both strings.
I would suggest never using values like 1; prefer to randomly type something, so that it doesn't coincidentally pass. By which I mean, imagine a method which takes two integers as parameters: when calling Setup or Verify for that method, if you use the same value for both those integers, the test could pass even if your code has mistakenly swapped the values over (passing each into the wrong parameter). If you use different values when calling Setup or Verify, then it will only work when the correct value is passed in the correct parameter.
`mockRepo.Setup` is redundant. Setup allows you to specify how the class behaves, but there is nothing else after that on the line, so it's redundant and can be removed. Some people use Setup along with VerifyAll, but you might want to read this discussion about using [VerifyAll](http://russellallen.info.gridhosted.co.uk/post/2011/04/15/Moq-asserts-Verify()-vs-VerifyAll().aspx).
Now change your verify back to using `project` rather than `It.IsAny()`. I would expect it to work.
**Update 2**
Consider a tiled roof. Each tile is responsible for protecting one small part of the roof, slightly overlapping the ones below it. That tiled roof is like a collection of unit tests when using mocking.
Each 'tile' represents one test fixture, covering one class in the real code. The 'overlapping' represents the interaction between the class and the things it uses, which has to be defined using mocks, which are tested using things like Setup and Verify (in Moq).
If this mocking is done badly, then the gaps between the tiles will be big, and your roof could leak (i.e. your code might not work). Two examples of how mocking can be done badly:
1. Not checking the parameters which are given to the dependencies, by using `It.IsAny` when you really don't need to.
2. Incorrectly defining the behaviour of the mock compared to how the real dependency would behave.
That last one is your biggest risk; but it's no different than the risk of writing bad unit tests (regardless of whether it involves mocking). If I wrote a unit test which exercised the code under test but then failed to make any assertions, or made an assertion about something that doesn't matter, that would be a weak test. Using `It.IsAny` is like saying "I don't care what this value is", and means you're missing the opportunity to assert what that value *should* be.
There are times when it's not possible to specify the value, where you have to use `It.IsAny`, and one other case I'll come back to in a second is also OK. Otherwise, you should always try to specify what the parameters are, either exactly, or at least using `It.Is(comparison lambda)`. The one other time it's ok to use `It.IsAny()` is when you are verifying that a call has *not* been made, using `Times.Never` as a parameter to `Verify`. In this case, it is usually a good idea to always use it, since it checks the call has not been made with any parameter (avoiding the possibility that you have simply made an error on what parameters are given).
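As a cross-language illustration only (the thread itself is about Moq and C#), the same "prefer exact arguments, reserve wildcards for special cases" idea with Python's `unittest.mock`:
```python
from unittest import mock

repo = mock.Mock()
repo.save_project("project-42", "JohnDoe")   # what the code under test would do

# Prefer asserting the exact arguments that should have been used...
repo.save_project.assert_called_once_with("project-42", "JohnDoe")

# ...rather than a wildcard, which also passes if a wrong value slips through:
repo.save_project.assert_called_once_with(mock.ANY, "JohnDoe")

# A call that must never happen is asserted without caring about its arguments:
repo.delete_project.assert_not_called()
```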
If I wrote some unit tests which gave me 100% code coverage; but didn't test all the possible scenarios, that would be weak unit testing. Do I have any tests to try to find these badly written tests? No, and people who don't use mocking don't have tests like that either.
Going back to the tiled roof analogy... if I didn't have mocking, and had to test each part using the real dependencies here's what my roof would look like. I could have a tile for all of the bits at the bottom edge of the roof. No problem so far. For what would have been the next set of tiles up the roof, for what would have been one tile, I need a triangular tile, covering where that tile would have gone, and covering the tiles below it (even though they are already covered by a tile). Still, not too bad. But 15 tiles further up the roof, this is going to get exhausting.
Bringing that to a real world scenario, imagine I'm testing a client-side piece of code, which uses two WCF services, one of which is a third party that charges per use, one of which is protected by windows authentication, maybe one of those services has complex logic in its business layer before reaching the data layer and interacting with a database, and somewhere in there, I might have some caching. I daresay writing decent tests for this without mocking could be described as overly-convoluted, if it's even possible (in one person's lifetime)...
Unless you use mocking, which allows you to...
1. Test your code that depends on the third-party code, without making calls into it (acknowledging the risks mentioned earlier about mocking that accurately).
2. Simulate what would happened if a user with or without the right permissions called the protected WCF service (think about how you would do that from automated tests without mocking)
3. Test separate parts of code in isolation, which is particularly valuable where complex business logic is involved. This exponentially reduces the number of paths through the code that need to be tested, reducing the cost of writing the tests, and of maintaining them. Imagine the complexity of having to set up the database with all the prerequisites, not just for the data layer tests, but for all of the tests up the call stack. Now what happens when there is a database change?
4. Test caching by Verifying how many times your mock's method was called.
(For the record, speed of execution of the tests has never played any part in my decision to use mocking.)
Luckily mocking is simple, requiring barely any level of comprehension above what I have spelled out here. As long as you acknowledge that using mocking is a compromise compared to full-on integration testing, it yields the kind of savings in development and maintenance time that any product manager will be grateful for. So try to keep the gaps between your tiles small.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try to set up your method like this:
mockRepo.Setup(m => m.SaveProject(It.IsAny(), It.IsAny()))
And then verify using It.IsAny as well.
Or just use It.IsAny for the parameters you do not want to (or cannot) check properly for some reason. You can also create custom matchers in the latter case.
As mentioned in other comments, the problem is likely to be in the arguments that you have set up your mock to expect.
Upvotes: 1
|
2018/03/17
| 2,056 | 8,603 |
<issue_start>username_0: suppose i have an object like this:
```
user = {
Fullname: "name",
password: "<PASSWORD>",
confirm: "anonymous"
}
```
So I want to check whether the values of this object are empty or not, and the password has to be at least 6 characters and at most 30 characters. Thank you in advance.
And here is my code, but it didn't work. Please show me where I am wrong:
```
var user = {
username,
password,
confirm,
};
function check(a) {
if (user.username != "") {
return true
}
if (6 < user.password < 30) {
return true
}
if (user.password = user.confirm) {
return true
} else {
document.write("wrong");
return false
}
console.log(user = {
username = "adc",
password = "<PASSWORD>",
confirm = "<PASSWORD>",
};);
}
```
|
2018/03/17
| 1,241 | 5,543 |
<issue_start>username_0: Let's say I have three models and tables, i.e. Schools, Class and Students.
School and Class are in a one-to-many relation, and Class and Students are also in a one-to-many relationship. Now I am creating a RESTful API to send all this data at once in this JSON format:
```
[{
"1": {
"school_id": 1,
"school_name": "Havard",
"Class": {
"1": {
"class_id": 1,
"school_id": 1,
"class": "6",
"Student": {
"1": {
"Student_fname": "SomeName",
"Student_lname": "Last Name"
},
"2": {
"Student_fname": "Another",
"Student_lname": "AnotherLast"
}
},
"2": {
"class_id": 2,
"school_id": 28,
"class": "7",
"Student": {
"1": {
"Student_fname": "Anotherone",
"Student_lname": "Last"
},
"2": {
"Student_fname": "New",
"Student_lname": "Newer"
}
}
}
}
}
}
},
{
"2": {
"school_id": 1,
"school_name": "AnotherSchool",
"Class": {
"1": {
"class_id": 1,
"school_id": 1,
"class": "6",
"Student": {
"1": {
"Student_fname": "SomeName",
"Student_lname": "Last Name"
},
"2": {
"Student_fname": "Another",
"Student_lname": "AnotherLast"
}
},
"2": {
"class_id": 2,
"school_id": 28,
"class": "7",
"Student": {
"1": {
"Student_fname": "Anotherone",
"Student_lname": "Last"
},
"2": {
"Student_fname": "New",
"Student_lname": "Newer"
}
}
}
}
}
}
}
]
```
How do I achieve this format? Is this JSON format correct? Is there a better way to achieve it?<issue_comment>username_1: You can use a library like <https://github.com/thephpleague/fractal>, <https://github.com/dingo/api>, or do it by making transformer methods in your controllers (or classes if you prefer).
Some pseudo-code:
```
<?php
class UserController extends Controller
{
public function index(Request $request) {
$users = Users::all();
$response = [];
foreach ($users as $user) {
 $response[] = $this->transform($user);
 }
 return $response; // Laravel will serialize the array to JSON
 }
public function transform(User $user) {
return [
'username' => $user->username,
'age' => $user->age,
'class' => [
'name' => $user->school->name,
'something' => $user->school->something
]
];
}
}
```
I highly recommend doing the transform part in classes other than the controller itself.
Upvotes: 1 <issue_comment>username_2: Look into Laravel API Resources.
<https://laravel.com/docs/5.6/eloquent-resources>
Upvotes: 2 [selected_answer]<issue_comment>username_3: I think it should be like this:
```
[
"1": {
"school_id": 1,
"school_name": "Havard",
"Class": [
"1": {
"class_id": 1,
"school_id": 1,
"class": "6",
"Student": [
"1": {
"Student_fname": "SomeName",
"Student_lname": "Last Name"
}
]
}
]
}
]
```
Upvotes: -1 <issue_comment>username_4: For people visiting this question for a solution about JSON Formatting in Laravel, I recommend you use **JSON formatter** Extension on Chrome to test your API Responses instead of losing time for a simple task.
It's just my personal recommendation, you can create or use a plugin to manage JSON responses on your code anyway.
Upvotes: 1 <issue_comment>username_5: If like me, you want to format `JSON` `string` using `Laravel` `collection` please use the technique below:
```
$json = collect([
'name' => 'Test',
'description' => 'Another awesome laravel package',
'license' => 'MIT'
])->toJson(JSON_PRETTY_PRINT | JSON_UNESCAPED_UNICODE | JSON_UNESCAPED_SLASHES);
echo $json;
```
The output generated would have the below format:
```
{
"name": "Test",
"description": "Another awesome laravel package",
"license": "MIT"
}
```
The same technique can be applied to the results of an `eloquent` query, and to writing `JSON` files to `storage`.
Reference [link](https://dev.to/vittorioe/quick-tip-creating-pretty-json-files-with-laravel-nel)
Upvotes: 0
|
2018/03/17
| 444 | 1,423 |
<issue_start>username_0: I'm trying to log in to: <https://localhost:5500/em/login>
First, I entered my credentials at the bottom right area. Then, I clicked the button.
After that, a pop-up appeared at the top of the page.
Why is it asking me this?
What should I input?
I tried putting the same values, but I still couldn't get through.
[](https://i.stack.imgur.com/AbeRd.png)<issue_comment>username_1: Got the solution from: <https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admqs/getting-started-with-database-administration.html#GUID-06C767B0-B435-4C6B-9123-C39C030DF457>
>
> 3.4.3 Starting EM Express for a PDB To start EM Express for a PDB, ensure that the PDB is open in read/write mode and then try one of the
> following methods described in this topic (in the order shown):
>
>
> Connect to the CDB$ROOT container for the CDB that includes the PDB,
> and issue the following SQL statement to configure the global port for
> the CDB:
>
>
> exec dbms\_xdb\_config.setglobalportenabled(TRUE);
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: One note: if you are trying to log into the Container rather than a Pluggable database, enter "CDB$ROOT" as the container name. I tried my container name and it didn't work, but putting that in allowed me to connect to the Container itself. Good luck!
Upvotes: 3
|
2018/03/17
| 701 | 1,833 |
<issue_start>username_0: A point represents a coordinate in an x-y plane. It is supported by the following functions:
```
Point * make_point(double x, double y)
double x_of(Point *p)
double y_of(Point *p)
void print_point(Point *p)
```
A function `Point * mid_point` that accepts two points as arguments and returns a point that is the mid-point of these two input coordinates.
When I try to run it on an online simulator with `mid_point(make_point(1.0, 1.0), make_point(3.0, 3.0))`, it shows the wrong result.
```
Point * mid_point(Point *x, Point *y) {
int mid = make_point((x_of(x)+x_of(x))/2,(y_of(y)+y_of(y))/2);
print_point(mid);
}
```<issue_comment>username_1: You have one extra bracket in the line:
```
return make_point( |(| (x_of (x) + x_of(x) ) / 2, (y_of(y) + y_of(y) ) / 2);
//Extra bracket ^ ^open ^close ^open ^close
```
Also I'm pretty sure you meant to say:
```
return make_point((x_of (x) + x_of(|y|) ) / 2, (y_of(|x|) + y_of(y) ) / 2);
^^^ ^^^
```
Also I would return a Point copy from make_point(double x, double y) instead of a Point* pointer, otherwise you'll have to return a static variable or a global.
I think this is simpler:
```
#include <stdio.h>
typedef struct Point
{
double x, y;
}Point;
Point getMidPointOf(Point p1, Point p2)
{
Point newP = { (p1.x + p2.x) / 2.0, (p1.y + p2.y) / 2.0 };
return newP;
}
int main()
{
Point p1 = { 5.0, 15.0 };
Point p2 = { 15.0, 30.0 };
Point p3 = getMidPointOf(p1, p2);
printf("Point value = (%f, %f)", p3.x, p3.y);
return 0;
}
```
Upvotes: 1 <issue_comment>username_2: ```
Point * mid_point(Point *a, Point *b) {
Point * m;
m = make_point((x_of (a) + x_of(b) ) / 2, (y_of(a) + y_of(b) ) / 2);
print_point(m);
return m;
}
```
Upvotes: 0
|
2018/03/17
| 626 | 1,989 |
<issue_start>username_0: this code will be used for Full Calendar JS :)
This is my PHP Loop, I got the data with my 'events' table on mysql db.
```
$query = "SELECT * FROM events";
$listEvents = mysqli_query($DB, $query);
while($row = mysqli_fetch_array($listEvents))
{
$id = $row['event_id'] ;
$title = $row['event_name'] ;
$start = $row['event_start'] ;
$end = $row['event_end'] ;
$url = $row['event_root'] ;
}
```
And this is my code in Full Calendar, its the original from Full Calendar
```
events: [{
 id: '<?php echo "$id"; ?>',
 title: '<?php echo "$title"; ?>',
 start: '<?php echo "$start"; ?>',
 end: '<?php echo "$end"; ?>',
 url: '<?php echo "$url"; ?>',
}],
```
The calendar displays data from my PHP loop, but it only shows one row from my DB.
Thanks everyone, I appreciate all your responses.<issue_comment>username_1: Simply add each row of your data, with appropriate keys, to an array and then encode that array to JSON.
**PHP**
```
$events = array();
while($row = mysqli_fetch_array($listEvents))
{
$event['id'] = $row['event_id'] ;
$event['title'] = $row['event_name'] ;
$event['start'] = $row['event_start'] ;
$event['end'] = $row['event_end'] ;
$event['url'] = $row['event_root'] ;
$events[]=$event;
}
```
**JS**
```
events: <?php echo json_encode($events); ?>,
```
Upvotes: 0 <issue_comment>username_2: You need to create an array outside of the loop, assign all values to that array, and then finally encode that array.
Do it like below:
```
$events = [];//create an array
while($row = mysqli_fetch_assoc($listEvents)){//use assoc for lighter array iteration
$events[] = ['id'=> $row['event_id'],'title'=> $row['event_name'],'start'=> $row['event_start'], 'end'=> $row['event_end'], 'url'=> $row['event_root'] ] ; //assign all values to array
}
```
***JS:-***
```
events: <?php echo json_encode($events); ?>,
```
Upvotes: 3 [selected_answer]
|
2018/03/17
| 548 | 1,776 |
<issue_start>username_0: I'm trying to create a Like/Unlike button with jQuery! Here's what I did,
Here's the HTML,
```
{% if request.user in likes.all %}Unlike{% else %}Like{% endif %}
```
Here's the jQuery,
```
$('.like-button').click(function(){
var x = $(this);
x.toggleClass('like-button');
if(x.hasClass('like-button')){
x.text('Like');
} else {
x.text('Unlike');
}
});
```
When I press the `Like` button it works fine and the text changes to `Unlike`, but the problem is that after refreshing the webpage, pressing the `Unlike` button takes 2 clicks to turn it back into `Like`. How can I solve this, or is there a better way to do it?
Thank You :)
|
2018/03/17
| 914 | 2,694 |
<issue_start>username_0: I had this simple code which uses python re module's placeholder..
```
(?P<name>...)
```
My goal is to get the value(number) just before the "/" character. The value's length can range from 4 and so on...
I have written this code below.
```
import re
s = "hello world 1234/book"
x = r"^h.*(?P[0-9].\*)/.\*$"
y = re.search(x, s)
print y.group('test')
```
However it just returns:
```
>>> 4
```
My ideal result is:
```
>>> 1234
```
I would really appreciate any hint.<issue_comment>username_1: If you need to get the value (number) just before the `"/"` character, you could use your `regex` as this:
```
import re
s = "hello world 1234/book"
y = re.search(r"^h.*?(?P[0-9].\*)/.\*$", s)
print(y.group('test'))
# 1234
```
Or simply without placeholders:
```
import re
s = "hello world 1234/book"
y = re.search(r"([0-9]+)/.*", s)
print(y.group(1))
# 1234
```
Upvotes: 0 <issue_comment>username_2: In regex you don't need to worry about what is the starting string unless you want to match it. In your case you don't need to write `.*` before the placeholder you used.
so you can just use the regex like below:
```
s = "hello world 1234/book"
re.search(r'(?P<test>[0-9].*)/.*$', s).group(1)  # '1234'
```
Upvotes: 1 <issue_comment>username_3: First, your problem has nothing at all to do with placeholders or named groups; you're doing that part just fine. To verify that, try capturing every piece as an unnamed group and printing out what you get:
```
>>> s = "hello world 1234/book"
>>> x = r"(^h.*)([0-9].*)(/.*$)"
>>> y = re.search(x, s)
>>> print(y.groups())
('hello world 123', '4', '/book')
```
See? You're getting the exact same `'4'` for the group you care about.
Printing out all the groups is usually a helpful test of what's going on. For an even better test, use one of the many online regex debuggers, which can highlight the matches and explain them to you. For example, [regex101](https://regex101.com/r/0bL8Iy/2/) shows (via coloring) that the `^h.*` is matching `hello world 123` even without changing your regex to make it more debuggable.
Anyway, the problem with your existing regex is that the first `.*` will match as much as possible while allowing the rest of the pattern to work—which means it'll match `'hello world 123'`, as you can see above, instead of just `'hello world '`, as you want.
If you want to toggle the greediness of a pattern, put a `?` on the end:
```
>>> s = "hello world 1234/book"
>>> x = r"(^h.*?)([0-9].*)(/.*$)"
>>> y = re.search(x, s)
>>> print(y.groups())
('hello world ', '1234', '/book')
```
And now you can remove the groups you don't want and restore the name to the one you do want.
Upvotes: 0
|
2018/03/17
| 821 | 2,756 |
<issue_start>username_0: I am using the following function in my model:
```
function uploadsinglepicture($uploadpath){
$config['upload_path'] =$uploadpath;
$config['allowed_types'] = 'gif|jpg|png';
$config['max_size'] = '';
$config['max_width'] = '';
$config['max_height'] = '';
$this->load->library('upload', $config);
if ( ! $this->upload->do_upload('userfile'))
{
$error = array('error' => $this->upload->display_errors());
print_r($error);
if($this->input->post('id') == ''){
$insertion['image'] = '';
}
//$this->load->view('upload_form', $error);
}
else{
$data = array('upload_data' => $this->upload->data());
$insertion['image'] = $data['upload_data']['file_name'];
}
$image = $insertion['image'];
return $image;
}
```
This is how I access the function in controller:
```
if(!empty($this->input->post())){
$path= base_url().'assets/front/img';
$this->general->uploadsinglepicture($path);
redirect(base_url().'admin/home/index/sliderupated');
}
```
but the error I get is:
>
> Array ( [error] => The upload path does not appear to be valid.
> )
>
>
>
If I print `$path`, this is what I get
```
http://localhost/site/assets/front/img/
```
and that opens in the browser as real path. My code in view is as below
```
Upload Picture
```
How can I fix the error?<issue_comment>username_1: Please update this line:
```
from: $path = base_url().'assets/front/img';
to:   $path = FCPATH.'assets/front/img';
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: **You can use `FCPATH` like this:**
```
$path= FCPATH.'assets/front/img';
```
For more, see: <https://www.codeigniter.com/user_guide/general/reserved_names.html>
Upvotes: 1 <issue_comment>username_3: Instead of giving `base_url()` you need to give the base directory on disk. So change your code to:
```
if(!empty($this->input->post())){
$path= FCPATH.'assets/front/img';
$this->general->uploadsinglepicture($path);
redirect(base_url().'admin/home/index/sliderupated');
}
```
Upvotes: 2 <issue_comment>username_4: **FCPATH:** front controller path where index.php exists or **root** folder
**APPPATH:** **application** folder
Create an **uploads** folder under the **root** folder:
```
$config['upload_path'] = realpath(FCPATH.'uploads');
```
Create an **uploads** folder under the **application** folder:
```
$config['upload_path'] = realpath(APPPATH.'uploads');
```
In case you have created the **uploads** folder outside of the **root** folder:
```
$config['upload_path'] = realpath($_SERVER['DOCUMENT_ROOT'].'/uploads');
```
Upvotes: 0
|
2018/03/17
| 1,113 | 4,010 |
<issue_start>username_0: Let's say I want to design a naive application, which has the following three tables:
[](https://i.stack.imgur.com/GLsjm.png)
There are two many-to-many relationship:
1. A customer may have many favourite avatars, and an avatar may be used by many customers;
2. A product may have many product images, and a product image may be used by many products;
So we could add two junction tables to accomplish the relationship above:
[](https://i.stack.imgur.com/6RvVo.png)
The `customer_image` table closely resembles the `product_image` table, is it possible to create a generic junction table like the following?
[](https://i.stack.imgur.com/R3Eqc.png)
As the `generic_map` table will be used by different models (customer, product and etc), I removed the foreign key constraints.
Any suggestion? Thanks.<issue_comment>username_1: It is possible to create a generic table for the relation, but in terms of good database design, using separate mapping tables is better.
If you want to use a generic table for mapping, you can create a `generic_map` table with fields `id`,`customer_id`,`image_id`,`product_id`.
Upvotes: 0 <issue_comment>username_2: It's not clear *why* you would want to do this. What benefit does it give you over creating a separate table for each many-to-many relationship?
It also creates a lot of unnecessary complexity.
Constraints
===========
The fact that you *must* remove the foreign key constraints to make your table work should be a strong clue that this is a bad strategy.
Duplicates
==========
So if you do a join from the `image` table to the `generic_map.to_id` and find the image maps to a given value for `from_id`, such as 1234, how would you know if this value references a `customer` with id 1234, or a `product` with id 1234?
Normally a many-to-many mapping table has a UNIQUE constraint on the pair of columns that reference each entity. But in your `generic_map` can you have such a constraint?
What if both `customer` 1234 and `product` 1234 each want to map to the same image? Do you store the mapping row twice in the `generic_map` table? If so, what happens when you join from `customer` through `generic_map` to `image`? You'd get duplicates from the join.
Also, what if just the `customer` 1234 wants to reference a specific image, but the `product` with the same id 1234 does not want that image? How would you make it clear that the `product` 1234 should *not* join to that row in the `generic_map` table?
So you commented:
>
> I forgot to declare a `type` column in the `generic_table`, the `type` column is of type `ENUM('customer', 'product')`. Via this column, I could know a `generic.from_id` value of 1234 means customer id or product id.
>
>
>
But this leads into another problem...
Queries
=======
You must remember to put conditions on the extra `type` column on every query you run:
```
SELECT ...
FROM customer AS c
JOIN generic_map AS m ON c.id=m.from_id AND m.type='customer'
JOIN image AS i ON i.id = m.to_id
```
If you (or someone else on your team who maintains this code) forgets to include this condition, then you will get the strange effects I described above.
Performance
===========
Queries against smaller tables are usually faster. But now you are guaranteed to have a large table.
Suppose that, if you had separate tables, your customer_image table would have 10,000,000 rows and your product_image table 1,000 rows. With the combined table, you should be concerned that a query for product images needs to search through 10,001,000 rows instead of 1,000. Indexes can help, but it would still be better for performance if the tables were separate.
The above are just a few problems. They keep unraveling like a loose thread.
Do yourself a favor — just use two tables, one for each many-to-many relationship. It will make everything easier.
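As a rough sketch (table and column names assumed from the diagrams in the question), the two junction tables would simply be:
```
-- assuming customer(id), product(id) and image(id) already exist
CREATE TABLE customer_image (
    customer_id INT NOT NULL,
    image_id    INT NOT NULL,
    PRIMARY KEY (customer_id, image_id),               -- prevents duplicate mappings
    FOREIGN KEY (customer_id) REFERENCES customer(id),
    FOREIGN KEY (image_id)    REFERENCES image(id)
);

CREATE TABLE product_image (
    product_id INT NOT NULL,
    image_id   INT NOT NULL,
    PRIMARY KEY (product_id, image_id),
    FOREIGN KEY (product_id) REFERENCES product(id),
    FOREIGN KEY (image_id)   REFERENCES image(id)
);
```
Each table keeps its own foreign key constraints, and the composite primary key gives you the uniqueness guarantee discussed above.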
Upvotes: 3 [selected_answer]
|
2018/03/17
| 814 | 3,268 |
<issue_start>username_0: I'm submitting this question for two reasons: I'm having difficulties using the `DBreeze` database engine running under `Mono`, but I've found a workaround that may help others with this exact problem (I post the workaround as an answer); and if anyone knows a better solution, I'd be thankful for the help.
The problem is [DBreeze](https://github.com/hhblaze/DBreeze) is working properly on Windows, but on Linux with Mono right after initalization of the engine and the first insert it throws the following exception:
```
Unhandled Exception: DBreeze.Exceptions.DBreezeException: Getting table "@utt2"
from the schema failed! ---> DBreeze.Exceptions.TableNotOperableException:
DBreeze.Scheme ---> DBreeze.Exceptions.DBreezeException: Rollback of the table
"DBreeze.Scheme" failed! ---> DBreeze.Exceptions.DBreezeException: Restore
rollback file "./DB/_DBreezeSchema" failed! --->
System.EntryPointNotFoundException: FlushFileBuffers
```
The problem is in the `DBreeze/Storage/FSR.cs` file, because it tries to call
```
[System.Runtime.InteropServices.DllImport("kernel32.dll", ExactSpelling = true, SetLastError = true)]
private static extern bool FlushFileBuffers(IntPtr hFile);
```
but this is not supported in Mono.
Question is: How to properly flush the filebuffer/call the equivalent as kernel32.dll's `FlushFileBuffers()` to write the buffer contents to the disk under Mono?<issue_comment>username_1: My workaround is the following:
Since this method syncs the data from the OS file buffer to the hard drive (or any block device), the native Unix `fsync` call can do the same thing.
The workaround that worked for me was replacing the DllImport above with a custom function:
```
// [System.Runtime.InteropServices.DllImport("kernel32.dll", ExactSpelling = true, SetLastError = true)]
// private static extern bool FlushFileBuffers(IntPtr hFile);
private static bool FlushFileBuffers(IntPtr handle)
{
return Mono.Unix.Native.Syscall.fsync(handle.ToInt32()) == 0 ? true : false;
}
```
`fsync` returns `0` if there is no error and `-1` if there is an error; the handle must be cast from `IntPtr` to `int`.
During compilation the parameter `/r:Mono.Posix.dll` must be added in the command line to access `fsync` native syscall.
I'm not completely sure this trick works exactly as intended in the original code, because flushing the buffer has different levels depending on the OS, so please tell me if there is a better/proper solution for this problem, or whether the solution I've found is proper at all.
Upvotes: 1 <issue_comment>username_2: For .net40 and higher you can use a function.
```
#if NET40
public static void NET_Flush(FileStream mfs)
{
mfs.Flush(true);
}
#else
[System.Runtime.InteropServices.DllImport("kernel32.dll", ExactSpelling = true, SetLastError = true)]
private static extern bool FlushFileBuffers(IntPtr hFile);
public static void NET_Flush(FileStream mfs)
{
mfs.Flush();
IntPtr handle = mfs.SafeFileHandle.DangerousGetHandle();
if (!FlushFileBuffers(handle))
throw new System.ComponentModel.Win32Exception();
}
#endif
```
Upvotes: 0
|
2018/03/17
| 1,540 | 7,098 |
<issue_start>username_0: I want to do something like this:
```
statusReady: boolean = false;
jobsReady: boolean = false;
ready() {
return Promise.all([statusReady, jobsReady]);
}
```
...and the idea is basically so later I can do this:
```
this.ready().then(() => {
// Do stuff here when we're all ready
});
```
If everything is already true, I'd expect the promise to resolve immediately, and if anything is false, it waits for the statuses to be true, then resolves. I'd use the ready() function anywhere in which I need to make sure certain pieces are finished loading.<issue_comment>username_1: You can not use boolean in Promise.all() method. The [article](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) in the MDN clearly state that only promise iterable can be passed in Promise.all() method.
Instead, you can use two Promises that resolve when each value becomes true. From your code snippet I did not understand how those 2 values are going to change to true; if you can post how they change, we can help you shape the code properly.
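For example, a minimal sketch of that idea (the `resolveStatus`/`resolveJobs` names are hypothetical; call them from wherever each piece finishes loading):
```js
let resolveStatus, resolveJobs;
const statusReady = new Promise(resolve => { resolveStatus = resolve; });
const jobsReady = new Promise(resolve => { resolveJobs = resolve; });

function ready() {
  // resolves immediately if both promises are already resolved, otherwise waits
  return Promise.all([statusReady, jobsReady]);
}

// somewhere in the loading code:
resolveStatus();
resolveJobs();

ready().then(() => {
  // Do stuff here when we're all ready
});
```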
Upvotes: -1 <issue_comment>username_2: So if I am correct, you're asking how to create a promise that resolves as soon as a list of booleans are all true?
I created a function that does exactly that. It takes an object of booleans as its only arg, and returns a promise that resolves as soon as they are all true.
Here is the function:
```
function whenBoolListIsTruePromise (boolList) {
return new Promise(function (resolve, reject) {
// Check if all values of boolList are already true
var initializedAsTrue = true;
Object.values(boolList).forEach(function (currentBoolValue) {
if (!currentBoolValue) initializedAsTrue = false;
});
if (initializedAsTrue) {
resolve();
} else {
// The following watches changes to any of the bools in the boolList, and resolves the promise when all are true.
// 1. Copy boolList data to _actualData.
// 2. For each key-value pair in boolList, change it to a :set and :get property that accesses _actualData.
// 3. Every time a set is run, check if everything in actual data is now true. If it is, resolve promise.
var _actualData = Object.assign({}, boolList);
Object.entries(boolList).forEach(function ([boolName, boolValue]) {
Object.defineProperty(boolList, boolName, {
configurable: true,
set: function (newV) {
var allTrue = true;
_actualData[boolName] = newV;
Object.values(_actualData).forEach(function (currentBoolValue) {
if (!currentBoolValue) allTrue = false;
});
if (allTrue) {
// Remove :set and :get, bringing back to a regular simple object
Object.entries(_actualData).forEach(function ([boolName2, boolValue2]) {
Object.defineProperty(boolList, boolName2, {
configurable: true,
value: boolValue2,
});
});
resolve();
}
},
get: function () {
return _actualData[boolName];
}
});
});
}
});
}
```
You use this function by creating an object of booleans and passing it as an argument for this function. This function will return a promise that resolves when all of those booleans are set to true.
```
var myBoolList = {"statusReady": false, "jobsReady": false};
whenBoolListIsTruePromise(myBoolList).then(function () {
console.log("All values are now true!");
}, function (error) {
console.error(error);
});
myBoolList.statusReady = true;
myBoolList.jobsReady = true;
```
Here is a working example for you to look at:
```js
function whenBoolListIsTruePromise (boolList) {
return new Promise(function (resolve, reject) {
// Check if all values of boolList are already true
var initializedAsTrue = true;
Object.values(boolList).forEach(function (currentBoolValue) {
if (!currentBoolValue) initializedAsTrue = false;
});
if (initializedAsTrue) {
resolve();
} else {
// The following watches changes to any of the bools in the boolList, and resolves the promise when all are true.
// 1. Copy boolList data to _actualData.
// 2. For each key-value pair in boolList, change it to a :set and :get property that accesses _actualData.
// 3. Every time a set is run, check if everything in actual data is now true. If it is, resolve promise.
var _actualData = Object.assign({}, boolList);
Object.entries(boolList).forEach(function ([boolName, boolValue]) {
Object.defineProperty(boolList, boolName, {
configurable: true,
set: function (newV) {
var allTrue = true;
_actualData[boolName] = newV;
Object.values(_actualData).forEach(function (currentBoolValue) {
if (!currentBoolValue) allTrue = false;
});
if (allTrue) {
// Remove :set and :get, bringing back to a regular simple object
Object.entries(_actualData).forEach(function ([boolName2, boolValue2]) {
Object.defineProperty(boolList, boolName2, {
configurable: true,
value: boolValue2,
});
});
resolve();
}
},
get: function () {
return _actualData[boolName];
}
});
});
}
});
}
var myBoolList = {"statusReady": false, "jobsReady": false};
whenBoolListIsTruePromise(myBoolList).then(function () {
console.log("All values are now true!");
}, function (error) {
console.error(error);
});
myBoolList.statusReady = true;
myBoolList.jobsReady = true;
```
This function works by creating :set and :get accessors for each key in the boolList object. The :set function is run whenever any of the entries in the boolList object are set (changed). I have made it so the :set function checks if all of the booleans are true, and resolves the promise if they are.
Here is a helpful article I found that talks a bit about :get and :set
<https://javascriptplayground.com/es5-getters-setters/>
Upvotes: 0
|
2018/03/17
| 697 | 2,595 |
<issue_start>username_0: I'm currently using Twilio for an SMS to email integration.
The code Twilio provided needs to live on a public URL in order for the integration to work. There's no reason any human would need to access this URL.
Unfortunately, the code living on a public URL means that random web crawlers can and do load the page where it lives, which triggers a blank email to send. At least I think that's what's doing it?
I cannot use an if...else to see whether any fields are empty because the integration sends boilerplate labels like "body" and "message" automatically (thus, the fields never look empty).
So, I'm wondering if there's someplace in my file structure where I can hide this PHP so that the integration can work but no random pings to the URL will trigger the email.
Edit: this php currently lives in the /usr/share/nginx/html folder as my index.php file. I'm not super experienced and it took me a lot of experimenting to find a place to put it where it actually ran successfully...<issue_comment>username_1: you should store the file outside the webroot if possible.
otherwise...
Use .htaccess to `deny from all`
Upvotes: 0 <issue_comment>username_2: Twilio developer evangelist here.
You have a couple of options here.
You can [add HTTP basic authentication to your server](https://www.digitalocean.com/community/tutorials/how-to-set-up-password-authentication-with-apache-on-ubuntu-14-04). Then, change your Twilio number's webhook URL to include the username and password in the URL. Like:
```
https://username:password@example.com/index.php
```
Alternatively, within your code you can [validate that the request came from Twilio](https://www.twilio.com/docs/api/security#validating-requests) using the `X-Twilio-Signature` header. This would look a bit like this:
```
<?php
// NOTE: This example uses the next generation Twilio helper library - for more
require_once '/path/to/vendor/autoload.php';
use Twilio\Security\RequestValidator;
// Your auth token from twilio.com/console
$token = "<PASSWORD>";
$signature = $_SERVER["HTTP_X_TWILIO_SIGNATURE"];
$validator = new RequestValidator($token);
$url = $_SERVER['SCRIPT_URI'];
$postVars = $_POST;
if ($validator->validate($signature, $url, $postVars)) {
// This is from Twilio!
// The rest of your code to send the email goes here.
} else {
// This is not from Twilio. Return a 403 response
header('HTTP/1.1 403 Forbidden');
exit;
}
```
You can read more about [securing your webhook endpoints in the Twilio documentation](https://www.twilio.com/docs/api/security).
Upvotes: 1
|
2018/03/17
| 573 | 1,958 |
<issue_start>username_0: I want to get data from multiple APIs in Django using the `requests.get()` method. Normally we fetch data like this:
```
response = requests.get('http://freegeoip.net/json/')
```
Is it possible to get data from 3 different APIs in one call, like this?
```
response=requests.get('http://freegeoip.net/json/','http://api.example.com','http://api.anotherexample.com');
```
|
2018/03/17
| 654 | 2,124 |
<issue_start>username_0: I have a use case where I need to split a string. Below is the output of a program in a txt file. I need the VALUE part alone to be written to another txt file, to be read by another process.
**OUTPUT: "Id": "/Name/VALUE",**
I tried
```
$text.split('/')[1].split(' ')
```
but it resulted in giving me "Name"
On the whole, what am I doing is
```
$text = Get-Content C:\temp\file.txt
$text.split('/')[1].split(' ') | Out-File C:\temp\file1.txt
```<issue_comment>username_1: Do it step by step and look at the results, you'll see that
```
PS C:\> 'OUTPUT: "Id": "/Name/VALUE",'.Split('/')
OUTPUT: "Id": "
Name
VALUE",
```
The first line is index 0, the second line (Name) is index 1, the third line (VALUE",) is index 2.
You could get the VALUE by:
```
$text.Split('/')[2].Replace('",', '')
```
To take the Value line and remove the quote and comma.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Method 1 with array of delimiter:
```
$content.split(@('"', '/'))[5]
```
Method 2 with multi split:
```
$content.split('"')[3].Split('/')[2]
```
Upvotes: 0 <issue_comment>username_3: To complement [username_1's helpful answer](https://stackoverflow.com/a/49333525/45375) with solutions based only on PowerShell's own `-split` and `-replace` operators:
```
PS> ('OUTPUT: "Id": "/Name/VALUE"' -split '/')[-1] -replace '"'
VALUE
```
Array index `-1` extracts the *last* element from the array of `/`-separated tokens that `-split '/'` outputs, and `-replace '"'` then removes any `"` instances from the result (in the absence of a replacement string, replaces them with the empty string, thereby effectively removing them).
---
With the help of a *regex* (regular expression), using `-replace` by itself is sufficient:
```
PS> 'OUTPUT: "Id": "/Name/VALUE"' -replace '.*/(.*?)"', '$1'
VALUE
```
* `.*/` greedily matches everything up to the *last* `/`
* `(.*?)"` non-greedily matches everything until a `"` is encountered; since `.*?` is enclosed in `(...)`, it is a so-called capture group that makes the sub-match available as `$1` in the replacement string.
Upvotes: 0
|
2018/03/17
| 668 | 2,237 |
<issue_start>username_0: When I import a module in a app.ts script, the '.js' file extension is missing in the import line of the compiled js file.
app.ts `import {ModuleA} from './ModuleA'`
compiled app.js `import {ModuleA} from './ModuleA';`
I include it in the html file like this
But the browser can't find the module 'ModuleA'.
It only works when I import it like this: `import {ModuleA} from './ModuleA.js'`
But I want to import '.ts' module files, not '.js' module files.
I would have hoped the ts compilation would add the '.js' extension to the import line.
Any suggestions?
|
2018/03/17
| 662 | 1,842 |
<issue_start>username_0: While trying to configure odoo 10 with Pycharm, I receive the following error. However running it from terminal works fine.
My openerp-server.conf file:
```
[options]
; This is the password that allows database operations:
admin_passwd = <PASSWORD>
db_host =False
db_port =False
db_user = odoo
db_password = <PASSWORD>
addons_path = odoo/addons,odoo/addons_st
xmlrpc_port = 8071
```
Error in Pycharm:
```
/usr/bin/python3.5 /home/odoo/workspace/odoo-10c/odoo-server -c openerp- server.conf
Traceback (most recent call last):
File "/home/odoo/workspace/odoo-10c/odoo-server", line 2, in \_\_import\_\_('pkg\_resources').declare\_namespace('odoo.addons')
File "/usr/local/lib/python3.5/dist-packages/setuptools-18.1-py3.5.egg/pkg\_resources/\_\_init\_\_.py", line 2203, in declare\_namespace
File "/usr/local/lib/python3.5/dist-packages/setuptools-18.1-py3.5.egg/pkg\_resources/\_\_init\_\_.py", line 2219, in declare\_namespace
File "/usr/local/lib/python3.5/dist-packages/setuptools-18.1-py3.5.egg/pkg\_resources/\_\_init\_\_.py", line 2186, in \_handle\_ns
File "", line 388, in \_check\_name\_wrapper
File "", line 809, in load\_module
File "", line 668, in load\_module
File "", line 265, in \_load\_module\_shim
File "", line 626, in \_exec
File "", line 665, in exec\_module
File "", line 222, in \_call\_with\_frames\_removed
File "/home/odoo/workspace/odoo-10c/odoo/\_\_init\_\_.py", line 57, in
import addons
ImportError: No module named 'addons'
```<issue_comment>username_1: Change your project interpreter to **Python 2.7**; Odoo 10 can't run on **Python 3.x**.
Python 3.x is only for Odoo 11 and later versions.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Odoo 8 supports Python 2.7 (e.g. 2.7.16) with pip.
Odoo 9 also supports Python 2.7.
Odoo 10 supports Python 2.7 as well.
Upvotes: 0
|
2018/03/17
| 1,159 | 4,036 |
<issue_start>username_0: I have two tables in MySQL where one is referencing another.
The first one is the `users` table
[](https://i.stack.imgur.com/hlpWz.jpg)
The second table is the `event` table which contains the `creator_id` referencing the `id` in the users table. so `creator_id` is the foreign key.
[](https://i.stack.imgur.com/2fFIM.jpg)
Now my problem is how to get the name of the user that owns each row in the `event` table using Laravel.
I'm clueless about how to do this. And the laravel doc I've read so far is not helping.
This is the model for the `event` table.
```
php
namespace App;
use Illuminate\Database\Eloquent\Model;
use Auth;
use App\signupModel;
class eventModel extends Model
{
//
public $table = "event";
protected $fillable = ['title', 'description', 'event_flier', 'event_logo', 'org_name', 'org_logo', 'free', 'state', 'city' ];
public function getUser(){
$id = Auth::user()->id;
return users::where('id', $this->id);
}
}
?>
```
The model for the users is:
```
php
namespace App;
use Illuminate\Database\Eloquent\Model;
class signupModel extends Model
{
//
public $table = "users";
protected $primaryKey = 'id';
protected $fillable = [
'first_name', 'last_name', 'email', 'password', 'gender', 'phoneNumber', 'twitter', 'facebook', 'instagram', 'dob'
];
}
?>
```
I just want a way where each user can see his data from the `event` table when logged in using laravel.
Please help. Thanks.<issue_comment>username_1: Here `belongsTo` is used to return the creator of the event.
```
namespace App;
use Illuminate\Database\Eloquent\Model;
use Auth;
use App\signupModel;
class eventModel extends Model
{
public $table = "event";
protected $fillable = ['title', 'description', 'event_flier', 'event_logo', 'org_name', 'org_logo', 'free', 'state', 'city' ];
public function getUser(){
return $this->belongsTo('App\signupModel', 'creator_id');
}
}
?>
```
In your controller, you can retrieve the creator of an event as follows
```
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\eventModel;
use App\signupModel;
class HomeController extends Controller
{
/**
 * @return \Illuminate\Http\Response
*/
public function index()
{
$event = eventModel::find(1);
$creator = $event->getUser; // returns the related user instance
dump($creator); // for check result
echo $creator->first_name; //print firstname of creator
}
}
```
Upvotes: 1 <issue_comment>username_2: You need to do this in your User model:
```
public function getEvents(){
return $this->hasOne('Your event model', 'creator_id');
}
```
and import the model in your controller and try it like this:
```
$events=User::with('getEvents')->get();
```
Or if you want to fetch one record you should try this in your controller
```
$events=User::find('Your Auth Id');
$creators=$events->getEvents;
```
Or you should try this one also
```
$users = DB::table('users')
->leftJoin('events', 'users.id', '=', 'events.creator_id')->where('users.id','=','Your user id')
->get();
```
Upvotes: 2 <issue_comment>username_3: >
> An Event belongs to its Owner(User)
>
>
>
If you follow the core Laravel syntax, the Event model must look like this:
```
class Event extends Model
{
public function user()
{
return $this->belongsTo(User::class);
}
}
```
And the User model must be like:
```
class User extends Model
{
public function event()
{
return $this->hasOne('App\Event');
}
}
```
And if so, you can use:
`$events = App\User::find(1)->event;`
This is a common case and you may use your logic according to your relationship type. Here is the [doc](https://laravel.com/docs/5.2/eloquent-relationships).
Upvotes: 2 [selected_answer]
|
2018/03/17
| 1,272 | 4,894 |
<issue_start>username_0: I've been really digging [FastJsonAPI](https://github.com/Netflix/fast_jsonapi). Works great.
Is it possible to set the `set_type` [serializer definition](https://github.com/Netflix/fast_jsonapi#serializer-definition) on a per-object basis?
i.e. I'm using Rails STI (single table inheritance). I have a mixed set of base objects and derivative objects, and I'd like to have different types for each.
This is a fake example of JSON output I would like:
```
{
"data": [
{
"attributes": {
"title": "Generic Vehicle"
},
"id": "1",
"type": "vehicle"
},
{
"attributes": {
"title": "Fast Car"
},
"id": "2",
"type": "car"
},
{
"attributes": {
"title": "Slow Car"
},
"id": "3",
"type": "car"
},
{
"attributes": {
"title": "Motorcycle"
},
"id": "4",
"type": "motorcycle"
}
]
}
```
I do have an object `type` attribute I can use, of course, since I'm using STI. But *I don't want to* use it as an attribute: I want to use it as the outside `type`, as in the JSON above.
serializer(s):
```
class VehicleSerializer
include FastJsonapi::ObjectSerializer
set_type :vehicle # can I tie this to individual objects, right here?
attributes :title
end
class CarSerializer < VehicleSerializer
set_type :car
attributes :title
end
class MotorcycleSerializer < VehicleSerializer
set_type :motorcycle
attributes :title
end
class TruckSerializer < VehicleSerializer
set_type :truck
attributes :title
end
```
You see, I have some controllers which only pull from a single object `type`, and for them, the single `CarSerializer` or whatever, works great. The trouble is when I use a controller like this, that aggregates multiple vehicle types in the index method:
```
require_relative '../serializers/serializers.rb'
class MultiVehiclesController < ApplicationController
def index
@vehicles = Vehicle.where(type: ["Car", "Motorcycle"])
# perhaps there's a way to modify the following line to use a different serializer for each item in the rendered query?
render json: VehicleSerializer.new(@vehicles).serializable_hash
end
def show
@vehicle = Vehicle.find(params[:id])
# I suppose here as well:
render json: VehicleSerializer.new(@vehicle).serializable_hash
end
end
```<issue_comment>username_1: This was not possible in version 1 of FastJsonAPI, which I was using for this. [It is now apparently possible](https://github.com/Netflix/fast_jsonapi/issues/126) as of [version 1.2](https://github.com/Netflix/fast_jsonapi/tree/dev#customizable-options) (although I have not yet tested it).
Upvotes: 2 [selected_answer]<issue_comment>username_2: I also have STI and want to use the proper `type` when I render a collection of different classes. I ended up with a custom serializer that overrides `hash_for_collection`. Inside that method it is possible to look up the serializer for each collection item and call its `record_hash`.
That is a bit slower than the fast_jsonapi / jsonapi-serializer implementation, but now each item in the `data` collection has the proper `type`.
```ruby
class MixedCollectionSerializer < ApplicationSerializer
include SerializerResolverHelper
def hash_for_collection
serializable_hash = {}
data = []
included = []
fieldset = @fieldsets[self.class.record_type.to_sym]
@resource.each do |record|
record_klazz = jsonapi_serializer_class_resolver(record,false)
data << record_klazz.record_hash(record, fieldset, @includes, @params)
included.concat record_klazz.get_included_records(record, @includes, @known_included_objects, @fieldsets, @params) if @includes.present?
end
serializable_hash[:data] = data
serializable_hash[:included] = included if @includes.present?
serializable_hash[:meta] = @meta if @meta.present?
serializable_hash[:links] = @links if @links.present?
serializable_hash
end
end
```
```ruby
module SerializerResolverHelper
def jsonapi_serializer_class_resolver(resource, is_collection)
if resource.respond_to?(:first)
# if it is a collection and it contains different types
first_item_class = resource.first.class
return MixedCollectionSerializer if resource.any? { |item| item.class != first_item_class }
end
JSONAPI::Rails.serializer_class(resource, is_collection)
rescue NameError
# if use STI it is necessary to resolve serializers manually
resource = resource.first if is_collection && resource.respond_to?(:first)
resource_name = case
when resource.is_a?(Vehicle) then resource.type.to_s
# another STI parent classes ...
end
"#{resource_name}Serializer".constantize if resource_name.present?
end
end
```
Upvotes: 0
|
2018/03/17
| 565 | 1,979 |
<issue_start>username_0: I have a markdown (MD) page on GitHub in a public repository where I want to show some additional information when someone clicks on a hyperlink. Ideally something similar to [Bootstrap's Popover](https://getbootstrap.com/docs/3.3/javascript/#popovers).
Is this possible and if so, how?<issue_comment>username_1: Not directly with GitHub Pages, which only supports [GFM (GitHub Flavored Markdown Spec)](https://github.github.com/gfm/).
What you can do is use your GitHub repo with those markdown pages as sources for generating a static website (where you can add any additional information to the HTML code *you* are generating).
A [static website generator like Hugo](https://gohugo.io/) can help.
See "[Using a static site generator other than Jekyll](https://help.github.com/articles/using-a-static-site-generator-other-than-jekyll/)" for the process.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **Not exactly a [Bootstrap's Popover](https://getbootstrap.com/docs/3.3/javascript/#popovers) but** you can create **simple *"Tooltips"*** over links both here *(at stack-overflow)* and at GitHub, like:
```markdown
[Hover your mouse here to see the tooltip](https://stackoverflow.com/a/71729464/11465149 "This is a tooltip :)")
```
[Hover your mouse here to see the tooltip](https://stackoverflow.com/a/71729464/11465149 "This is a tooltip :)")
**Besides links**, in github *(only)* you can also add **"tooltips" on plain text** like:
```markdown
[id1]: ## "your hover text"
This is a [hover text][id1] example.
```
```markdown
This is a [hover text](## "your hover text") example.
```
[^ source](https://github.com/BoostIO/BoostNote-Legacy/issues/3579#issuecomment-855449583 "Soucrce of the examples above")
Upvotes: 4 <issue_comment>username_3: Not exactly. Surely this isn't web browser native... Chromium Android Mobile doesn't show the hover text. Could you post another answer in the feed for archived reference.
Upvotes: -1
|
2018/03/17
| 840 | 2,824 |
<issue_start>username_0: I have this bit of code to check if my MongoClient is already connected:
```
connect(): Promise {
const self = this;
if (this.client && this.client.isConnected()) {
return Promise.resolve(null);
}
return MongoClient.connect(this.uri).then(function (client) {
const db = client.db('local');
self.client = client;
self.coll = db.collection('oplog.rs');
return null;
});
}
```
The **problem** is that the isConnected method takes some mandatory arguments:
```
isConnected(name: string, options?: MongoClientCommonOption): boolean;
```
here is the info:
<http://mongodb.github.io/node-mongodb-native/3.0/api/MongoClient.html#isConnected>
so do I need to pass anything other than the database name? **What if I don't know what database it might be connected to?**
When I debug at runtime, I only see an options argument (just 1 argument, not two):
[](https://i.stack.imgur.com/N4MYa.png)
(look at the isConnected method on the far right in the image).<issue_comment>username_1: The documentation is incorrect. Looking at the [source code](http://mongodb.github.io/node-mongodb-native/3.0/api/lib_mongo_client.js.html) (line 395), the only supported parameter is an optional `options` object.
```
MongoClient.prototype.isConnected = function(options) {
options = options || {};
if (!this.topology) return false;
return this.topology.isConnected(options);
};
```
So ignore the docs and don't pass a database name.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I have a suggestion about that:
```
const MongoClient = require('mongodb').MongoClient
, async = require('async')
const state = {
db: null,
mode: null,
}
// In the real world it will be better if the production uri comes
// from an environment variable, instead of being hard coded.
const PRODUCTION_URI = 'mongodb://127.0.0.1:27018/production'
, TEST_URI = 'mongodb://127.0.0.1:27018/test'
exports.MODE_TEST = 'mode_test'
exports.MODE_PRODUCTION = 'mode_production'
// To connect to either the production or the test database.
exports.connect = (mode, done) => {
if (state.db) {
return done()
}
const uri = mode === exports.MODE_TEST ? TEST_URI : PRODUCTION_URI
MongoClient.connect(uri, (err, db) => {
if (err) {
return done(err)
}
state.db = db
state.mode = mode
done()
})
}
```
You can call the database in your models as below:
```
const DB = require('../db')
const COLLECTION = 'comments'
let db
// Get all comments
exports.all = (cb) => {
db = DB.getDB()
db.collection(COLLECTION).find().toArray(cb)
}
...
```
See the full node-testing project:
<https://github.com/francisrod01/node-testing>
Upvotes: 0
|
2018/03/17
| 2,082 | 6,822 |
<issue_start>username_0: I have a simple box for a project image, title, and description. On desktop, I have some jquery that brings up the project description on mouse enter. On mobile, I want this to just be a block of text that is always visible instead. What I've tried so far is to just "cancel" the javascript on mobile, but that makes the description entirely invisible at all times until the display is resized.
How can I change the `.text-container` (the description box) to always be visible on mobile, but maintain my mouse behavior on desktop?
Here is my code:
```js
$(document).ready(function(){
// coincides with mobile screen width check in media query in style_projects.css
var smallScreenWidth = 480;
$(".projectbox").mouseenter(
function(){
if ($(window).width() > smallScreenWidth)
$("#" + this.id + " .text-container").css("visibility", "visible");
}
);
$(".projectbox").mouseleave(
function(){
if ($(window).width() > smallScreenWidth)
$("#" + this.id + " .text-container").css("visibility", "hidden");
}
);
});
```
```css
body
{
font-family: Georgia, serif;
}
.projectbox
{
background-color: greenyellow;
width: 240px;
height: 181px;
cursor: pointer;
margin-bottom: 50px;
margin-left: auto;
margin-right: auto;
}
.projectbox img
{
height: 160px;
width: 240px;
position: absolute;
}
.text-container
{
background-color: rgba(50,50,50,0.5);
position: absolute;
visibility: hidden;
height: 160px;
width: 240px;
padding: 10px 10px 10px 10px;
}
@media only screen and (max-device-width: 480px)
{
.text-container
{
visibility: visible;
}
}
.titlebox
{
background-color: red;
padding: 10px 10px 10px 10px;
width: 100%;
height: 40px;
}
```
```html
Project1

This is a sample project description. It's not a bad description, but it's short.
```<issue_comment>username_1: Have you tried with `max-width` instead of `max-device-width`?
`@media only screen and (max-width: 480px)`
If you are debugging the issue on your laptop, it would be better to use `max-width` since you might not see the actual css results on a device.
<http://www.javascriptkit.com/dhtmltutors/cssmediaqueries2.shtml>
* max-width is the width of the target display area, e.g. the browser;
* max-device-width is the width of the device's entire rendering area, i.e. the actual device screen.
If you are using max-device-width, resizing the browser window on your desktop won't switch the CSS to a different media query setting.
If you are using max-width, resizing the browser on your desktop will switch between media query settings, and you might be shown the styling intended for mobiles, such as touch-friendly menus.
Upvotes: 2 <issue_comment>username_2: You can simply set **visibility: visible !important;** in your media-query CSS.
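A minimal sketch based on the CSS already in the question:
```css
/* keep the description visible on small screens, overriding the hover-driven rules */
@media only screen and (max-width: 480px) {
  .text-container {
    visibility: visible !important;
  }
}
```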
Upvotes: 0 <issue_comment>username_3: I think you want something like this:
```js
$(document).ready(function(){
// coincides with mobile screen width check in media query in style_projects.css
var smallScreenWidth = 480;
$(".projectbox").mouseenter(
function(){
if ($(window).width() > smallScreenWidth)
$("#" + this.id + " .text-container").css("visibility", "visible");
if ($(window).width() < smallScreenWidth)
$("#" + this.id + " .text-container-mobile").css("visibility", "visible");
}
);
$(".projectbox").mouseleave(
function(){
if ($(window).width() > smallScreenWidth)
$("#" + this.id + " .text-container").css("visibility", "hidden");
if ($(window).width() < smallScreenWidth)
$("#" + this.id + " .text-container-mobile").css("visibility", "hidden");
}
);
});
```
```css
body
{
font-family: Georgia, serif;
}
.projectbox
{
background-color: greenyellow;
width: 240px;
height: 181px;
cursor: pointer;
margin-bottom: 50px;
margin-left: auto;
margin-right: auto;
}
.projectbox img
{
height: 160px;
width: 240px;
position: absolute;
}
.text-container
{
background-color: rgba(50,50,50,0.5);
position: absolute;
visibility: hidden;
height: 160px;
width: 240px;
padding: 10px 10px 10px 10px;
}
.text-container-mobile
{
background-color: rgba(50,50,50,0.5);
position: absolute;
visibility: hidden;
height: 160px;
width: 240px;
padding: 10px 10px 10px 10px;
}
@media only screen and (max-device-width: 480px)
{
.text-container
{
visibility: visible;
}
}
.titlebox
{
background-color: red;
padding: 10px 10px 10px 10px;
width: 100%;
height: 40px;
}
```
```html
Project1

This is a sample project description. It's not a bad description, but it's short.
mobile text
```
Upvotes: 1 <issue_comment>username_4: Please use the class `hidden-xs` to hide the description on mobiles. Similar classes (`hidden-md`, `hidden-lg`, etc.) are also available.
Upvotes: 1 <issue_comment>username_5: Try [this](https://jsfiddle.net/pankaj_bhr/h0khxr50/)
**HTML:**
```
Project1

This is a sample project description. It's not a bad description, but it's short.
```
**CSS:**
```
body {
font-family: Georgia, serif;
}
.projectbox
{
background-color: greenyellow;
width: 240px;
height: 181px;
cursor: pointer;
margin-bottom: 50px;
margin-left: auto;
margin-right: auto;
}
.projectbox img {
width: 100%;
height: 160px;
object-fit: cover;
}
.text-container {
background-color: rgba(50,50,50,0.5);
position: absolute;
top: 0;
left: 0;
height: 160px;
width: 100%;
margin: 0;
z-index: 999;
padding: 10px;
margin-top: 20px;
opacity: 0;
transition: all ease .3s;
}
.projectbox .block {
position: relative;
overflow: hidden;
}
.projectbox .block:hover .text-container {
margin-top: 0px;
opacity: 1;
}
.titlebox {
background-color: red;
padding: 10px 10px 10px 10px;
width: 100%;
height: 40px;
box-sizing: border-box;
}
@media only screen and (max-device-width: 480px)
{
.text-container {
opacity: 1;
margin: 0;
}
}
```
You can achieve this without using JavaScript code; I have made some CSS changes for animation effects and minor improvements.
Hope this helps you.
Upvotes: 2 [selected_answer]
|
2018/03/17
| 418 | 1,607 |
<issue_start>username_0: When I run this cpp code in devC++:
```
#include <iostream>
using namespace std;

void getdata();
void dis();

void getdata()
{
    int radius;
    cout<<"\n enter radius of circle:-";
    cin>>radius;
}

void dis()
{
    int rad;
    cout<<"\n num is "<<rad;
}

int main()
{
    getdata();
    dis();
    return 0;
}
```
My output shows:
```
enter radius of circle:-15
radius is 15
```
my question is:
`radius` and `rad` are local to their respective functions, so how does `rad` end up with the same value as `radius`? They are in different functions.
Can someone kindly explain what is happening?<issue_comment>username_1: **Undefined behavior.**
When `getdata()` is called, `radius` occupies a particular block of memory on the stack, and is populated with the user's input. When `getdata()` exits, that memory block becomes available for later reuse, and its content is not cleared.
When `dis()` is then called afterwards, `rad` is not being initialized, so it picks up whatever random value was already present in the memory block that it occupies.
Most likely, what is happening is that `rad` happens to occupy the same memory block that `radius` had previously occupied. That is why you see the same value. But this is not guaranteed, so don't rely on it. Add some additional variables above either `radius` or `rad` and you will see different behavior.
Always initialize your variables before you perform operations that are dependent on them.
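A minimal sketch of the usual fix: pass the value around explicitly instead of relying on leftover stack contents.
```
#include <iostream>
using namespace std;

// Return the value instead of keeping it in a local that dies with the function.
int getdata()
{
    int radius;
    cout << "\n enter radius of circle:-";
    cin >> radius;
    return radius;
}

// Receive the value explicitly as a parameter.
void dis(int rad)
{
    cout << "\n radius is " << rad;
}

int main()
{
    dis(getdata());
    return 0;
}
```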
Upvotes: -1 <issue_comment>username_2: If a variable is uninitialized it will hold a **garbage value**!
In your case `rad` will print a garbage value, and it may differ every time you run the program.
Upvotes: -1
|
2018/03/17
| 909 | 2,729 |
<issue_start>username_0: So im trying to accept input of an RGB color values,which means they have to be integers within 0-255,such as:
>
> 123,245,230
>
>
>
but I want to make sure that they have formatted it correctly, so I'm taking the input as a string and I'm trying to force it into a list.
My original solution was
>
>
> ```
> colorList=colorListString.split(",")
> for i in range(3):
> colorList[i]=int(colorList[i])
> colorMatrix+=[colorList]
>
> ```
>
>
but this doesn't make sure that there are always 3 values, so I complicated it to first make sure that the input has 3 values, determined with
>
>
> ```
> while colorListString.count(",") !=2:
> print("Color number ",x+1,": ")
> colorListString=input()
>
> ```
>
>
but now I'm running into the problem that I don't know how to make sure that the three values are indeed integers, and how to keep that check neatly within the while loop.<issue_comment>username_1: Regex should help.
```
import re
s = "123,245,230"
if re.match("^\d{3},\d{3},\d{3}$", s):
#process
```
Upvotes: 2 <issue_comment>username_2: I'm not clear on what you're trying to do with the while loop.
Also, you could check that each value is a valid RGB value as per below:
```
colorListString = '123,245,230'
colorMatrix = []
colorList =colorListString.split(",")
if len(colorList)==3:
for i in range(3):
val = int(colorList[i])
if val <= 255 and val >= 0:
colorList[i] = val
colorMatrix +=[colorList]
print(colorMatrix)
```
Upvotes: 0 <issue_comment>username_3: If you're trying to force an exception to be thrown if they enter more then 3 numbers, there are a few ways to do it:
```
>>> colorListString = '255, 255, 255, 255'
>>> # Unpacking
>>> r, g, b = colorListString.split(",")
ValueError: too many values to unpack (expected 3)
>>> # maxsplit argument to split
>>> colorList = colorListString.split(",", 2)
>>> for i in range(3): colorList[i]=int(colorList[i])
ValueError: invalid literal for int() with base 10: ' 255, 255'
>>> # Just check it manually
>>> colorList = colorListString.split(",")
>>> if len(colorList) != 3: raise ValueError('RGB please')
ValueError: RGB please
```
No need to get any fancier, like counting the commas before the split or using a regex.
Upvotes: 1 [selected_answer]<issue_comment>username_4: Simple ***1-liner*** without `regex`:
```
s = "123,245,230"
if all(x <= 3 for x in [len(i) if int(i) >= 0 and int(i) <= 255 else 4 for i in s.split(',')]):
# process
```
This works with strings like `2,123,220`; you don't need to pad to `002,123,220`.
It also checks that each comma-separated number in the string is between `0` and `255`.
Upvotes: 0
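Pulling the checks from the answers above together, here is a minimal sketch of a single validator (the helper name `parse_rgb` is made up for illustration) that enforces the count, the integer conversion and the 0-255 range in one place:
```
def parse_rgb(text):
    """Return [r, g, b] as ints, or raise ValueError with a reason."""
    parts = text.split(",")
    if len(parts) != 3:
        raise ValueError("Provide exactly three comma-separated values")
    values = []
    for part in parts:
        value = int(part.strip())          # raises ValueError for non-numbers
        if not 0 <= value <= 255:
            raise ValueError("Each value must be between 0 and 255")
        values.append(value)
    return values

colorListString = input("Color number 1: ")
try:
    colorList = parse_rgb(colorListString)
except ValueError as err:
    print(err)
```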
|
2018/03/17
| 782 | 2,403 |
<issue_start>username_0: I would like to download Wikidata for a single entity. I know I can achieve using the URL, for example:
`https://www.wikidata.org/wiki/Special:EntityData/Q42.rdf`
Will give me Wikidata for **<NAME>** in RDF format.
But this data is fully, meaning complete with meta-data such as qualifiers and references. I am interested in primary data only.
Actually I am working on RDF Reification, and for that I need some sample non-RDF data which I can test my program on (like truthy Wikidata).
I do not wish to download entire Wikidata dumps (which I know are available in truthy format).
|
2018/03/17
| 506 | 1,502 |
<issue_start>username_0: I have following data in my table
```
column1 count1
1 2
2 3
2 5
3 4
3 1
4 3
5 4
6 7
7 3
8 0
9 2
10 1
```
Following is the output I want:
```
id sum(count1)
1 2
2 8
3+ 25
```
Following is the query I am using for this :
```
SELECT column1 AS id,sum(count1) FROM test
WHERE column1 < 3
GROUP BY id
UNION
SELECT '3+' AS id,sum(count1) FROM test
WHERE column1 >= 3
GROUP BY id
```
This is a rather inefficient way because we scan the table twice. Is there a better way of doing this?<issue_comment>username_1: You just need a simple `GROUP BY` clause with a *conditional* `case` expression:
```
select
case when column1 >= 3 then 3 else column1 end AS id, sum (count1) count1
from table t
group by case when column1 >= 3 then 3 else column1 end
```
If you also need the literal `3+` label from your desired output, you can wrap this query in a subquery or use a `case` expression that returns strings instead of numbers.
Upvotes: 2 [selected_answer]<issue_comment>username_2: A simple formulation in Impala is:
```
select least(column1, 3) as id, sum(count1) as count1
from table t
group by least(column1, 3)
order by min(id);
```
This does not return the `3+`. For that, we need to be careful about types:
```
select (case when column1 < 3 then cast(column1 as string) else '3+' end) as id, sum(count1)
from table t
group by (case when column1 < 3 then cast(column1 as string) else '3+' end)
order by min(id);
```
I also find `order by min(id)` a convenient way to get the result set in the right order.
Upvotes: 0
|
2018/03/17
| 1,613 | 6,108 |
<issue_start>username_0: In Java Precisely 3rd Ed., there's the following code snippet:
```
BiConsumer<String[], Comparator<String>> arraySorter = Arrays::<String>sort;
```
However, I noticed that even when I leave out the explicit type argument after `::`, the method reference is still valid (which makes sense due to the type parameters of `BiConsumer`).
However, I'm pretty confused about whether there are cases in which the explicit type argument (`::<T>`) would be *necessary* in a method reference, and if so, an example would be very helpful.<issue_comment>username_1: Java 8 implements [Generalized Target-Type Inference (JEP 101)](http://openjdk.java.net/jeps/101) that allows the compiler to infer the *type parameter of a generic method.* In your example, the Java 8 compiler infers the type parameter of the method `sort` from the right-hand side of the assignment.
[JEP 101](http://openjdk.java.net/jeps/101) also proposed generalized target-type inference for chained method calls, but it was not implemented because of the complexity that it would have introduced to the inference algorithm (discussed [here](https://stackoverflow.com/questions/19934204/java-8-and-generalized-target-type-inference) and [here](http://mail.openjdk.java.net/pipermail/lambda-dev/2013-July/010531.html)). So, *chained generic-method calls* is an example in which the type parameter of a generic method cannot be inferred.
Consider the following code snippet:
```
class Main {
    String s = MyList.nil().head(); // Incompatible types. Required: String. Found: Object.

    static class MyList<E> {
        private E head;
        static <T> MyList<T> nil() { return new MyList<T>(); }
        E head() { return head; }
    }
}
```
The compiler fails to infer the type parameter for the generic method `nil()` in `String s = MyList.nil().head()`. So, we have to provide more information to the inference algorithm, either by adding a type parameter
```
String s = MyList.<String>nil().head();
```
or by splitting the chained calls
```
MyList<String> ls = MyList.nil();
String s = ls.head();
```
*Note:* The chained-calls example does not contain a method reference to a generic method (`::<>` syntax) as in the original question, but the inference technique invoked in both examples is the same. Therefore, the limitations of the inference are also the same.
Upvotes: 2 <issue_comment>username_2: This is *type inference* in action, in the majority of the cases the compiler will infer the types for you and thus meaning you need not explicitly provide them.
However, there are certain circumstances in which you need to manually provide type hints.
Anyhow, Java 8 enhances the inference of generic arguments.
So, doing the following:
```
BiConsumer<String[], Comparator<String>> arraySorter = Arrays::sort;
```
is completely valid due to type inference.
A couple of examples which I can think of right now that would not have worked in Java-7 but works in Java-8 is something like:
```
void exampleMethod(List<Person> people) {
// do logic
}
exampleMethod(Collections.emptyList())
```
Another example:
```
someMethodName(new HashMap<>());
...
void someMethodName(Map<String, List<String>> values);
```
You were required to explicitly provide the type arguments previously.
Also, because of the aforementioned type inference, it's the exact reason why we can now do something like:
```
...
...
.collect(Collectors.toList());
```
instead of this guy:
```
...
...
.collect(Collectors.toList());
```
---
Whether you should provide type arguments explicitly or not is a matter of preference in some cases and in others, you're forced to do so in order to help the compiler do its work.
Upvotes: 3 <issue_comment>username_3: I figured Java 10's local variable type inference (`var name = ...;`) would be the answer to this conundrum. Instead of the destination variable type providing the type for the method reference, the right-hand-side would need to fully specify the type, necessitating the type parameter (`::`) on the method reference.
First thought out the gate ...
```
var arraySorter = Arrays::sort;
```
... but method references, by themselves, do not define a type. They need to be translated to a functional object by the compiler, and the compiler won't search for known functional interfaces looking for the appropriate type, even if there was exactly one.
---
Next thought was to use a method reference as an argument to a method which returns a type based on the method's argument.
```
class Spy {
    static <T> Function<T, T> f2(Function<T, T> f) {
        return f.andThen(f);
    }
    static <T> T identity(T t) {
        return t;
    }
}
```
Using this, we can create our local variable passing a method reference to our method:
```
Function<Object, Object> double_identity = f2(Spy::<Object>identity);
```
As expected, we can remove the explicit type argument after `::`
```
Function<Object, Object> double_identity = f2(Spy::identity);
```
Unexpectedly, local variable type inference is fine with it.
```
var double_identity = f2(Spy::identity); // Infers Function<Object, Object>!
Object obj = null;
double\_identity.apply(obj);
```
But the real surprise comes when use the method reference type to override it.
```
var double_identity = f2(Spy::<Double>identity); // Error: Double != Object
```
After a bit of fighting, I figured out why. We have to apply the type to the `f2` method itself:
```
var double_identity = Spy.<Double>f2(Spy::identity); // Works.
```
In retrospect, this makes some sense. The type of the variable generally provides context for the outer function. Assigning the result to a `Function<Double, Double>` variable lets the compiler infer the type argument of `f2(...)`, which then passes that type on to the arguments. With `var name = ...`, without an explicit type, the only type it has available is `Object`, so the compiler infers `Spy.<Object>f2(...)`, and then determines the argument type must be a `Function<Object, Object>`.
Unfortunately, it doesn't seem to infer from the inside out, so `Spy::<Double>identity` doesn't cause the function to be inferred as `Spy.<Double>f2(...)` and the variable as `Function<Double, Double>`. Maybe Java 11? Maybe it would break too much, and can't work.
It does, however, put an end to my attempts to abuse `var name = ...;` to solve the OP's conundrum.
---
Many thanks to @Eugene for critiquing my previous attempts prior to Java 10's release.
Upvotes: 4 [selected_answer]
|
2018/03/17
| 657 | 2,092 |
<issue_start>username_0: I have a process `MYPID=18686` that I am monitoring with top.
```
top -b -n 1000 -d 5 -p $MYPID | tail -1 > log.txt
```
When the process dies, the output file is empty.
My end goal is just to record cpu usage over the lifetime of that process. Specifying `-n 1000 -d 5` is a cheap workaround that runs top for the expected lifetime of the process. The `tail -1` was to eliminate continually recording the top headers.
The answers to this question were helpful, but not a solution.
[How to capture the output of a top command in a file in linux?](https://stackoverflow.com/questions/11729720/how-to-capture-the-output-of-a-top-command-in-a-file-in-linux)<issue_comment>username_1: `tail -1` does not produce the last line of each screen of output that `top` produces. It produces only the last line of *all* the lines.
Try:
```
top -b -n 1000 -d 5 -p "$MYPID" | grep "$MYPID" > log.txt
```
This will collect into `log.txt` all the lines that mention the PID but none of the headers (assuming the PID doesn't appear on a header).
Note that `grep` is usually buffered. This means that, if you check on `log.txt` during a run, it might be out of date. With GNU `grep`, you can keep `log.txt` up to date with the `--line-buffered` option:
```
top -b -n 1000 -d 5 -p "$MYPID" | grep --line-buffered "$MYPID" > log.txt
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use the following options to save the output in a file:
* -b : batch operation mode
* -n : number of iterations
* -d : delay (the interval between iterations; not needed if you only want 1 iteration)
* -f : output file (only available in some versions, so better not to rely on it)
With your purpose in mind, we can execute the following command to save the output to a file:
```
top -b -n 1000 -d 5 -p $MYPID | grep -i "Cpu(s)" | head -c21 | cut -d ' ' -f3| cut -d '%' -f1 > state.log
```
If you wish to run the command in the background, so that it is not killed when you log out of the session, you can put it in a script (here called top-pid.sh) and run:
`nohup top-pid.sh &`
Upvotes: 0
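If a scripting language is acceptable, an alternative to parsing `top` output is to poll the process directly. A rough sketch using Python's `psutil` package (assuming it is installed; the 5-second interval and log format roughly mirror the original command):
```
import sys
import time
import psutil

pid = int(sys.argv[1])          # pass $MYPID as the first argument
proc = psutil.Process(pid)

with open("log.txt", "w") as log:
    try:
        while proc.is_running():
            cpu = proc.cpu_percent(interval=5)   # CPU % over the last 5 seconds
            log.write("%s %.1f\n" % (time.strftime("%H:%M:%S"), cpu))
            log.flush()
    except psutil.NoSuchProcess:
        pass                                     # the monitored process exited
```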
|
2018/03/17
| 1,303 | 4,390 |
<issue_start>username_0: I just started learning android,
trying an app which has a button , on click of `button`
>
> (MainActivity)
>
>
>
, a second activity
>
> (SecondActivity)
>
>
>
loads which has cardview image, on clicking it another activity
>
> activity\_swami
>
>
>
opens up, which has to load the data from a URL (in `json` format) into the `textview` (`CARDVIEW`). But it shows a blank activity.
Local static data gets displayed correctly.
I have a `recyclerview` code in
>
> activity\_recycle
>
>
>
My files are **swami.java**
```
package com.example.vinay.scroll;
import android.app.ProgressDialog;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.widget.Toast;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import java.util.ArrayList;
import java.util.List;
public class swami extends AppCompatActivity {
// private static final String URL_DATA="http://192.168.31.152/MyApi/index.php";
private static final String URL_DATA="https://vinayshirashyad.000webhostapp.com/index.php";
private RecyclerView recyclerView;
private RecyclerView.Adapter adapter;
private List<ActivitySwami> activitySwamiList;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity\_recycle);
recyclerView = (RecyclerView) findViewById(R.id.recycle);
recyclerView.setAdapter(adapter);
recyclerView.setLayoutManager(new LinearLayoutManager(getApplicationContext()));
recyclerView.setHasFixedSize(true);
activitySwamiList = new ArrayList<>();
loadRecyclerViewData();
}
private void loadRecyclerViewData(){
final ProgressDialog progressDialog = new ProgressDialog(this);
progressDialog.setMessage("Loading Data");
progressDialog.show();
StringRequest stringRequest = new StringRequest(Request.Method.GET,
URL\_DATA,
new Response.Listener<String>() {
@Override
public void onResponse(String response) {
progressDialog.dismiss();
try {
JSONObject jsonObject = new JSONObject(response);
JSONArray array = jsonObject.getJSONArray("swami");
for(int i=0;i
```
>
> **Adapter class swamiAdapter.java**
>
>
>
```
package com.example.vinay.scroll;
import android.content.Context;
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;
import java.util.List;
/**
* Created by vinay on 3/16/2018.
*/
public class swamiAdapter extends RecyclerView.Adapter<swamiAdapter.ViewHolder> {
private List<ActivitySwami> activitySwamiList;
private Context context;
public swamiAdapter(List<ActivitySwami> activitySwamiList, Context context) {
this.activitySwamiList = activitySwamiList;
this.context = context;
}
@Override
public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View v = LayoutInflater.from(parent.getContext())
.inflate(R.layout.activity\_swami,parent,false);
return new ViewHolder(v);
}
@Override
public void onBindViewHolder(ViewHolder holder, int position) {
ActivitySwami activitySwami = activitySwamiList.get(position);
// holder.id.setText(activitySwami.getId());
holder.name.setText(activitySwami.getName());
holder.desc.setText(activitySwami.getDesc());
}
@Override
public int getItemCount() {
return activitySwamiList.size();
}
public class ViewHolder extends RecyclerView.ViewHolder {
//public TextView id;
public TextView name;
public TextView desc;
public ViewHolder(View itemView) {
super(itemView);
// id = itemView.findViewById(R.id.id);
name = itemView.findViewById(R.id.name);
desc = itemView.findViewById(R.id.desc);
}
}
}
```
>
> **activity\_recycle.xml**
>
>
>
```
xml version="1.0" encoding="utf-8"?
```
>
> **activity\_swami.xml**
>
>
>
```
xml version="1.0" encoding="utf-8"?
```<issue_comment>username_1: You forgot to add data in your **`activitySwamiList`** `ArrayList`
Try below code
```
for (int i = 0; i < array.length(); i++) {
    JSONObject o = array.getJSONObject(i);
    // JSON keys "name" and "desc" assumed from the model's getName()/getDesc() accessors
    ActivitySwami activitySwami = new ActivitySwami(o.getString("name"), o.getString("desc"));
    activitySwamiList.add(activitySwami); // <-- this add() call is the missing step
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Add the data to the `ArrayList`:
```
activitySwamiList.add(activitySwami)
```
Upvotes: 1
|
2018/03/17
| 193 | 617 |
<issue_start>username_0: [](https://i.stack.imgur.com/P1dr1.png)
[](https://i.stack.imgur.com/ktmB3.png)
**after page Refresh Permissions are empty**
How to avoid deleting of permissions after refresh
|
2018/03/17
| 740 | 2,635 |
<issue_start>username_0: I am trying to update the quantity of the `Products` table via a stored procedure.
I am using an Oracle database in SQL Developer.
```
create or replace procedure alter_products
(
pid in number :=''
,qty in number :=''
)
as
begin
UPDATE PRODUCTS
SET P_QUANTITY = (select P_QUANTITY from PRODUCTS where PID = pid) - qty
WHERE PID = pid;
end alter_products;
```
The stored procedure was compiled successfully, but while trying to run it, I got this error:
>
> ORA-01427: single-row subquery returns more than one row
>
> ORA-06512: at "DB\_NAME.ALTER\_PRODUCTS", line 7
>
> ORA-06512: at line 8
>
>
><issue_comment>username_1: Your update statement should probably be something like this:
```
UPDATE PRODUCTS p
SET p.P_QUANTITY = p.P_QUANTITY - qty
WHERE p.PID = p_pid; --argument should have a different name than the column
-- to keep it clean.
```
**EDIT**: Refer @william Robertson's answer for detailed explanation.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The parameter `pid` has the same name as a column in the table (Oracle identifier names are case-insensitive so writing one of them in uppercase doesn't help you), so where you have the condition
```
where pid = pid
```
this will be true for every row in the table, because `pid` is always equal to itself (unless it's null, and you mentioned in a comment that it's the PK, so it can't be null). Therefore you need to either rename the parameter or else prefix it:
```
create or replace procedure alter_products
( pid in number
, qty in number )
as
begin
update products p
set p_quantity =
( select p_quantity
from products p1
where p1.pid = alter_products.pid ) - qty
where p.pid = alter_products.pid;
end alter_products;
```
But then if `products.pid` is the PK of `products`, the row you are trying to look up in the subquery is the row you already have, so the subquery is redundant. Why not just:
```
create or replace procedure alter_products
( in_pid in products.pid%type
, in_qty in products.p_quantity%type )
as
begin
update products p
set p_quantity = p_quantity - in_qty
where p.pid = in_pid;
end alter_products;
```
The usual naming convention for parameters is `p_`, for example `p_pid`, but your table confusingly has a column named `p_quantity`, so I have used `in_` as the prefix.
I have also made the parameters mandatory as it doesn't seem to make any sense for them to be optional as in the original version, and anchored them to their corresponding column types.
Upvotes: 1
|
2018/03/17
| 1,335 | 4,440 |
<issue_start>username_0: I'm confused about how to get the checkbox value so I can compute the "amount" and the "total" values. The computation is pretty simple. The checkbox value is 1.20 (i.e. 20%). The amount is (quantity * price) / checkbox value if the checkbox is checked. And the total value is just (quantity * price). Here's the link to my code.
UPDATE!!! Now it's working, but the problem is that it doesn't calculate automatically; I have to click outside the input field for it to update.
[COMPLETE CODE IS HERE](https://stackblitz.com/edit/angular-miqplx?file=app/app.component.ts)
```
onChange(isChecked, id){
console.log(isChecked)
let quantity = (<FormArray>this.myForm.controls['rows']).controls[id]['controls']['quantity'].value
let price = (<FormArray>this.myForm.controls['rows']).controls[id]['controls']['price'].value
let checkbox = (<FormArray>this.myForm.controls['rows']).controls[id]['controls']['checkbox'].value
let x = (<FormArray>this.myForm.controls['rows']).at(id);
if(isChecked){
x.patchValue({
amount: (quantity \* price)/checkbox,
total: quantity \* price
});
}
else {
x.patchValue({
amount: (quantity \* price)/checkbox,
total: (quantity \* price)/checkbox,
});
}
}
<issue_comment>username_1: Josep, when we use a ReactiveForm, we usually control the changes by subscribing to AbstractControl.valueChanges.
So your form doesn't need (change) nor [(ngModel)], simply:
```
...
| {{row.value.material\_id}} | {{row.value.material\_name}} | | | 20 % | | |
```
Then we control the "changes" by adding, in your code after patchValues:
```
patchValues() {
let rows = this.myForm.get('rows') as FormArray;
this.orders.forEach(material => {
...
})
setOnChange() //<--call a function
}
setOnChange()
{
const formarray=this.myForm.get('rows') as FormArray;
for (let i=0;i<formarray.length;i++)
{
formarray.at(i).valueChanges.subscribe((val)=>{
//see that in val we have all the values
//val.quantity,val.price,val.dis\_checkbox...
//I use an auxiliar variables to get the controls "total" and "amount"
let controlTotal=(this.myForm.get('rows') as FormArray).at(i).get('total')
let controlAmount=(this.myForm.get('rows') as FormArray).at(i).get('amount')
let value=(val.quantity)\*(val.price);
if (controlTotal.value!=value) //Only when the values are not equal
controlTotal.setValue(value);
value=val.dis\_checkbox?value/val.checkbox:value;
if (controlAmount.value!=value) //Only when the values are not equal
controlAmount.setValue(value);
});
}
}
```
When we're using valueChanges, we can subscribe to a single control, to one row of the formArray (my choice) or to the whole form.
OK, the code is not complete. Normally, if we have "calculated fields" (in this case "total" and "amount"), we needn't make these fields belong to the form. Why not use an array of objects (e.g. totals=[{total:0,amount:0},{total:0,amount:0},...]) and change/display those values?
```
|
...
| | 20 % | | |
```
Upvotes: 0 <issue_comment>username_1: Sorry for the delay. If you want to use an "auxiliary variable" "totals" you must:
```
//declare the variable at first
totals:any[]=[] //totals will be like, e.g. [{total:0,amount:0},{total:10,amount:23}...]
//In patchValues
this.orders.forEach(material => {
material.materials.forEach(x => {
rows.push(this.fb.group({
material_id: x.id,
material_name: x.name,
quantity: [null, Validators.required],
price: [null, Validators.required],
dis_checkbox: [true],
checkbox: [1.20]
})) //see that total and amount is NOT in the form
this.totals.push({amount:0,total:0}); //<--add this line
})
})
//And finally, we change the "setOnchange" (I commented the lines that you don't need)
setOnChange()
{
const formarray=this.myForm.get('rows') as FormArray;
for (let i=0;i<formarray.length;i++)
{
formarray.at(i).valueChanges.subscribe((val)=>{
//"total" and "amount" are simply variables
//we needn't look for the controls (not exist)
// let controlTotal=(this.myForm.get('rows') as FormArray).at(i).get('total')
// let controlAmount=(this.myForm.get('rows') as FormArray).at(i).get('amount')
let value=(val.quantity)\*(val.price);
this.totals[i].total=value; //<--just update the value of the variable
// if (controlTotal.value!=value)
// controlTotal.setValue(value);
value=val.dis\_checkbox?value/val.checkbox:value;
this.totals[i].amount=value; //<--idem
// if (controlAmount.value!=value)
// controlAmount.setValue(value);
});
}
}
```
Upvotes: 2 [selected_answer]
|
2018/03/17
| 627 | 2,094 |
<issue_start>username_0: I want to click an "add to cart" button on an ecommerce website. My problem is that every item has a different button id, and I notice that the id is located in the div.
I try to select it with a wildcard since every item has a different id.
```
addcart = driver.find_elements_by_css_selector("div[id*=\"addtocart-\"]")
addcart.click()
```
[here is the sample code, i want to click the addtocart-49831, how can i click that button?](https://i.stack.imgur.com/X0AEJ.png)<issue_comment>username_1: Create a list of urls which you want to hit and randomly select one and go to it. Then select the css\_selectors and randomly select one to click. Something like this:-
```
from random import randint
from selenium import webdriver

driver = webdriver.Firefox()
urls = ['url1','url2','url3']
url = urls[randint(0, len(urls) - 1)]
driver.get(url) # <- This will take you to a random page
current_page = driver.current_url
i=0
while i < 10:  # <- Create your loop as you want
    try:
        selectors = driver.find_elements_by_css_selector('.content-link.spf-link.yt-uix-sessionlink.spf-link')  # <- get the elements
        l = selectors[randint(0, len(selectors) - 1)]  # <- Select a random element
        l.click()
        # Write your
        # code here
    except:
        pass
    i += 1  # advance the counter so the loop can finish
```
Upvotes: 0 <issue_comment>username_2: There are many other ways to locate element without using it's `@id`:
* By its text:
```
addcart = driver.find_element_by_xpath("//button[.='Add To Cart']")
```
* By buttons' type:
```
addcart = driver.find_element_by_xpath("//button[@type='submit']")
```
* By class names:
```
driver.find_element_by_css_selector("div.add_to_cart_btn")
```
or
```
driver.find_element_by_class_name("add_to_cart_btn")
```
Note that in your code you're using `find_elements...()` to get element and then click it while this method returns a *list* of elements. You need to use `find_element...()` or `find_elements...()[0]` instead
P. S. Let me know if it's not what you want as the title *"click randomly with a randomly selected page"* does not really match the issue description
Upvotes: 2 [selected_answer]
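Since the `id` prefix is the stable part here, a short sketch combining a prefix selector with the list handling mentioned above (the assumption being that the `div` carrying the `addtocart-` id is itself the clickable element on this page):
```
# match any element whose id starts with "addtocart-"
buttons = driver.find_elements_by_css_selector("div[id^='addtocart-']")
if buttons:
    buttons[0].click()   # click the first match (or pick one by index)
else:
    print("No add-to-cart element found on this page")
```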
|
2018/03/17
| 825 | 2,811 |
<issue_start>username_0: ```js
function CalcVolume() {
var radiusvalue = document.getElementById("radius").value
if (radiusvalue) {
if (isNaN(radiusvalue)) {
alert("Provide a number")
return;
}
volumevalue = 4 * 3.14 * radiusvalue * radiusvalue * radiusvalue
document.getElementById("volume").value = volumevalue;
} else {
alert("Provide an input")
}
}
```
```css
form {
border: 2px solid black;
padding-left: 15px;
height: 170px;
width: 200px;
}
```
```html
Radius
Volume
Calculate
```
In the above code segment, which is a small piece of logic with a UI to calculate the volume for a given radius, the text fields are automatically cleared on clicking the button with the text "Calculate". Can someone help me understand this?<issue_comment>username_1: The `form` gets submitted.
You can add `onsubmit="return false;"` to the `form` element.
Stack snippet
```js
function CalcVolume() {
var radiusvalue = document.getElementById("radius").value
if (radiusvalue) {
if (isNaN(radiusvalue)) {
alert("Provide a number")
return;
}
volumevalue = 4 * 3.14 * radiusvalue * radiusvalue * radiusvalue
document.getElementById("volume").value = volumevalue;
} else {
alert("Provide an input")
}
}
```
```css
form {
border: 2px solid black;
padding-left: 15px;
height: 170px;
width: 200px;
}
```
```html
Radius
Volume
Calculate
```
---
Or replace the `form` element with a `div`
Stack snippet
```js
function CalcVolume() {
var radiusvalue = document.getElementById("radius").value
if (radiusvalue) {
if (isNaN(radiusvalue)) {
alert("Provide a number")
return;
}
volumevalue = 4 * 3.14 * radiusvalue * radiusvalue * radiusvalue
document.getElementById("volume").value = volumevalue;
} else {
alert("Provide an input")
}
}
```
```css
.myform {
border: 2px solid black;
padding-left: 15px;
height: 170px;
width: 200px;
}
```
```html
Radius
Volume
Calculate
```
Upvotes: 2 <issue_comment>username_2: The default type of a `button` is `submit`, which is causing the submission of the `form`. Change that to `button`:
`<button type="button" onclick="CalcVolume()">Calculate</button>`
**Working Code:**
```js
function CalcVolume() {
var radiusvalue = document.getElementById("radius").value
if (radiusvalue) {
if (isNaN(radiusvalue)) {
alert("Provide a number")
return;
}
var volumevalue = 4 * 3.14 * radiusvalue * radiusvalue * radiusvalue
document.getElementById("volume").value = volumevalue;
} else {
alert("Provide an input")
}
}
```
```css
form {
border: 2px solid black;
padding-left: 15px;
height: 170px;
width: 200px;
}
```
```html
Radius
Volume
Calculate
```
Upvotes: 1 [selected_answer]
|
2018/03/17
| 678 | 2,252 |
<issue_start>username_0: [how to make like this one top and bottom with bootstrap](https://i.stack.imgur.com/RhZ6Y.png)
[how to float text-center with background color with bootstrap](https://i.stack.imgur.com/tDMi4.png)
|
2018/03/17
| 567 | 2,047 |
<issue_start>username_0: I have an API method where the user can pass in their own query. The field in the collection is simply ns, so the user might pass something like:
```
v.search = function(query: Object){
// query => {ns:{$in:['foo','bar',baz]}} // valid!
// query => {ns:{$in:{}}} // invalid!
// query => {ns:/foo/} // valid!
});
```
is there some way to do this, like a smoke test that can fail queries that are obviously wrong?
**I am hoping that some MongoDB libraries would export this functionality... but in all likelihood they validate the query only by sending it to the database, which is in fact, the real arbiter of which query is valid/invalid.**
But I am looking to validate the query before sending it to the DB.<issue_comment>username_1: I don't think it's possible to do this other than by reflecting over the query object and checking each of its properties and associated values.
Upvotes: 1 <issue_comment>username_2: Some modules that are part of [MongoDB Compass](https://www.mongodb.com/products/compass) have been made open source.
There are two modules that may be of use for your use case:
* [mongodb-language-model](https://github.com/mongodb-js/mongodb-language-model)
* [mongodb-query-parser](https://github.com/mongodb-js/query-parser)
Although they may not fit your use case 100%, it should give you a very close validation. For example `npm install mongodb-language-model`, then:
```
var accepts = require('mongodb-language-model').accepts;
console.log(accepts('{"ns":{"$in":["foo", "bar", "baz"]}}')); // true
console.log(accepts('{"ns":{"$in":{}}}')); // false
console.log(accepts('{"ns":{"$regex": "foo"}}')); // true
```
Also may be of interest, `npm install mongodb-query-parser` to parse a string value into a JSON query. For example:
```
var parse = require('mongodb-query-parser');
var query = '{"ns":{"$in":["foo", "bar", "baz"]}}';
console.log(parse.parseFilter(query)); // {ns:{'$in':['foo','bar','baz']}}
```
Upvotes: 5 [selected_answer]
|
2018/03/17
| 662 | 2,465 |
<issue_start>username_0: I'm writing a Django application and building the form manually.
```
{% csrf\_token %}
Value 1
```
my `models.py` contains
```
class TeachingLanguage(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
title = models.CharField(max_length=250)
modified = models.DateTimeField(auto_now=True)
created = models.DateTimeField(auto_now_add=True)
class META:
verbose_name_plural = 'teaching_languages'
db_table = 'languages'
def __str__(self):
return self.title
class Course(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
name = models.CharField(max_length=250)
teaching_language = models.ForeignKey(TeachingLanguage, on_delete=models.CASCADE)
description = models.TextField(blank=True)
```
I have to fill by the list of `TeachingLanguage`
The view.py file contains
```
class NewCourse(CreateView):
model = Course
fields = ['name', 'teaching_language', 'description']
def get_context_data(self, **kwargs):
context = super(NewCourse, self).get_context_data(**kwargs)
teaching_languages = TeachingLanguage.objects.all()
context['teaching_languages'] = teaching_languages
return context
def form_valid(self, form):
form.instance.created_by = self.request.user
form.save()
return super().form_valid(form)
```
**How to render the `teaching_languages` in `select` field to generate a dropdown list?**<issue_comment>username_1: Try this in your template:
```
{% csrf_token %}
{{ form.teaching_language }}
```
This will generate only the teaching_language field from your form, rendered as a dropdown. [Click here](https://docs.djangoproject.com/en/2.0/topics/forms/#rendering-fields-manually) for more details.
Also you should remove the `get_context_data` method now because the `form` variable is passed to the template automatically and the form picks-up all the teaching languages automatically as they are set as Foreign Keys in the Course model.
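To illustrate that last point, this is roughly the form that `CreateView` generates behind the scenes; the explicit `CourseForm` below is only a sketch (the class name is made up), showing where the dropdown's choices come from:
```
from django import forms
from .models import Course, TeachingLanguage

class CourseForm(forms.ModelForm):
    # Written out only to show the default: the ForeignKey becomes a
    # ModelChoiceField, rendered as a dropdown of all TeachingLanguage rows.
    teaching_language = forms.ModelChoiceField(queryset=TeachingLanguage.objects.all())

    class Meta:
        model = Course
        fields = ['name', 'teaching_language', 'description']
```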
Upvotes: 1 <issue_comment>username_2: If you **must** do it manually rather than just letting Django handle it, then you can simply iterate over the teaching languages that you've passed to the context:
```
<select name="teaching_language">
  {% for lang in teaching_languages %}
    <option value="{{ lang.pk }}">{{ lang }}</option>
  {% endfor %}
</select>
```
Upvotes: 1 [selected_answer]
|
2018/03/17
| 3,020 | 10,087 |
<issue_start>username_0: I'm building a simple neural network that takes 3 values and gives 2 outputs.
I'm getting an accuracy of 67.5% and an average cost of 0.05
I have a training dataset of 1000 examples and 500 testing examples. I plan on making a larger dataset in the near future.
A little while ago I managed to get an accuracy of about 82% and sometimes a bit higher, but the cost was quite high.
I've been experimenting with adding another layer (which is currently in the model), and that is how I got the loss under 1.0.
I'm not sure what is going wrong; I'm new to TensorFlow and NNs in general.
Here is my code:
```
import tensorflow as tf
import numpy as np
import sys
sys.path.insert(0, '.../Dataset/Testing/')
sys.path.insert(0, '.../Dataset/Training/')
#other files
from TestDataNormaliser import *
from TrainDataNormaliser import *
learning_rate = 0.01
trainingIteration = 10
batchSize = 100
displayStep = 1
x = tf.placeholder("float", [None, 3])
y = tf.placeholder("float", [None, 2])
#layer 1
w1 = tf.Variable(tf.truncated_normal([3, 4], stddev=0.1))
b1 = tf.Variable(tf.zeros([4]))
y1 = tf.matmul(x, w1) + b1
#layer 2
w2 = tf.Variable(tf.truncated_normal([4, 4], stddev=0.1))
b2 = tf.Variable(tf.zeros([4]))
#y2 = tf.nn.sigmoid(tf.matmul(y1, w2) + b2)
y2 = tf.matmul(y1, w2) + b2
w3 = tf.Variable(tf.truncated_normal([4, 2], stddev=0.1))
b3 = tf.Variable(tf.zeros([2]))
y3 = tf.nn.sigmoid(tf.matmul(y2, w3) + b3) #sigmoid
#output
#wO = tf.Variable(tf.truncated_normal([2, 2], stddev=0.1))
#bO = tf.Variable(tf.zeros([2]))
a = y3 #tf.nn.softmax(tf.matmul(y2, wO) + bO) #y2
a_ = tf.placeholder("float", [None, 2])
#cost function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a)))
#cross_entropy = -tf.reduce_sum(y*tf.log(a))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
#training
init = tf.global_variables_initializer() #initialises tensorflow
with tf.Session() as sess:
sess.run(init) #runs the initialiser
writer = tf.summary.FileWriter(".../Logs")
writer.add_graph(sess.graph)
merged_summary = tf.summary.merge_all()
for iteration in range(trainingIteration):
avg_cost = 0
totalBatch = int(len(trainArrayValues)/batchSize) #1000/100
#totalBatch = 10
for i in range(batchSize):
start = i
end = i + batchSize #100
xBatch = trainArrayValues[start:end]
yBatch = trainArrayLabels[start:end]
#feeding training data
sess.run(optimizer, feed_dict={x: xBatch, y: yBatch})
i += batchSize
avg_cost += sess.run(cross_entropy, feed_dict={x: xBatch, y: yBatch})/totalBatch
if iteration % displayStep == 0:
print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))
#
print("Training complete")
predictions = tf.equal(tf.argmax(a, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
print("Accuracy:", accuracy.eval({x: testArrayValues, y: testArrayLabels}))
```<issue_comment>username_1: I see that you are using a softmax loss with sigmoidal activation functions in the last layer. Now let me explain the difference between softmax activations and sigmoidal.
You are now allowing the output of the network to be y=(0, 1), y=(1, 0), y=(0, 0) and y=(1, 1). This is because your sigmoidal activations "squish" each element in y between 0 and 1. Your loss function, however, assumes that your y vector sums to one.
What you need to do here is either to use the element-wise sigmoid cross-entropy loss, which looks like this:
```
-tf.reduce_sum(y*tf.log(a))-tf.reduce_sum((1-y)*tf.log(1-a))
```
Or, if you want a to sum to one, you need to use softmax activations in your final layer (to get your a's) instead of sigmoids, which is implemented like this
```
exp_out = tf.exp(y3)
a = exp_out / tf.reduce_sum(exp_out)
```
Ps. I'm using my phone on a train so please excuse typos
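For reference, TensorFlow has this built in: `tf.nn.softmax` normalises along the last axis (i.e. per example), which is usually what you want rather than dividing by the sum over the whole batch as in the manual version above:
```
a = tf.nn.softmax(y3)   # per-row softmax over the 2 output units
```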
Upvotes: 0 <issue_comment>username_2: A few important notes:
* You don't have non-linearities between your layers. This means you're training a network which is equivalent to a single-layer network, just with a lot of wasted computation. This is easily solved by adding a simple non-linearity, e.g. tf.nn.relu after each matmul/+ bias line, e.g. y2 = tf.nn.relu(y2) for all bar the last layer.
* You are using a numerically unstable cross entropy implementation. I'd encourage you to use tf.nn.sigmoid\_cross\_entropy\_with\_logits, and removing your explicit sigmoid call (the input to your sigmoid function is what is generally referred to as the logits, or 'logistic units').
* It seems you are not shuffling your dataset as you go. This could be particularly bad given your choice of optimizer, which leads us to...
* Stochastic gradient descent is not great. For a boost without adding too much complication, consider using MomentumOptimizer instead. AdamOptimizer is my go-to, but play around with them.
When it comes to writing clean, maintainable code, I'd also encourage you to consider the following:
* Use higher level APIs, e.g. tf.layers. It's good you know what's going on at a variable level, but it's easy to make a mistake with all that replicated code, and the default values with the layer implementations are generally pretty good
* Consider using the tf.data.Dataset API for your data input. It's a bit scary at first, but it handles a lot of things like batching, shuffling, repeating epochs etc. very nicely
* Consider using something like the tf.estimator.Estimator API for handling session runs, summary writing and evaluation.
With all those changes, you might have something that looks like the following (I've left your code in so you can roughly see the equivalent lines).
For graph construction:
```
def get_logits(features):
"""tf.layers API is cleaner and has better default values."""
# #layer 1
# w1 = tf.Variable(tf.truncated_normal([3, 4], stddev=0.1))
# b1 = tf.Variable(tf.zeros([4]))
# y1 = tf.matmul(x, w1) + b1
x = tf.layers.dense(features, 4, activation=tf.nn.relu)
# #layer 2
# w2 = tf.Variable(tf.truncated_normal([4, 4], stddev=0.1))
# b2 = tf.Variable(tf.zeros([4]))
# y2 = tf.matmul(y1, w2) + b2
x = tf.layers.dense(x, 4, activation=tf.nn.relu)
# w3 = tf.Variable(tf.truncated_normal([4, 2], stddev=0.1))
# b3 = tf.Variable(tf.zeros([2]))
# y3 = tf.nn.sigmoid(tf.matmul(y2, w3) + b3) #sigmoid
# N.B Don't take a non-linearity here.
logits = tf.layers.dense(x, 1, activation=None)
# remove unnecessary final dimension, batch_size * 1 -> batch_size
logits = tf.squeeze(logits, axis=-1)
return logits
def get_loss(logits, labels):
"""tf.nn.sigmoid_cross_entropy_with_logits is numerically stable."""
# #cost function
# cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a)))
return tf.nn.sigmoid_cross_entropy_with_logits(
logits=logits, labels=labels)
def get_train_op(loss):
"""There are better options than standard SGD. Try the following."""
learning_rate = 1e-3
# optimizer = tf.train.GradientDescentOptimizer(learning_rate)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
# optimizer = tf.train.AdamOptimizer(learning_rate)
return optimizer.minimize(loss)
def get_inputs(feature_data, label_data, batch_size, n_epochs=None,
shuffle=True):
"""
Get features and labels for training/evaluation.
Args:
feature_data: numpy array of feature data.
label_data: numpy array of label data
batch_size: size of batch to be returned
n_epochs: number of epochs to train for. None will result in repeating
forever/until stopped
shuffle: bool flag indicating whether or not to shuffle.
"""
dataset = tf.data.Dataset.from_tensor_slices(
(feature_data, label_data))
dataset = dataset.repeat(n_epochs)
if shuffle:
dataset = dataset.shuffle(len(feature_data))
dataset = dataset.batch(batch_size)
features, labels = dataset.make_one_shot_iterator().get_next()
return features, labels
```
For session running you could use this like you have (what I'd call 'the hard way')...
```
features, labels = get_inputs(
trainArrayValues, trainArrayLabels, batchSize, n_epochs, shuffle=True)
logits = get_logits(features)
loss = get_loss(logits, labels)
train_op = get_train_op(loss)
init = tf.global_variables_initializer()
# monitored sessions have the `should_stop` method, which works with datasets
with tf.train.MonitoredSession() as sess:
sess.run(init)
while not sess.should_stop():
# get both loss and optimizer step in the same session run
loss_val, _ = sess.run([loss, train_op])
print(loss_val)
# save variables etc, do evaluation in another graph with different inputs?
```
but I think you're better off using a tf.estimator.Estimator, though some people prefer tf.keras.Models.
```
def model_fn(features, labels, mode):
logits = get_logits(features)
loss = get_loss(logits, labels)
train_op = get_train_op(loss)
predictions = tf.greater(logits, 0)
accuracy = tf.metrics.accuracy(labels, predictions)
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, train_op=train_op,
eval_metric_ops={'accuracy': accuracy}, predictions=predictions)
def train_input_fn():
return get_inputs(trainArrayValues, trainArrayLabels, batchSize)
def eval_input_fn():
return get_inputs(
testArrayValues, testArrayLabels, batchSize, n_epochs=1, shuffle=False)
# Where variables and summaries will be saved to
model_dir = './model'
estimator = tf.estimator.Estimator(model_fn, model_dir)
estimator.train(train_input_fn, max_steps=max_steps)
estimator.evaluate(eval_input_fn)
```
Note if you use estimators the variables will be saved after training, so you won't need to re-train each time. If you want to reset, just delete the model\_dir.
Upvotes: 3 [selected_answer]
|
2018/03/17
| 2,358 | 8,165 |
<issue_start>username_0: I want to remove duplicate values for states in chartDataResults2 array:
```
for (i = 0; i < dataResults.length; i++) {
function groupBy(array, f) {
var groups = {};
array.forEach(function(o) {
var group = JSON.stringify(f(o));
groups[group] = groups[group] || [];
groups[group].push(o);
});
return Object.keys(groups).map(function(group) {
return groups[group];
})
}
var result = groupBy(dataResults, function(item) {
return [item.State.Title];
});
States1 = result[i][0].State.Title;
chartDataResults2.push({
States: States1,
Mineraltype: Mineraltype,
CollectionAmount: CollectionAmount,
Major: major,
Minor: minor,
CoalLignite: coal,
balloonTextField: Mineraltype + ", Total Amount Collected:" + CollectionAmount,
});
}
```
States has values (Andhra Pradesh,Gujarat,Gujarat).It should remove duplicates and just keep (Andhra Pradesh,Gujarat).
```
{States: "ANDHRA PRADESH", CollectionAmount: 187, Major: 187, Minor: 0, …}
{States: "GUJARAT", CollectionAmount: 301, Major: 158, Minor: 90, …}
{States: "GUJARAT", CollectionAmount: 0, Major: 0, Minor: 0, …}
```<issue_comment>username_1: I see that you are using a softmax loss with sigmoidal activation functions in the last layer. Now let me explain the difference between softmax activations and sigmoidal.
You are now allowing the output of the network to be y=(0, 1), y=(1, 0), y=(0, 0) and y=(1, 1). This is because your sigmoidal activations "squish" each element in y between 0 and 1. Your loss function, however, assumes that your y vector sums to one.
What you need to do here is either to penalise the sigmoidal cross entropy function, which looks like this:
```
-tf.reduce_sum(y*tf.log(a))-tf.reduce_sum((1-y)*tf.log(1-a))
```
Or, if you want a to sum to one, you need to use softmax activations in your final layer (to get your a's) instead of sigmoids, which is implemented like this
```
exp_out = tf.exp(y3)
a = exp_out/tf reduce_sum(exp_out)
```
Ps. I'm using my phone on a train so please excuse typos
Upvotes: 0 <issue_comment>username_2: A few important notes:
* You don't have non-linearities between your layers. This means you're training a network which is equivalent to a single-layer network, just with a lot of wasted computation. This is easily solved by adding a simple non-linearity, e.g. tf.nn.relu after each matmul/+ bias line, e.g. y2 = tf.nn.relu(y2) for all bar the last layer.
* You are using a numerically unstable cross entropy implementation. I'd encourage you to use tf.nn.sigmoid\_cross\_entropy\_with\_logits, and removing your explicit sigmoid call (the input to your sigmoid function is what is generally referred to as the logits, or 'logistic units').
* It seems you are not shuffling your dataset as you go. This could be particularly bad given your choice of optimizer, which leads us to...
* Stochastic gradient descent is not great. For a boost without adding too much complication, consider using MomentumOptimizer instead. AdamOptimizer is my go-to, but play around with them.
When it comes to writing clean, maintainable code, I'd also encourage you to consider the following:
* Use higher level APIs, e.g. tf.layers. It's good you know what's going on at a variable level, but it's easy to make a mistake with all that replicated code, and the default values with the layer implementations are generally pretty good
* Consider using the tf.data.Dataset API for your data input. It's a bit scary at first, but it handles a lot of things like batching, shuffling, repeating epochs etc. very nicely
* Consider using something like the tf.estimator.Estimator API for handling session runs, summary writing and evaluation.
With all those changes, you might have something that looks like the following (I've left your code in so you can roughly see the equivalent lines).
For graph construction:
```
def get_logits(features):
"""tf.layers API is cleaner and has better default values."""
# #layer 1
# w1 = tf.Variable(tf.truncated_normal([3, 4], stddev=0.1))
# b1 = tf.Variable(tf.zeros([4]))
# y1 = tf.matmul(x, w1) + b1
x = tf.layers.dense(features, 4, activation=tf.nn.relu)
# #layer 2
# w2 = tf.Variable(tf.truncated_normal([4, 4], stddev=0.1))
# b2 = tf.Variable(tf.zeros([4]))
# y2 = tf.matmul(y1, w2) + b2
x = tf.layers.dense(x, 4, activation=tf.nn.relu)
# w3 = tf.Variable(tf.truncated_normal([4, 2], stddev=0.1))
# b3 = tf.Variable(tf.zeros([2]))
# y3 = tf.nn.sigmoid(tf.matmul(y2, w3) + b3) #sigmoid
# N.B Don't take a non-linearity here.
logits = tf.layers.dense(x, 1, actiation=None)
# remove unnecessary final dimension, batch_size * 1 -> batch_size
logits = tf.squeeze(logits, axis=-1)
return logits
def get_loss(logits, labels):
"""tf.nn.sigmoid_cross_entropy_with_logits is numerically stable."""
# #cost function
# cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a)))
return tf.nn.sigmoid_cross_entropy_with_logits(
logits=logits, labels=labels)
def get_train_op(loss):
"""There are better options than standard SGD. Try the following."""
learning_rate = 1e-3
# optimizer = tf.train.GradientDescentOptimizer(learning_rate)
optimizer = tf.train.MomentumOptimizer(learning_rate)
# optimizer = tf.train.AdamOptimizer(learning_rate)
return optimizer.minimize(loss)
def get_inputs(feature_data, label_data, batch_size, n_epochs=None,
shuffle=True):
"""
Get features and labels for training/evaluation.
Args:
feature_data: numpy array of feature data.
label_data: numpy array of label data
batch_size: size of batch to be returned
n_epochs: number of epochs to train for. None will result in repeating
forever/until stopped
shuffle: bool flag indicating whether or not to shuffle.
"""
dataset = tf.data.Dataset.from_tensor_slices(
(feature_data, label_data))
dataset = dataset.repeat(n_epochs)
if shuffle:
dataset = dataset.shuffle(len(feature_data))
dataset = dataset.batch(batch_size)
features, labels = dataset.make_one_shot_iterator().get_next()
return features, labels
```
For session running you could use this like you have (what I'd call 'the hard way')...
```
features, labels = get_inputs(
trainArrayValues, trainArrayLabels, batchSize, n_epochs, shuffle=True)
logits = get_logits(features)
loss = get_loss(logits, labels)
train_op = get_train_op(loss)
init = tf.global_variables_initializer()
# monitored sessions have the `should_stop` method, which works with datasets
with tf.train.MonitoredSession() as sess:
sess.run(init)
while not sess.should_stop():
# get both loss and optimizer step in the same session run
loss_val, _ = sess.run([loss, train_op])
print(loss_val)
# save variables etc, do evaluation in another graph with different inputs?
```
but I think you're better off using a tf.estimator.Estimator, though some people prefer tf.keras.Models.
```
def model_fn(features, labels, mode):
logits = get_logits(features)
loss = get_loss(logits, labels)
train_op = get_train_op(loss)
predictions = tf.greater(logits, 0)
accuracy = tf.metrics.accuracy(labels, predictions)
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, train_op=train_op,
eval_metric_ops={'accuracy': accuracy}, predictions=predictions)
def train_input_fn():
return get_inputs(trainArrayValues, trainArrayLabels, batchSize)
def eval_input_fn():
return get_inputs(
testArrayValues, testArrayLabels, batchSize, n_epochs=1, shuffle=False)
# Where variables and summaries will be saved to
model_dir = './model'
estimator = tf.estimator.Estimator(model_fn, model_dir)
estimator.train(train_input_fn, max_steps=max_steps)
estimator.evaluate(eval_input_fn)
```
Note if you use estimators the variables will be saved after training, so you won't need to re-train each time. If you want to reset, just delete the model\_dir.
Upvotes: 3 [selected_answer]
|
2018/03/17
| 747 | 2,811 |
<issue_start>username_0: I am running excel 2003 and trying to call a couple of different macros, based on the cell value (all in the current worksheet). My macro doesn't work and I am having trouble to understand why - here is the macro:
```
Private Sub Worksheet_Change(ByVal Target As Range)
If Not Intersect(Target, Range("C9")) Is Nothing Then
Select Case Range("C9")
Case "Select": HideST
Case "YES": HideST
Case "NO": FindST
End Select
End If
End Sub
```
I've also tried this:
```
Private Sub Worksheet_Change(ByVal Target As Range)
Dim sourceSheet As Worksheet
Set sourceSheet = ActiveSheet
If Not Intersect(Target, Range("C9")) Is Nothing Then
Select Case ActiveSheet.Range("C9")
Case "Select": HideST
Case "YES": HideST
Case "NO": FindST
End Select
End If
End Sub
```
But, it did not work either ...
Then I tried completely different macro - with the same results:
```
Sub Worksheet_Change(ByVal Target As Range)
Set Target = Range("C9")
If Target.Value = "YES" Then
Call HideST
End If
If Target.Value = "NO" Then
Call FindST
End If
End Sub
```
When I select different value on C9 - nothing happens as macro is not triggered. Can anybody help please?
Cheers
Mile`S<issue_comment>username_1: **Option 1** (like Paul suggested in his comment)
```
Option Explicit
Private Sub Worksheet_Change(ByVal Target As Range)
If Not Intersect(Target, Range("C9")) Is Nothing Then
Select Case Ucase(Range("C9").value2)
Case "SELECT", "YES": HideST
Case "NO": FindST
End Select
End If
End Sub
```
**Option 2**
```
Option Explicit
Option Compare Text
Private Sub Worksheet_Change(ByVal Target As Range)
If Not Intersect(Target, Range("C9")) Is Nothing Then
Select Case Range("C9").value2
Case "Select", "YES" : HideST
Case "NO": FindST
End Select
End If
End Sub
```
For Option Compare have a look at the documentation at [MSDN](https://msdn.microsoft.com/de-de/vba/language-reference-vba/articles/option-compare-statement)
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
Option Explicit
Private Sub Worksheet_Change(ByVal Target As Range)
With Target
If .Column = 3 And .Row = 9 Then
Select Case UCase(.Value2)
Case "SELECT", "YES": HideST
Case "NO": FindST
End Select
End If
End With
End Sub
```
---
Make sure you place the code in the module for the proper sheet (where cell C9 is)
[](https://i.stack.imgur.com/BkIUq.png)
Upvotes: 1
|
2018/03/17
| 591 | 2,099 |
<issue_start>username_0: There is a new option for the End User in Windows 10 Creator's Update Edition. The End User can change properties for an EXE on the compatibility tab to "Override High DPI Scaling Behavior" and set it to System (Enhanced). I tested it and it works well for some classic win32 apps.
I want to do this in code through an API call or through the manifest. There is no information on that. Is that possible?
From the comments I got, I want to clarify that this is a new Creators Update feature, and I particularly want to know how to set the "System (Enhanced)" choice for the override feature through the manifest or code.<issue_comment>username_1: The details are covered in [this blog post](https://blogs.windows.com/buildingapps/2017/04/04/high-dpi-scaling-improvements-desktop-applications-windows-10-creators-update/):
```
<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">True/PM</dpiAware>
    <dpiAwareness xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">PerMonitorV2, PerMonitor</dpiAwareness>
  </windowsSettings>
</application>
```
>
> For more details on various manifest issues, see [Manifest Madness](https://blogs.msdn.microsoft.com/chuckw/2013/09/10/manifest-madness/)
>
>
>
Upvotes: 0 <issue_comment>username_2: I found the answer in another SO post:
[Enhanced system DPI scaling with VS2017](https://stackoverflow.com/questions/46428510/enhanced-system-dpi-scaling-with-vs2017)
The correct clue is to investigate the new GDI Scaling manifest that is vastly improved in Creator's Update. That is used in System (Enhanced) setting.
None of the answers or comments came close. They kept referring to old articles. Moreover, someone marked the question negative:( Sad.
Upvotes: 3 [selected_answer]<issue_comment>username_3: DPI\_AWARENESS\_CONTEXT\_UNAWARE\_GDISCALED is a now a valid DPI\_AWARENESS\_CONTEXT in the latest Windows 10 SDK headers (17134 as of this writing), in windef.h:
define DPI\_AWARENESS\_CONTEXT\_UNAWARE\_GDISCALED ((DPI\_AWARENESS\_CONTEXT)-5)
So you should call IsValidDpiAwarenessContext and SetProcessDpiAwarenessContext to take advantage of GDI Scaling at runtime, if you wish to avoid having to do this in the manifest.
This confirms that gdiScaling is mutually exclusive from Per monitor V2.
Upvotes: 2
|
2018/03/17
| 658 | 2,156 |
<issue_start>username_0: I have a pattern which looks like:
```
abc*_def(##)
```
and i want to look if this matches for some strings.
E.x. it matches for:
```
abc1_def23
abc10_def99
```
but does not match for:
```
abc9_def9
```
So the * stands for a number which can have one or more digits.
The # stands for a number with exactly one digit.
I want the value in the parentheses as the result.
What would be the easiest and simplest solution for this problem?
Replace the \* and # through regex expression and then look if they match?
Like this:
```
pattern = pattern.replace('*', '[0-9]*')
pattern = pattern.replace('#', '[0-9]')
pattern = '^' + pattern + '$'
```
Or program it myself?
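A minimal Python sketch of the replacement approach proposed above, assuming the only special characters in such a pattern are `*`, `#` and the parentheses (which are kept as a regex capture group so the matched value can be returned):

```
import re

def build_regex(pattern):
    rx = pattern.replace('*', r'\d+')   # * -> a number with one or more digits
    rx = rx.replace('#', r'\d')         # # -> exactly one digit
    # the parentheses in the pattern become a capture group for the wanted value
    return '^' + rx + '$'

rx = re.compile(build_regex('abc*_def(##)'))

m = rx.match('abc10_def99')
print(m.group(1) if m else None)   # -> '99'
print(rx.match('abc9_def9'))       # -> None (only one digit after "def")
```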
|
2018/03/17
| 524 | 1,787 |
<issue_start>username_0: Here is a snippet of code that is used so a textbox ("TxtInput1") has only one decimal in it and only numbers in it:
```
private void TxtInput1_TextChanged(object sender, KeyPressEventArgs e)
{
if (!char.IsControl(e.KeyChar) && !char.IsDigit(e.KeyChar) && (e.KeyChar != '.'))
{
e.Handled = true;
}
// only allow one decimal point
if ((e.KeyChar == '.') && ((sender as TextBox).Text.IndexOf('.') > -1))
{
e.Handled = true;
}
}
```
But it gives me the following error:
>
> CS0123 No overload for 'TxtInput1\_TextChanged' matches delegate
> 'EventHandler'
>
>
>
I clicked on the error and it popped up with this:
```
form1.TxtInput1.Location = new System.Drawing.Point(92, 111);
form1.TxtInput1.Name = "TxtInput1";
form1.TxtInput1.Size = new System.Drawing.Size(43, 20);
form1.TxtInput1.TabIndex = 8;
form1.TxtInput1.TextChanged += new System.EventHandler(form1.TxtInput1_TextChanged);
```
The line `System.EventHandler(form1.TxtInput1_TextChanged);` is underlined red meaning it's wrong. Any fix for this issue?<issue_comment>username_1: The signature of your method does not match what is required to handle the TextChanged event. The second parameter for the TextChanged event is just `EventArgs`. But if you change it to that, the contents of your method won't then compile.
From the look of your method's signature, you need to be hooking this up to a KeyPress event instead.
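A minimal sketch of that wiring, with the handler renamed to reflect the event it actually handles (the rename is optional; control names match the question):

```
// Inside the form's constructor, after InitializeComponent():
//   TxtInput1.KeyPress += TxtInput1_KeyPress;

private void TxtInput1_KeyPress(object sender, KeyPressEventArgs e)
{
    // allow only digits, control keys and the decimal point
    if (!char.IsControl(e.KeyChar) && !char.IsDigit(e.KeyChar) && e.KeyChar != '.')
        e.Handled = true;

    // allow only one decimal point
    if (e.KeyChar == '.' && (sender as TextBox).Text.IndexOf('.') > -1)
        e.Handled = true;
}
```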
Upvotes: 2 <issue_comment>username_2: Subscribe your handler `TxtInput1_TextChanged` to `KeyPress` event of `TxtInput1` rather than `TextChanged`. Error is due to signature mismatch of delegates.
change to below :
```
form1.TxtInput1.KeyPress+= new System.KeyPressEventHandler(form1.TxtInput1_TextChanged);
```
Upvotes: 1
|
2018/03/17
| 1,226 | 3,418 |
<issue_start>username_0: I am using **sqlsrv database** connection in my project as I need to connect to Microsoft SQL Database.
I have successfully installed the **sqlsrv driver** because I can connect to the Database to retrieve data.
But when I try to do the Laravel Migration, it shows an error:
```
Illuminate\Database\QueryException : could not find driver (SQL: select * from sysobjects where type = 'U' and name = migrations)
```
**Below is my .env**
```
DB_CONNECTION=sqlsrv
DB_HOST=**********.database.windows.net
DB_PORT=1433
DB_DATABASE=****************_4cd1_9d18_2a7d9ddbcd13
DB_USERNAME=***************_4cd1_9d18_2a7d9ddbcd13_ExternalWriter
DB_PASSWORD=***************
```
**php.ini**
```
extension=php_xmlrpc.dll
extension=php_xsl.dll
extension=php_pdo_sqlsrv_7_nts_x64.dll
extension=php_pdo_sqlsrv_7_nts_x86.dll
extension=php_pdo_sqlsrv_7_ts_x64.dll
extension=php_pdo_sqlsrv_7_ts_x86.dll
extension=php_pdo_sqlsrv_71_nts_x64.dll
extension=php_pdo_sqlsrv_71_nts_x86.dll
extension=php_pdo_sqlsrv_71_ts_x64.dll
extension=php_pdo_sqlsrv_71_ts_x86.dll
extension=php_sqlsrv_7_nts_x64.dll
extension=php_sqlsrv_7_nts_x86.dll
extension=php_sqlsrv_7_ts_x64.dll
extension=php_sqlsrv_7_ts_x86.dll
extension=php_sqlsrv_71_nts_x64.dll
extension=php_sqlsrv_71_nts_x86.dll
extension=php_sqlsrv_71_ts_x64.dll
extension=php_sqlsrv_71_ts_x86.dll
```<issue_comment>username_1: You may need to **upgrade your PHP version to PHP 7.1.**
Now make sure that the files **config/database.php** and **.env** are configured properly.
See this one: <https://laracasts.com/discuss/channels/general-discussion/connect-laravel-to-microsoft-sql>
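For reference, a typical `sqlsrv` connection entry in `config/database.php` looks roughly like this (keys may vary slightly between Laravel versions):

```
'sqlsrv' => [
    'driver'   => 'sqlsrv',
    'host'     => env('DB_HOST', 'localhost'),
    'port'     => env('DB_PORT', '1433'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset'  => 'utf8',
    'prefix'   => '',
],
```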
Upvotes: 2 <issue_comment>username_2: After fighting with it for a while, I found a solution:
you need to download the driver from [here](http://php.net/manual/en/ref.pdo-sqlsrv.php) and then extract the files into your php folder.
Upvotes: 1 <issue_comment>username_3: How to use SQL Server DLL files with PHP 7.2 version on xampp
<https://github.com/Microsoft/msphpsql/releases>
To download Windows DLLs for PHP 7.1 or above from the PECL repository, please go to the SQLSRV or PDO\_SQLSRV PECL page.
<https://pecl.php.net/package/sqlsrv/5.6.1/windows>
<https://pecl.php.net/package/pdo_sqlsrv/5.6.1/windows>
NOTE: Do not forget to remove 'php\_' prefix when you add extension dll filename in php.ini file. eg.
```
extension=pdo_sqlsrv
extension=sqlsrv
```
Do not change actual filename which you put in /xampp/php/ext/ directory.
"7.2 Thread Safe (TS) x86" worked for me for 'sqlsrv' and 'pdo\_sqlsrv' extensions.
Restart xampp or apache then you can see enabled 'pdo\_sqlsrv' extension when you print phpinfo() in a test php file
[](https://i.stack.imgur.com/8ghxx.png)
It was very hard to find this solution. I hope, this will also help you. All the best :)
Upvotes: 3 <issue_comment>username_4: For PHP 7.4 on Windows 10 (x64), I've solved same error s below:
1. Downloaded the driver from
<https://www.microsoft.com/en-us/download/details.aspx?id=20098>
2. Extracted the archive and copied the file
`php_pdo_sqlsrv_74_ts_x64.dll` to the `ext` folder (`C:\Program Files\php74\ext` in my setup)
3. Added `extension=php_pdo_sqlsrv_74_ts_x64.dll` into `php.ini` file
In my `.env` file I've `DB_CONNECTION=sqlsrv` so I first thought `php_sqlsrv_74_ts_x64.dll` should be used but it didn't work.
Upvotes: 0
|
2018/03/17
| 812 | 2,733 |
<issue_start>username_0: I am using Retrofit 2 on Android Studio to get list string no names from WORDSAPI SYNONYMS.
I got JSON from
>
> <https://wordsapiv1.p.mashape.com/words/go/synonyms?mashape-key=>{my-key}
>
>
>
----JSON----------
```
{
"word": "go",
"synonyms": [
"proceed",
"run",
"depart",
"go away",
"function",
...,
"get",
"plump",
"extend",
"fit"
]
}
```
Can you help me with how to create the **Call and SynonymsList, and get the list of synonym strings** in Android code?
It is confusing to me because it is different from a plain JSON array!
Thank you!<issue_comment>username_1: Your Pojo class :-
```
public class WordsObj {
String word;
List<String> synonyms;
}
```
RetrofitInterface :-
```
@GET("yOURuRL")
Call<WordsObj> getWords();
```
Your call :-
```
Retrofit retrofit = new Retrofit.Builder()
.baseUrl("base_url")
.build();
Call call\_for\_words = retrofit.create(RetrofitInterface.class).getWords();
```
Then make your call :-
```
call_for_words.enqueue(...) // for asynchronous
```
Or,
```
call_for_words.execute() // for synchornous
```
Upvotes: 0 <issue_comment>username_2: Do it like this
>
> ApiClient:
>
>
>
```
public class ApiClient {
private static Retrofit retrofit = null;
private static final String SERVER_BASE_URL = "https://wordsapiv1.p.mashape.com";
public static Retrofit getClient() {
if (retrofit == null) {
retrofit = new Retrofit.Builder()
.baseUrl(SERVER_BASE_URL)
.addConverterFactory(GsonConverterFactory.create())
.build();
}
return retrofit;
}
}
```
>
> ApiInterface :
>
>
>
```
public interface ApiInterface {
@GET("/words/go/synonyms")
Call getDataFromServer(@Query("mashape-key") String mashape\_key);
}
```
>
> MainActivity -> onCreate() :
>
>
>
```
ServerData serverData;
ApiInterface apiService = ApiClient.getClient().create(ApiInterface.class);
Call call = apiService.getDataFromServer(mashape\_key);
call.enqueue(new Callback() {
@Override
public void onResponse(Call call, Response response) {
try {
Log.e("~~~~~ response", response.body().string());
String response = responsebody.body().string();
serverData = gson.fromJson(response, ServerData.class);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void onFailure(Call call, Throwable t) {
}
});
```
>
> ServerData :
>
>
>
```
public class ServerData {
@SerializedName("word")
private String word;
@SerializedName("synonyms")
private ArrayList<String> synonyms;
}
```
Upvotes: 2 [selected_answer]
|
2018/03/17
| 1,750 | 4,982 |
<issue_start>username_0: I want to use the tr command to map chars to new chars, for example:
>
> `echo "hello" | tr '[a-z]' '[b-za-b]'` Will output: `ifmmp`
>
>
>
(where each letter in the lower-case alphabet is shifted over one to the right)
See below the mapping to new chars for `'[b-za-b]'`:
>
> `[a b c d e f g h i j k l m n o p q r s t u v w x y z]` will map to:
>
>
> `[b c d e f g h i j k l m n o p q r s t u v w x y z a]`
>
>
>
---
However, when I want it to rotate multiple times, how can I use a variable to control the rotate-value for the tr command?
Eg: for a shift of 1:
>
> `echo "hello" | tr '[a-z]' '[b-za-b]'` without variables and:
>
>
> `echo "hello" | tr '[a-z]' '[(a+$var)-za-(a+$var)]'` where $var=1
>
>
> here I have: `(a+$var)-z` representing the same as `b-z` and
>
>
> ....................`a-(a+$var)` representing the same as `a-b`
>
>
>
I have tried converting the ascii value to a char to use within the tr command but I don't think that is allowed.
My problem is that bash is not interpreting:
`(a+$var)` as the char `b` when $var=1
`(a+$var)` as the char `c` when $var=2
... etc.
**How can I tell bash to interpret these equations as chars for the tr command**
**EDIT**
I tried doing it with an array but it's not working:
>
> chars=( {a..z} )
>
>
> var=2
>
>
> echo "hello" | tr '[a-z]' '[(${chars[var]}-z)(a-${chars[var]})]'
>
>
>
I used: `(${chars[var]}-z)` to represent `b-z` where var=1
Because `${chars[1]}` is `b` but this is not working. Am I missing something?<issue_comment>username_1: You could use octal `\XXX` character codes for characters to do what you intend. Using the octal codes you could do any arithmetic manipulations to numbers and then convert them to character codes
```bash
# rotr x
#
# if 0 <= x <= 25 rotr x outputs a set specification
# that could be used as an argument to tr command
# otherwise it outputs 'a-z'
function rotr(){
i=$(( 97 + $1 ))
if [ $i -lt 97 ] ; then
translation='a-z'
elif [ $i -eq 97 ] ; then
translation='\141-\172' # 141 is the octal code for "a"
# 172 is the octal code for "z"
elif [ $i -eq 98 ] ; then
translation='\142-\172\141'
elif [ $i -lt 122 ] ; then # $i is between 99 and 121 ("c" and "y")
ii=$(echo "obase=8 ; $i" | bc)
jj=$(echo "obase=8 ; $(( $i - 1 ))" | bc)
translation=\\$ii'-\172\141-'\\$jj
elif [ $i -eq 122 ] ; then
translation='\172\141-\171'
else # $i > 122
translation='a-z'
fi
echo $translation
}
```
Now you could use this as follows
```
echo hello | tr 'a-z' $(rotr 7)
```
prints
```
olssv
```
Upvotes: 1 <issue_comment>username_2: What you are trying to do cannot be done using `tr` which does *not* handle your requirement. Moreover when you meant to modify and use variables to add to glob patterns in `bash` which is something you cannot possibly do.
There is a neat little trick you can do with bash arrays!. The `tr` command can take expanded array sequence over the plain glob patterns also. First define a source array as
```
source=()
```
Now add its contents as a list of character ranges from `a-z` using brace expansion as
```
source=({a..z})
```
and now for the transliterating array, from the source array, construct it as follows by using the indices to print the array elements
```
trans=()
```
Using a trick to get the array elements from the last with syntax `${array[@]: (-num)}` will get you the total length - num of the elements. So building the array first as
```
var=2
trans+=( "${source[@]:(-(26-$var))}" )
```
and now to build the second part of the array, use another trick `${array[@]:0:num}` to get the first `num` number of elemtents.
```
trans+=( "${source[@]:0:$(( $var + 1 ))}" )
```
So what we have done now is for a given value of `var=2`, we built the `trans` array as
```
echo "${trans[@]}"
c d e f g h i j k l m n o p q r s t u v w x y z a b c
```
Now you can just use it easily in the `tr` command
```
echo "hello" | tr "${source[*]}" "${trans[*]}"
jgnnq
```
You can just put it all in function and print its value as
```
transChar() {
local source
local trans
local result
source=({a..z})
trans=()
var="$2"
input="$1"
trans+=( "${source[@]:(-(26-$var))}" )
trans+=( "${source[@]:0:$(( $var + 1 ))}" )
result=$( echo "$input" | tr "${source[*]}" "${trans[*]}" )
echo "$result"
}
```
Some of the tests
```
transChar "hello" 1
ifmmp
transChar "hello" 2
jgnnq
transChar "hello" 3
khoor
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: rot-random:
```
# generate alphabet as arr:
arr=( {a..z} )
i=$(($RANDOM%24+1))
# left and right
l=$(echo ${arr[$i]})
r=$(echo ${arr[$i+1]})
# reusing arr for testing:
echo ${arr[@]} | tr "a-z" "$r-za-$l"
echo "secret:" | tr "a-z" "$r-za-$l" ; echo $l $r $i
amkzmb:
h i 7
```
Upvotes: 2
|
2018/03/17
| 893 | 3,361 |
<issue_start>username_0: I'm having a weird issue with `asyncio.Queue` - instead of returning an item as soon as it available, the queue waits until it is full before returning anything. I realized that while using a queue to store frames collected from `cv2.VideoCapture`, the larger the `maxsize` of the queue was, the longer it took to show anything on screen, and then, it looked like a sequence of all the frames collected into the queue.
Is that a feature, a bug, or am i just using this wrong?
Anyway, here is my code
```
import asyncio
import cv2
import numpy as np
async def collecting_loop(queue):
print("cl")
cap = cv2.VideoCapture(0)
while True:
_, img = cap.read()
await queue.put(img)
async def processing_loop(queue):
print("pl")
await asyncio.sleep(0.1)
while True:
img = await queue.get()
cv2.imshow('img', img)
cv2.waitKey(5)
async def main(e_loop):
print("running main")
queue = asyncio.Queue(loop=e_loop, maxsize=10)
await asyncio.gather(collecting_loop(queue), processing_loop(queue))
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(main(e_loop=loop))
except KeyboardInterrupt:
pass
finally:
loop.close()
```<issue_comment>username_1: The problem is that `await q.put()` **doesn't switch** to another task every call. Actually it does only when inserting a new value is *suspended* by *queue-full* state transition.
Inserting `await asyncio.sleep(0)` **forces** task switch.
Like in multithreaded code `file.read()` doesn't enforce OS thread switching but `time.sleep(0)` does.
Misunderstandings like this are pretty common for newbies, I've discussed very similar problem yesterday, see [github issue](https://github.com/aio-libs/janus/issues/82).
**P.S.**
Your code has much worse problem actually: you call *blocking synchronous* code from async function, it just is not how asyncio works.
If no asynchronous OpenCV API exists (yet) you *should* run OpenCV functions in a separate thread.
Already mentioned [janus](https://github.com/aio-libs/janus) can help with passing data between sync and async code.
Upvotes: 3 <issue_comment>username_2: >
> Is [the queue getter not waking up until the queue fills up] a feature, a bug, or am i just using this wrong?
>
>
>
You're using it wrong, but in a subtle way. As Andrew explained, `queue.put` doesn't guarantee a task switch, and the collector coroutine only runs blocking code and `queue.put`. Although the blockade is short, asyncio doesn't know that and thinks you are invoking `queue.put` in a really tight loop. The queue getters simply don't get a chance to run until the queue fills up.
The correct way to integrate asyncio and cv is to run the cv code in a separate thread and have the asyncio event loop wait for it to finish. The `run_in_executor` method makes that really simple:
```
async def collecting_loop(queue):
print("cl")
loop = asyncio.get_event_loop()
cap = cv2.VideoCapture(0)
while True:
_, img = await loop.run_in_executor(None, cap.read)
await queue.put(img)
```
`run_in_executor` will automatically suspend the collector coroutine while waiting for a new frame, allowing for the queued frame(s) to be processed in time.
Upvotes: 3
|
2018/03/17
| 1,024 | 3,770 |
<issue_start>username_0: I am working on a task with different modules.
I require a common MongoDB connection for each and every module.
How can I write the connection in one module and then use it here, given that the db connection is also required in some other modules?
```
var MongoClient = require('mongodb').MongoClient;
var url = "mongodb://localhost:27017/";
var dbo;
MongoClient.connect(url, function(err, db) {
if (err) throw err;
dbo = db.db("mydb");
});
router.post('/', function(req, res) {
dbo.collection("customers").find({"userid":req.body.userid}).toArray(function(err, result) {
if (err) throw err;
if(result.length>0){
res.send("username already taken please enter differnt username ")
}
else if(req.body.fname==undefined||(!validator.isAlpha(req.body.fname))){
res.send("please enter only alphabets as fname ")
}
else if(req.body.lname==undefined||(!validator.isAlpha(req.body.lname))){
res.send("please enter only alphabets as lname ")
}
else if(req.body.userid==undefined||(!validator.isAlphanumeric(req.body.userid))){
res.send("please enter only alphanemric as user name ")
}
else if(req.body.pwd==undefined||req.body.pwd.length<6){
res.send("please enter atleast 6 charcaters as password ")
}
else{
var bcrypt = require('bcryptjs');
var salt = bcrypt.genSaltSync(10);
var hash = bcrypt.hashSync(req.body.pwd, salt);
req.body.pwd=hash;
dbo.collection("customers").insertOne(req.body, function(err, res) {
if (err) throw err;
console.log("1 document inserted");
});
res.send(req.body);
}
});
});
module.exports = router;
```
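A minimal sketch of one common pattern: put the connection in its own module, connect once at startup, and require that module wherever the handle is needed (file and database names here are illustrative):

```
// db.js
var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost:27017/';
var dbo = null;

function connect(callback) {
    if (dbo) return callback(null, dbo);          // reuse the existing connection
    MongoClient.connect(url, function (err, client) {
        if (err) return callback(err);
        dbo = client.db('mydb');
        callback(null, dbo);
    });
}

function getDb() {
    return dbo;                                   // valid once connect() has completed
}

module.exports = { connect: connect, getDb: getDb };
```

Call `connect()` once when the app starts (before the routes are mounted); inside any router you can then do `var dbo = require('./db').getDb();` and use it as in the question.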
|
2018/03/17
| 704 | 2,606 |
<issue_start>username_0: [TinyDb class is here](https://github.com/kcochibili/TinyDB--Android-Shared-Preferences-Turbo)
I'm using it simply with the default code, like: `TinyDB tinyDB = new TinyDB(MyActivity.this);` and `tinyDB.putInt("hadi" , 10);`
but I'm getting an error that I can't understand. It says tinyDB is a null object reference. You can see the error below:
```
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.example.ahmadi.TinyDB.TinyDB.putInt(java.lang.String, int)' on a null object reference
```
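The `NullPointerException` means `tinyDB` is being used before it has been assigned. A minimal sketch of initialising it in `onCreate()` before any call (class and layout names are illustrative):

```
public class MyActivity extends AppCompatActivity {

    private TinyDB tinyDB;                 // assigned in onCreate, not at the call site

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my);

        tinyDB = new TinyDB(this);         // initialise first ...
        tinyDB.putInt("hadi", 10);         // ... then use it
    }
}
```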
|
2018/03/17
| 990 | 3,249 |
<issue_start>username_0: Just confused as to why, on hover with border-bottom, the border plays well with the content, but when changed to border-top it pushes the content down.
Here's the code...
```css
ul {
list-style-type: none;
margin: 0;
padding: 0;
width: 15%;
position: fixed;
background-color: yellow;
}
li a {
display: block;
height: 50px;
color: red;
text-decoration: none;
text-align: center;
text-transform: uppercase;
line-height: 50px;
transition: .2s all linear;
}
li a:hover {
color: white;
border-bottom: 50px solid lightskyblue;
box-sizing: border-box;
}
```
```html
* [peace](#peace)
* [love](#love)
```<issue_comment>username_1: See, `border-top` adds height to the element from the top downwards, and `border-bottom` also adds height downwards from the bottom... so applying `border-top` will always push your text down.
Actually, you have applied `box-sizing: border-box` with `height: 50px` to that element... it means the element's border will not affect the height of the element when a `height` value is specified...
As MDN web docs says
>
> [***border-box*** tells the browser to account for any border and padding in the values you specify for width and height. If you set an element's width to 100 pixels, that 100 pixels will include any border or padding you added, and the content box will shrink to absorb that extra width. This typically makes it much easier to size elements](https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing)
>
>
>
If you remove `height` from the element, you will see that `border-bottom` adds space below the text because no `height` value is defined...
```css
ul {
list-style-type: none;
margin: 0;
padding: 0;
width: 15%;
position: fixed;
background-color: yellow;
}
li a {
display: block;
color: red;
/* height: 50px; */
text-decoration: none;
text-align: center;
text-transform: uppercase;
line-height: 50px;
transition: .2s all linear;
box-sizing: border-box;
}
li a:hover {
color: white;
border-bottom: 50px solid lightskyblue;
}
```
```html
* [peace](#peace)
```
Upvotes: 0 <issue_comment>username_2: I would change the structure a bit to use an overlay instead of borders, without the need for fixed heights.
From what I see the border-top issue is actually the expected behaviour. Both `border-top` and `border-bottom` push content downwards, but since you are using `box-sizing: border-box` and the bottom border is already underneath the text you won't see the content go down.
```css
ul {
list-style-type: none;
margin: 0;
padding: 0;
width: auto;
position: fixed;
background-color: yellow;
}
li a{
display: block;
position: relative;
color: red;
text-decoration: none;
text-align: center;
text-transform: uppercase;
padding: 20px;
line-height: 1;
}
li a .overlay{
position:absolute;
left: 0;
right: 0;
top: 0;
bottom: 100%;
transition: .2s all linear;
background-color: rgba(0,0,0,0.3);
}
li a:hover .overlay {
bottom: 0;
}
```
```html
* [peace](#peace)
* [love](#love)
```
Upvotes: 1
|
2018/03/17
| 691 | 2,195 |
<issue_start>username_0: For a bit of a practice assignment, my professor challenged the lecture to write up some code that removes HTML tags from a string of text. He mentioned a specific command that we would learn later on that would do this for us, but he wants us to do so manually.
Here's what I have so far:
```
#include <iostream>
#include <string>
using namespace std;
int main() {
string name = " smelly butts smell";
cout << name << endl;
int a = 0, b = 0;
for (int a = b; a < name.length(); a++) {
if (name[a] == '<') {
for (int b = a; b < name.length(); b++) {
if (name[b] == '>') {
name.erase(a, (b + 1));
break;
}
}
}
}
cout << name << endl;
system("pause");
return 0;
}
```
I feel like I'm close, but I'm not getting the correct output.<issue_comment>username_1: ```
for (int b = a; b < name.length(); b++) {
if (name[b] == '>') {
name.erase(a, (b + 1));
break;
}
}
```
In this part of code your are erasing a part of length (b), while you should erase a part of length (b - a)
Try this one:
```
for (int b = a; b < name.length(); b++) {
if (name[b] == '>') {
name.erase(a, (b - a + 1));
break;
}
}
```
It should works as you want.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Here is another less convoluted and slightly cleaner way that is arguably more readable. It does not deal with nested tags, but you could expand upon it to make it better.
```
#include <iostream>
#include <string>
int main()
{
std::string html = " Something **slightly less** profane here ";
while (html.find("<") != std::string::npos)
{
auto startpos = html.find("<");
auto endpos = html.find(">");
if (endpos != std::string::npos)
{
// erase from '<' through the matching '>'
html.erase(startpos, endpos - startpos + 1);
}
else
{
break; // unmatched '<' -- avoid looping forever
}
}
std::cout << html << '\n';
return 0;
}
```
For clarity, `std::string::npos` is returned when the sought-after string is not found. So, while there are still HTML opening tags in the document, erase everything from the first opening bracket through the first closing bracket you can find. It does not distinguish something like `5 < 2` from a real tag, for example, so there are flaws, but it shows a different approach you can apply as a starting point.
Upvotes: 2
|
2018/03/17
| 5,239 | 19,909 |
<issue_start>username_0: I've made a very simple Android puzzle app with a ViewPager that lets the user swipe through an array of puzzles. I'm seeing an error in production that I don't know how to reproduce or debug:
```
java.lang.IllegalStateException:
at android.support.v4.view.ViewPager.a (ViewPager.java:204)
at android.support.v4.view.ViewPager.c (ViewPager.java:2)
at android.support.v4.view.ViewPager.onMeasure (ViewPager.java:207)
at android.view.View.measure (View.java:22002)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure (FrameLayout.java:185)
at android.view.View.measure (View.java:22002)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6580)
at android.support.v7.internal.widget.ActionBarOverlayLayout.onMeasure (ActionBarOverlayLayout.java:257)
at android.view.View.measure (View.java:22002)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure (FrameLayout.java:185)
at android.view.View.measure (View.java:22002)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6580)
at android.widget.LinearLayout.measureChildBeforeLayout (LinearLayout.java:1514)
at android.widget.LinearLayout.measureVertical (LinearLayout.java:806)
at android.widget.LinearLayout.onMeasure (LinearLayout.java:685)
at android.view.View.measure (View.java:22002)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure (FrameLayout.java:185)
at com.android.internal.policy.DecorView.onMeasure (DecorView.java:721)
at android.view.View.measure (View.java:22002)
at android.view.ViewRootImpl.performMeasure (ViewRootImpl.java:2414)
at android.view.ViewRootImpl.performTraversals (ViewRootImpl.java:2159)
at android.view.ViewRootImpl.doTraversal (ViewRootImpl.java:1390)
at android.view.ViewRootImpl$TraversalRunnable.run (ViewRootImpl.java:6762)
at android.view.Choreographer$CallbackRecord.run (Choreographer.java:966)
at android.view.Choreographer.doCallbacks (Choreographer.java:778)
at android.view.Choreographer.doFrame (Choreographer.java:713)
at android.view.Choreographer$FrameDisplayEventReceiver.run (Choreographer.java:952)
at android.os.Handler.handleCallback (Handler.java:789)
at android.os.Handler.dispatchMessage (Handler.java:98)
at android.os.Looper.loop (Looper.java:169)
at android.app.ActivityThread.main (ActivityThread.java:6595)
at java.lang.reflect.Method.invoke (Method.java)
at com.android.internal.os.Zygote$MethodAndArgsCaller.run (Zygote.java:240)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:767)
```
What should I do to figure out how to reproduce this error myself, and to figure out how to fix it? My android programming skills are fairly limited, and the stack trace above is extremely cryptic to me.
In case it's helpful, here is a second stack trace that appears to be related:
```
java.lang.IllegalStateException:
at android.support.v4.view.ViewPager.access$000 (ViewPager.java)
or .access$200 (ViewPager.java)
or .addNewItem (ViewPager.java)
or .calculatePageOffsets (ViewPager.java)
or .canScroll (ViewPager.java)
or .completeScroll (ViewPager.java)
or .determineTargetPage (ViewPager.java)
or .distanceInfluenceForSnapDuration (ViewPager.java)
or .executeKeyEvent (ViewPager.java)
or .getChildRectInPagerCoordinates (ViewPager.java)
or .infoForChild (ViewPager.java)
or .initViewPager (ViewPager.java)
or .isGutterDrag (ViewPager.java)
or .onPageScrolled (ViewPager.java)
or .onSecondaryPointerUp (ViewPager.java)
or .populate (ViewPager.java)
or .recomputeScrollPosition (ViewPager.java)
or .scrollToItem (ViewPager.java)
or .setCurrentItem (ViewPager.java)
or .setCurrentItemInternal (ViewPager.java)
or .smoothScrollTo (ViewPager.java)
at android.support.v4.view.ViewPager.arrowScroll (ViewPager.java)
or .populate (ViewPager.java)
or .requestParentDisallowInterceptTouchEvent (ViewPager.java)
at android.support.v4.view.ViewPager.onMeasure (ViewPager.java)
at android.view.View.measure (View.java:19759)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6124)
at android.widget.FrameLayout.onMeasure (FrameLayout.java:185)
at android.view.View.measure (View.java:19759)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6124)
at android.support.v7.internal.widget.ActionBarOverlayLayout.onMeasure (ActionBarOverlayLayout.java)
at android.view.View.measure (View.java:19759)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6124)
at android.widget.FrameLayout.onMeasure (FrameLayout.java:185)
at android.view.View.measure (View.java:19759)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6124)
at android.widget.LinearLayout.measureChildBeforeLayout (LinearLayout.java:1464)
at android.widget.LinearLayout.measureVertical (LinearLayout.java:758)
at android.widget.LinearLayout.onMeasure (LinearLayout.java:640)
at android.view.View.measure (View.java:19759)
at android.view.ViewGroup.measureChildWithMargins (ViewGroup.java:6124)
at android.widget.FrameLayout.onMeasure (FrameLayout.java:185)
at com.android.internal.policy.DecorView.onMeasure (DecorView.java:687)
at android.view.View.measure (View.java:19759)
at android.view.ViewRootImpl.performMeasure (ViewRootImpl.java:2283)
at android.view.ViewRootImpl.performTraversals (ViewRootImpl.java:2036)
at android.view.ViewRootImpl.doTraversal (ViewRootImpl.java:1258)
at android.view.ViewRootImpl$TraversalRunnable.run (ViewRootImpl.java:6348)
at android.view.Choreographer$CallbackRecord.run (Choreographer.java:871)
at android.view.Choreographer.doCallbacks (Choreographer.java:683)
at android.view.Choreographer.doFrame (Choreographer.java:619)
at android.view.Choreographer$FrameDisplayEventReceiver.run (Choreographer.java:857)
at android.os.Handler.handleCallback (Handler.java:751)
at android.os.Handler.dispatchMessage (Handler.java:95)
at android.os.Looper.loop (Looper.java:154)
at android.app.ActivityThread.main (ActivityThread.java:6123)
at java.lang.reflect.Method.invoke (Method.java)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:867)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:757)
```
Should I post a shortened/simplified version of my ViewPager code?
---
Edit: Here's the class that does most of the work:
```
public class SolvePuzzle extends ActionBarActivity {
// The code is longer than is shown here, but hopefully this is enough to be helpful
static AppSectionsPagerAdapter mAppSectionsPagerAdapter;
static ViewPager mViewPager;
static int level;
static String images[];
static String levelDescriptions[];
static String puzzles[];
static String answers[];
static String hints[];
static String congratulationsArray[];
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_solve_puzzle);
// Some code related to getSupportActionBar(); which I've cut out for brevity
Intent intent = getIntent();
if (intent != null) {
level = intent.getIntExtra(PuzzleSelection.LEVEL, 0); // Possible bug here with default level set to 0?
}
// Here there's code that populates images, levelDescriptions, puzzles, answers, and other arrays using the level integer
// The arrays will have different lengths depending on the level
// i.e. there is a different number of puzzles in each level
// I do something along the lines of puzzles = getPuzzlesGivenLevel(level); and similarly for answers, hints, etc.
mAppSectionsPagerAdapter = new AppSectionsPagerAdapter(getSupportFragmentManager());
mViewPager = (ViewPager) findViewById(R.id.pager);
mViewPager.setAdapter(mAppSectionsPagerAdapter);
mViewPager.setOnPageChangeListener(new OnPageChangeListener() {
@Override public void onPageScrollStateChanged(int arg0) {
}
@Override public void onPageScrolled(int arg0, float arg1, int arg2) {
}
@Override public void onPageSelected(int puzzleIndex) {
callSetShareIntent(puzzles[puzzleIndex]); // Make share button share the correct puzzle
}
});
mViewPager.setCurrentItem(indexFirstUnsolvedPuzzle());
}
private int indexFirstUnsolvedPuzzle() {
// Gets index of first unsolved puzzle in this level
}
private void callSetShareIntent(String puzzleStatement) {
// Creates an Intent called shareIntent; and then calls setShareIntent(shareIntent);
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
switch (item.getItemId()) {
case android.R.id.home:
NavUtils.navigateUpFromSameTask(this);
return true;
}
return super.onOptionsItemSelected(item);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.solve_puzzle, menu);
MenuItem item = menu.findItem(R.id.menu_item_share);
// Following http://stackoverflow.com/questions/19118051/unable-to-cast-action-provider-to-share-action-provider
mShareActionProvider = (ShareActionProvider) MenuItemCompat.getActionProvider(item);
if (mShareActionProvider == null) {
// Following http://stackoverflow.com/questions/19358510/why-menuitemcompat-getactionprovider-returns-null
mShareActionProvider = new ShareActionProvider(this);
MenuItemCompat.setActionProvider(item, mShareActionProvider);
}
callSetShareIntent(puzzles[mViewPager.getCurrentItem()]);
return true; // Return true to display menu
}
private void setShareIntent(Intent shareIntent) {
if (mShareActionProvider != null) {
mShareActionProvider.setShareIntent(shareIntent); // Should be called whenever new fragment is displayed
}
}
public class AppSectionsPagerAdapter extends FragmentPagerAdapter {
public AppSectionsPagerAdapter(FragmentManager fm) {
super(fm);
}
@Override
public Fragment getItem(int puzzleIndex) {
Fragment fragment = new SolvePuzzleFragment();
Bundle args = new Bundle();
args.putInt(PUZZLE_INDEX, puzzleIndex);
fragment.setArguments(args);
return fragment;
}
@Override
public int getCount() {
return puzzles.length;
}
}
public static class SolvePuzzleFragment extends Fragment implements OnClickListener {
public double correctAnswer;
public int puzzleIndex;
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
View rootView = inflater.inflate(R.layout.fragment_solve_puzzle, container, false);
Bundle args = getArguments();
puzzleIndex = args.getInt(PUZZLE_INDEX);
// Sets a bunch of TextViews using the puzzleIndex
// For example, get string in puzzles[puzzleIndex] and put it in a TextView, et cetera
// Set a bunch of onClickListenters
}
// A bunch of functions for checking the user's answer, opening congratulations, etc
// E.g. public void openCongratulationsAlert(View view) { ... }, public void openIncorrectAnswerToast() { ... }
}
}
```
Please let me know if this needs more explanation.<issue_comment>username_1: I installed your app in an emulator and was able to reproduce this crash with the monkey tool (<https://developer.android.com/studio/test/monkey.html>). This tool simulates user actions such as clicks, touch, rotations, etc., at a high speed. I couldn't reproduce it manually though.
You wrote in your question that you wanted to know how to figure out what the problem could be, so I will explain my process in detail. For the actual solution, skip to section 5.
**1. How to know the message of the IllegalStateException**
The first stack trace says the exception was thrown at ViewPager.java, so I searched for the string "throw new IllegalStateException" in that file. There are different versions, but the few I checked throw this exception in only **6 places**,
* **two** are about programmatic ViewPager drags, but your app doesn't do that, so I discarded these
* **two** are about to not calling super methods within a class that inherits from ViewPager, but since in your code you're just using ViewPager, I discarded those too
* **one** was in `addView()`, which does not appear on your stack traces, so I discarded that too.
**The only one remaining** is thrown at `populate(int)`, and it **had to be** this one. But just to be sure, I checked the second stacktrace:
* The fourth "at" line says `ViewPager.onMeasure()` was called. There are no "or" lines here, so this is for sure.
* The third "at" line gives three options. Looking at the source, only `populate()` is ever called from `onMeasure()`, so the it had to be that one.
* The second "at" line gives 21 options, but again looking at the source, `populate()` does just calls `populate(int)`... the same method as before, and so it all fits.
Here is the throwing code in `populate(int)`:
```
throw new IllegalStateException("The application's PagerAdapter changed the adapter's"
+ " contents without calling PagerAdapter#notifyDataSetChanged!"
+ " Expected adapter item count: " + mExpectedAdapterCount + ", found: " + N
+ " Pager id: " + resName
+ " Pager class: " + getClass()
+ " Problematic adapter: " + mAdapter.getClass());
```
And this is why I answered this the first time.
**2. Reproducing the bug**
After downloading your app in an emulator, I ran this from the command line:
`adb shell monkey -p atorch.statspuzzles -v --pct-rotation 20 100000`
This sends the emulator 100.000 semi-random touch events, rotations, volume up/down, etc., at a very high speed, to test the app under stress. If you run this command with a debug build of your app, you should be able to see the stack trace I got in step 1.
**3. Getting information about the bug (mostly speculative)**
From here, you can put a lot of `Log.d()` lines in your code, run `monkey`, and that should give you an idea of what your users did to crash your app. Of course I can't do that, so I wrote a tiny app with the code you provided, and ran it with `monkey` to see if I could get something else. What I did was:
1. Removed all references to your R class (like, replaced your layout with one of my own, with just a ViewPager, and replaced your Fragment layout with a simple View.
2. Returned zero from `indexFirstUnsolvedPuzzle()`, just for Android Studio not to bother
3. In the Fragment class I added an `onClickListener` which launched a dialog, just because your app has dialogs.
4. Simulated level selection with a `PuzzleSelection` Activity that randomly selects a level launches `SolvePuzzle` at onResume()
5. Added `onBackPressed()` on both Activities, just to be able to log the back press.
6. I added `Log.d()` on every single method to be able to follow the process in logcat.
I ran this app with `monkey` and just before crashing I got this in the log:
```
(section 1)
SolvePuzzle@84e25c: back pressed
ViewPager{9256292}: scrolled, offset: 9.2589855E-4
AppSectionsPagerAdapter@4b49965: getCount() invoked, we have 3 elements
(section 2)
Creating Activity PuzzleSelection@580af24. Orientation: portrait
Starting puzzle for level 1
ViewPager{9256292}: scrolled, offset: 0.0
AppSectionsPagerAdapter@4b49965: getCount() invoked, we have 3 elements
AppSectionsPagerAdapter@4b49965: getCount() invoked, we have 3 elements
(section 3)
Creating Activity SolvePuzzle@16d7aaffor level 3. Orientation: portrait
Creating adapter AppSectionsPagerAdapter@958bebc with 2 elements.
AppSectionsPagerAdapter@958bebc: getCount() invoked, we have 2 elements
AppSectionsPagerAdapter@958bebc: getCount() invoked, we have 2 elements
ViewPager{c002954}: scrollStateChanged to 0
AppSectionsPagerAdapter@4b49965: getCount() invoked, we have 2 elements
```
**4. Cause of the crash**
Note that the adapter in section 1 (@4b49965) kept getting called in section 2, when `SolvePuzzle` was no longer showing on the screen. It was called even in section 3, after a new Adapter had been created. The result of getCount() is different in section 1 and section 3 for this adapter, which means that **the adapter had changed its contents**, and so the exception was thrown.
Most likely this adapter kept being used after its `SolveActivity` had been finished because its `ViewPager` was doing some housekeeping after becoming invisible. The problem is that the old Activity read an reference to a String array that had just been written by the new Activity, and **this only happened because the arrays are static members** of `SolvePuzzle`.
(As a side note, those static arrays indeed cause another bug in your app: if you choose level X, solve a problem there and go back to the menu through the congratulations dialog, then choose another level Y, and then press back twice, you end up in level Y, instead of level X.)
**5. Solution**
As a general rule, only use static fields when they are truly immutable, such as constants, or at least when you can synchronize concurrent access properly. Static classes, on the other hand, are almost always preferable to inner classes, becuse of memory leaks.
In your particular case, you should:
1. Remove the `static` keyword from all your arrays (and really, from all your fields).
AppSectionsPagerAdapter mAppSectionsPagerAdapter;
ViewPager mViewPager;
int level;
String images[];
String levelDescriptions[];
String puzzles[];
String answers[];
String hints[];
String congratulationsArray[];
You can also make these fields private, though that's not strictly necessary.
2. Make your adapter class static as well. Android Studio will complain that it can't find `puzzles`, so you can pass it via constructor, and also pass the other former static arrays.
```
public static class AppSectionsPagerAdapter extends FragmentPagerAdapter {
private String images[];
private String puzzles[];
...
public AppSectionsPagerAdapter(FragmentManager fm,
String[] puzzles,
String[] images,
...
) {
super(fm);
this.puzzles = puzzles;
this.images = images;
...
}
/* ... */
}
```
3. `In getItem()`, instead of passing Fragments the array index, pass them the specific values they need:
```
@Override
public Fragment getItem(int puzzleIndex) {
Fragment fragment = new SolvePuzzleFragment();
Bundle args = new Bundle();
args.putString(PUZZLE_CONTENT, puzzles[puzzleIndex]);
args.putString(PUZZLE_IMAGE, images[puzzleIndex]);
...
fragment.setArguments(args);
return fragment;
}
```
4. In the fragment, retrieve the new arguments instead of the array index.
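A minimal sketch of step 4 (the keys match the ones used in `getItem()` above):

```
// inside SolvePuzzleFragment.onCreateView()
Bundle args = getArguments();
String puzzleContent = args.getString(PUZZLE_CONTENT);
String puzzleImage = args.getString(PUZZLE_IMAGE);
// ...bind these values to the fragment's TextViews / ImageView instead of
// reading them from the (formerly static) arrays by index
```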
That should do it.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The problem is in these lines:
`static AppSectionsPagerAdapter mAppSectionsPagerAdapter;
static ViewPager mViewPager;`
Why did you make them *static*?
Upvotes: 2
|
2018/03/17
| 520 | 1,762 |
<issue_start>username_0: I want to define lengths for some particular columns in a select statement, and I want to concatenate two columns, i.e. `sponsor_id` and `sponsor`, like "ABC-123", in SAS proc sql. Please help.
here is the code
```
proc sql;
select
project_id,
sponsor_id,
empl_country,
region,
empl_dept_descr,
empl_bu_descr,
sponsor,
full_name,
mnth_name FROM Stage0;
quit;
```<issue_comment>username_1: The `CATX` function will concatenate any number of arguments, of any type, strip the values and place a common delimiter (also stripped) between each. For example:
```
proc sql;
create table want as
select
catx('-', name, age) as name_age length=20
, catx(':', name, sex, height) as name_gender_height
from sashelp.class;
```
The length of a new variable will be 200 characters if the variable the CATX result is being assigned to does not have a length specified.
Stripping means the leading and trailing spaces are removed. Arguments that are missing values do not become part of the concatenation.
[SAS Documentation for CATX](http://support.sas.com/documentation/cdl/en/lrdict/64316/HTML/default/viewer.htm#a002257076.htm)
Upvotes: 3 <issue_comment>username_2: If you don't know the length you can use a strip() function which will remove leading and trailing spaces, in this case it will remove the spaces generated by the catx() default length:
strip(catx('-', sponsor\_id, sponsor))
**Code:**
```
proc sql;
select
project_id,
sponsor_id,
empl_country,
region,
empl_dept_descr,
empl_bu_descr,
sponsor,
full_name,
mnth_name ,
/* New column */
strip(catx('-', sponsor_id, sponsor)) as new_id
FROM Stage0;
quit;
```
Upvotes: 2 [selected_answer]
|
2018/03/17
| 1,040 | 4,044 |
<issue_start>username_0: I have an activity which lets the user click on whichever mood he is in. I have created a GridView and inflated it with a RelativeLayout containing an ImageView and a TextView. I have set a custom adapter on it. I have set an onItemClickListener on the GridView but it is not responding.
Java class and adaptor class
```
public class SelectMoodActivity extends Activity {
protected ArrayList data;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity\_select\_mood);
data = MoodProvider.getData();
ArrayAdapter moodsArrayAdapter = new CustomGridViewAdapter(this, R.layout.row\_grid, data);
GridView gridView = findViewById(R.id.moodSelect);
gridView.setAdapter(moodsArrayAdapter);
/\*
Tried to setOnClickEvent for MOOD Select \*/
gridView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView parent, View view, int position, long id) {
Toast.makeText(getApplicationContext(), "You selected a Mood", Toast.LENGTH\_LONG).show();
}
});
}
class CustomGridViewAdapter extends ArrayAdapter {
Context context;
ArrayList moodsArrayList;
int resource;
public CustomGridViewAdapter(Context context, int resource, ArrayList objects) {
super(context, resource, objects);
this.context = context;
this.moodsArrayList = objects;
this.resource = resource;
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
View view = convertView;
ViewHolder holder;
if(view == null){
LayoutInflater inflater = LayoutInflater.from(context);
view = inflater.inflate(resource, parent, false);
holder = new ViewHolder();
holder.moodTitle = view.findViewById(R.id.item\_text);
holder.moodIcon = view.findViewById(R.id.item\_image);
view.setTag(holder);
}else{
holder = (ViewHolder) view.getTag();
}
Moods mood = moodsArrayList.get(position);
holder.moodTitle.setText(mood.getMoodTitle());
int res = context.getResources().getIdentifier(mood.getMoodIcon(), "drawable", context.getPackageName());
holder.moodIcon.setImageResource(res);
/\*
Tried to setOnClickEvent for MOOD Select inside the adaptor but this also didn't work.
view.setId(position);
view.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Toast.makeText(getApplicationContext(), "Please Enter All Details", Toast.LENGTH\_LONG).show();
}
});
\*/
return view;
}
}
static class ViewHolder {
TextView moodTitle;
ImageView moodIcon;
}
}
```
XML Files
```
xml version="1.0" encoding="utf-8"?
```
row\_grid.xml
```
xml version="1.0" encoding="utf-8"?
<issue_comment>username_1: Add **android:descendantFocusability="blocksDescendants"** on the RelativeLayout in row_grid.xml,
add
**android:focusable="false"** on the TextView and ImageView inside that RelativeLayout,
and
**android:focusable="true"** on the GridView.
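A rough sketch of a `row_grid.xml` with those attributes applied (view ids taken from the adapter code; everything else is illustrative):

```
<!-- row_grid.xml -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:descendantFocusability="blocksDescendants">

    <ImageView
        android:id="@+id/item_image"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:focusable="false" />

    <TextView
        android:id="@+id/item_text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/item_image"
        android:focusable="false" />
</RelativeLayout>
```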
Upvotes: 0 <issue_comment>username_2: Make an interface in the custom adapter class and use it in the click event, like the code below...
```
onItemClickListner onItemClickListner;
public void setOnItemClickListner(CustomGridViewAdapter.onItemClickListner onItemClickListner) {
this.onItemClickListner = onItemClickListner;
}
public interface onItemClickListner {
void onClick();// hear you can pass any type of data as a argument;
}
```
Then, in the getView() method of the adapter class, add the code below to handle the click event.
```
view.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
onItemClickListner.onClick();
}
});
```
Then, after you bind the adapter to the GridView in the activity, use the code below on the adapter object to handle the event.
```
moodsArrayAdapter.setOnItemClickListner(new CustomGridViewAdapter.onItemClickListner() {
@Override
public void onClick() {
Toast.makeText(getApplicationContext(),"Hello world",0).show();
}
});
```
Upvotes: 1
|
2018/03/17
| 1,243 | 3,571 |
<issue_start>username_0: I have a binary image of cells. I want to use python to separate these cells individually. Each cell will save in an image. For example, I have 1000 cells, then the output will be 1000 images, each image contains 1 cell. Currently, I am using two ways to obtain it but the output is wrong
```
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage import morphology
import numpy as np
import cv2
from scipy import ndimage
from skimage import segmentation
image=cv2.imread('/home/toanhoi/Downloads/nuclei/9261_500_f00020_mask.png',0)
image=image[300:600,600:900]
# First way: peak_local_max
distance = ndimage.distance_transform_edt(image)
local_maxi = peak_local_max(distance, indices=False, footprint=np.ones((3, 3)), labels=image)
markers = morphology.label(local_maxi)
labels_ws = watershed(-distance, markers, mask=image)
markers[~image] = -1
labels_rw = segmentation.random_walker(image, markers)
cv2.imshow('watershed',labels_rw)
cv2.waitKey(5000)
# Second way way: using contour
_,contours,heirarchy=cv2.findContours(image,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image,contours,-1,(125,125,0),1)
cv2.imshow('contours',image)
cv2.waitKey(5000)
```
[](https://i.stack.imgur.com/sLUel.png)<issue_comment>username_1: one way to do it is,
```
import numpy as np
import cv2
img = cv2.imread('sLUel.png') # image provided in question
img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
mask = np.zeros(img.shape[:2],np.uint8)
_,contours,hierarchy = cv2.findContours(img,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_NONE)
count = 0
for i, c in enumerate(contours):
if hierarchy[0,i,3] != -1:
continue
mask_copy = mask.copy()
cv2.drawContours(mask_copy,c,-1,(255,255,255),-1)
cv2.floodFill(mask_copy,None,(0,0),255)
mask_inv=cv2.bitwise_not(mask_copy)
cv2.imwrite(str(count)+'.png', mask_inv)
count+=1
```
there are 420 cells in the image.
Upvotes: 3 [selected_answer]<issue_comment>username_2: One can do the same with `scikit-image morphology's label()` function by finding the connected components in the binary image. Number of cells found is 421 though. `Morphologial` operations such as `erosion / dilation / closing / opening` can also be used for pre-processing the input image and to get the desired output.
```
from skimage import morphology as morph
from skimage.io import imread, imsave
from skimage.color import rgb2gray
import numpy as np
import matplotlib.pyplot as plt
im = rgb2gray(imread('sLUel.png'))
#im = (im > 0).astype(np.uint8)
#labeled = morph.label(morph.binary_opening(im, selem=morph.disk(radius=2)), connectivity=2)
labeled = morph.label(im, connectivity=2)
print(len(np.unique(labeled)))
for i in np.unique(labeled)[1:]: # skip the first component since it's the background
im_obj = np.zeros(im.shape)
im_obj[labeled == i] = 1
imsave('sLUel_{:03d}.png'.format(i), im_obj)
plt.figure(figsize=(20,10))
plt.subplot(121), plt.imshow(im), plt.axis('off'), plt.title('original binary image', size=15)
plt.subplot(122), plt.imshow(labeled, cmap='spectral'), plt.axis('off'), plt.title('connected components (radius 2)', size=15)
plt.show()
```
with the following output
[](https://i.stack.imgur.com/uh8lm.png)
Here are the cells identified and separated:
[](https://i.stack.imgur.com/juLtc.gif)
Upvotes: 3
|
2018/03/17
| 958 | 3,774 |
<issue_start>username_0: I have an interesting problem. Maybe I'm approaching this problem wrong, but I feel that I'm on the right track. I am trying to make a superclass that my subclasses can reference. I have most of the code set up but can't figure out the default constructor. Here is the error I'm getting:
>
> Cannot reference 'GregorianDate.day' before supertype constructor has been called.
>
>
>
```
public class GregorianDate extends Date {
//Initialise the variables
public int month = 1;
public int day = 1;
public int year = 1970;
//*************** Constructors ***********************
GregorianDate() {
super(month,day,year);
long numToAdd = System.currentTimeMillis();
numToAdd += java.util.TimeZone.getDefault().getRawOffset();
numToAdd /= 86400000;
super.addDays(numToAdd);
}
//Parameterized constructor
GregorianDate(int passedYear, int passedMonth, int passedDay){
super(passedMonth, passedDay, passedYear);
}
```
SuperClass file:
```
public class Date {
public int month;
public int day;
public int year;
Date(int passedMonth, int passedDay, int passedYear){
month = passedMonth;
day = passedDay;
year = passedYear;
}
```
I've tried adding a default constructor with nothing and calling it by super(); with the same result. Any help is appreciated.<issue_comment>username_1: I think you have to call super constructor first.
You can try it
```
public class Date {
public int month;
public int day;
public int year;
Date() {} // no-argument constructor
Date(int passedMonth, int passedDay, int passedYear){
month = passedMonth;
day = passedDay;
year = passedYear;
}
```
Then call it first in your subclass constructor, before the rest of the constructor code.
```
public class GregorianDate extends Date {
//Initialise the variables
public int month = 1;
public int day = 1;
public int year = 1970;
//*************** Constructors ***********************
GregorianDate() {
super(month,day,year);
long numToAdd = System.currentTimeMillis();
numToAdd += java.util.TimeZone.getDefault().getRawOffset();
numToAdd /= 86400000;
super.addDays(numToAdd);
}
//Parameterized constructor
GregorianDate(int passedYear, int passedMonth, int passedDay){
super(passedMonth, passedDay, passedYear);
}
```
Upvotes: 0 <issue_comment>username_2: A few problems with your code.
First, as the error suggests, you cannot use the member variables before the super call because the object has not been fully initialized yet. Second, your subclass shouldn't have the same public variables as the superclass. Third, a common practice in Java is to use getters/setters instead of public variables.
Here is my version of your code:
```
public class GregorianDate extends Date {
//Define constants
private final static int MONTH = 1;
private final static int DAY = 1;
private final static int YEAR = 1970;
//*************** Constructors ***********************
GregorianDate() {
super(MONTH,DAY,YEAR);
long numToAdd = System.currentTimeMillis();
numToAdd += java.util.TimeZone.getDefault().getRawOffset();
numToAdd /= 86400000;
super.addDays(numToAdd);
}
//Parameterized constructor
GregorianDate(int passedYear, int passedMonth, int passedDay){
super(passedMonth, passedDay, passedYear);
}
//getters and setters here
}
```
SuperClass file:
```
public class Date {
private int month;
private int day;
private int year;
Date(int passedMonth, int passedDay, int passedYear){
this.month = passedMonth;
this.day = passedDay;
this.year = passedYear;
}
//Getters and setters here
}
```
Upvotes: 3 [selected_answer]
|
2018/03/17
| 648 | 1,765 |
<issue_start>username_0: In the following data frame, I need to take the mean of all values in `a, b , c`
```
values <- data.frame(value = c("a", "a", "a", "a", "a",
"b", "b", "b",
"c", "c", "c", "c"), i = c(1,2,3,4,5,6,7,8,9,10,11,12))
```
To achieve this, I tried using `aggregate` function as follows:
```
agg <- aggregate(values, by = list(values$value), FUN = mean)
```
The output does result in the mean values of `i` but I do not think this is the correct way. The output also throws a couple of warnings.
```
Warning messages:
1: In mean.default(X[[i]], ...) :
argument is not numeric or logical: returning NA
2: In mean.default(X[[i]], ...) :
argument is not numeric or logical: returning NA
3: In mean.default(X[[i]], ...) :
argument is not numeric or logical: returning NA
> agg
Group.1 value i
1 a NA 3.0
2 b NA 7.0
3 c NA 10.5
```<issue_comment>username_1: Here is another easy solution using `dplyr`:
```
library(dplyr)
values %>%
group_by(value) %>%
summarise(i = mean(i))
# A tibble: 3 x 2
value i
1 a 3.0
2 b 7.0
3 c 10.5
```
Upvotes: 0 <issue_comment>username_2: Remove the warning message with:
```
aggregate(values, by = list(values$value), FUN = function(x) mean(as.numeric(x)))
```
Which returns:
>
>
> ```
> Group.1 value i
> 1 a 1 3.0
> 2 b 2 7.0
> 3 c 3 10.5
>
> ```
>
>
Alternatively, we could use dot notation:
```
aggregate(. ~ value, values, mean)
```
Or use formulas:
```
aggregate(i ~ value, values, mean)
```
Both return:
>
>
> ```
> value i
> 1 a 3.0
> 2 b 7.0
> 3 c 10.5
>
> ```
>
>
Upvotes: 2 [selected_answer]
|
2018/03/17
| 400 | 996 |
<issue_start>username_0: Code to receive message like "hi i am man" and convert it to upper case "HI I AM MAN".
```
n = read(newsockfd, buffer, 255);
printf("Here is the message: %s\n", buffer);
```
|
2018/03/17
| 460 | 1,284 |
<issue_start>username_0: I am trying to save data (images) in the Firebase database.
```
StorageReference storageReference;
```
But I get an error on this line of code:
`storageReference = FirebaseDatabase.getInstance().getReference();`
It keeps saying that the types are incompatible:
```
Required: com.google.firebase.storage.StorageReference
Found: com.google.firebase.database.DatabaseReference
```
I don't understand where the mistake is or how to solve this problem.
Thanks a lot.
|
2018/03/17
| 893 | 2,713 |
<issue_start>username_0: Can someone please provide a regular expression that allows:
* Up to 18 digits before the decimal point
* Two digits after decimal point
Valid Values: 18.50, 5556.50
Invalid Values: 18.555, 5879.5877 etc.<issue_comment>username_1: I would do this:
```
\b\d{1,18}\.\d{1,2}\b
```
* `\b` captures word boundaries; if you enclose your regex between two `\b` you will restrict your search to words, i.e. text delimited by space, punctuations etc.
* `\d` captures digits
* `\d{1,18}` captures at least 1 digit and maximum 18 digits
* `\.` captures a dot; you have to prepend a backslash to escape the dot character because it is a special symbol for regex engines and it is used to capture everything
* `\d{1,2}` captures at least 1 digit and maximum 2 digits
Live demo [here](https://regex101.com/r/EbXnlU/3), so you can test it as you please.
Upvotes: 1 <issue_comment>username_2: To match up to 18 digits before the decimal point you could use `\d{1,18}`. To match 2 digits after the decimal point you could use `\d{2}`.
Your regex could then look like:
[`\b\d{1,18}\.\d{2}\b`](https://regex101.com/r/CeaMjJ/1/)
That would match:
* `\b` Word boundary
* `\d{1,18}` Match a digit 1-18 times
* `\.` Match a dot
* `\d{2}` Match 2 digits
* `\b` Word boundary
Upvotes: 0 <issue_comment>username_3: For validating inputs you don't need to use `\b` in your regex.
This would help you:
```
^\d{1,18}\.\d{1,2}$
```
With this [Demo](https://regex101.com/r/NIzpKZ/1)
[](https://i.stack.imgur.com/7D4Kz.png)
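For illustration, a quick Python check (a sketch of my own, not tied to any particular language the asker uses) of that anchored pattern against the sample values from the question:
```
import re

pattern = re.compile(r"^\d{1,18}\.\d{1,2}$")
for value in ["18.50", "5556.50", "18.555", "5879.5877"]:
    print(value, bool(pattern.match(value)))
# 18.50 True, 5556.50 True, 18.555 False, 5879.5877 False
```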
Upvotes: 0 <issue_comment>username_4: Thank You Everyone!
I think I have just developed something more simple than creating regular expressions. Here is what I did:
```
Dim mrp_point As Integer = 0
Dim mrp_decimal_count As Integer = 1
Private Sub txtMRP_KeyPress(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyPressEventArgs) Handles txtMRP.KeyPress
If mrp_decimal_count < 2 Then
If mrp_point = 0 Then
e.Handled = Not (Char.IsDigit(e.KeyChar) Or e.KeyChar = "." Or Asc(e.KeyChar) = 8)
If Not txtMRP.Text = "" Then
Dim mrp As String = txtMRP.Text
Dim last As Char = mrp(mrp.Length - 1)
If last = "." Then
mrp_point = 1
End If
End If
ElseIf mrp_point = 1 Then
e.Handled = Not (Char.IsDigit(e.KeyChar) Or Asc(e.KeyChar) = 8)
mrp_decimal_count = mrp_decimal_count + 1
End If
Else
e.Handled = Not (e.KeyChar = " " Or Asc(e.KeyChar) = 8)
End If
End Sub
```
Anyways, Thanks for your time and help! :)
Upvotes: 0
|
2018/03/17
| 989 | 3,129 |
<issue_start>username_0: Hello all, I am trying to override the unlink method of sale order line. The function is called, but the raise UserError validation is not removed.
**Odoo default function:**
```py
@api.multi
def unlink(self):
    if self.filtered(lambda x: x.state in ('sale', 'done')):
        raise UserError(_('You can not remove a sale order line.\nDiscard changes and try setting the quantity to 0.'))
    return super(SaleOrderLine, self).unlink()
```
**Custom Override function:**
```py
@api.multi
def unlink(self):
    if self.filtered(lambda x: x.state in ('sale', 'done')):
        pass
    return super(test, self).unlink()
```
Thanks in advance.
|
2018/03/17
| 623 | 1,991 |
<issue_start>username_0: I want to create a leaflet marker cluster group and I want to add all the markers, and for that I have written the below-mentioned code. But I am getting the error `TypeError: L.markerClusterGroup is not a constructor`
I am not sure whether the error is in the script includes or in the code that I have written.
```js
var markers = L.markerClusterGroup();
markers.addLayer(L.marker(getRandomLatLng(map)));
map.addLayer(markers);
```<issue_comment>username_1: You do not need to include both `leaflet.markercluster.js` and `leaflet.markercluster-src.js`; you just need one of them.
In the `head` section of your HTML, include the Leaflet CSS and JS files, followed by the Leaflet.markercluster CSS and JS files.
Then, in your JavaScript, create a marker cluster group:
```
var markers = L.markerClusterGroup();
```
create some markers:
```
var marker = L.marker(new L.LatLng(0, 0));
```
add the markers to the cluster group:
```
markers.addLayer(marker);
```
and finally add the cluster group to the map:
```
map.addLayer(markers);
```
Take a look at [this JSBin](http://jsbin.com/fimaxap/2/edit?html,js,output) to see a working example.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Here you have a simple working example with Open Street Maps, just try it by running this snippet.
As you may see we need to include two CSS files from `markercluster`, namely `MarkerCluster.css` and `MarkerCluster.Default.css`. We must include the CSS and JS files from `markercluster` only after the inclusion of the leaflet files.
```js
var map = L.map('map').setView([38, -8], 7)
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);
var markers = L.markerClusterGroup();
for (let i=0; i<1000; i++) {
const marker = L.marker([
getRandom(37, 39),
getRandom(-9.5, -6.5)
])
markers.addLayer(marker)
}
map.addLayer(markers);
function getRandom(min, max) {
return Math.random() * (max - min) + min;
}
```
```css
#map {height: 400px}
```
Upvotes: 3
|
2018/03/17
| 563 | 1,989 |
<issue_start>username_0: I am developing an Angular 4 application with Laravel (Restful API). I have tested the API with Postman. It's working fine but when I call the API from Angular 4 I am getting this error:
[](https://i.stack.imgur.com/brybR.jpg)
```
import { Injectable } from '@angular/core';
import { Headers, Http } from '@angular/http';
import { User } from '../models/user';
import 'rxjs/add/operator/toPromise';
@Injectable()
export class AuthService {
private BASE_URL: string = 'http://localhost/fmfb_hr/public/api';
private headers: Headers = new Headers({'Content-Type': 'application/json'});
constructor(private http: Http) {}
login(user): Promise {
let url: string = `${this.BASE\_URL}/login`;
return this.http.post(url, user, {headers: this.headers}).toPromise();
}
}
```<issue_comment>username_1: Add these three lines to CORSify your server. If you used Apache as web server, add these lines to your `.htaccess` file:
```
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Methods "GET,PUT,POST,DELETE"
Header set Access-Control-Allow-Headers "Content-Type, Authorization"
```
Upvotes: 4 <issue_comment>username_2: Try this code; add both headers in one request. I hope this works:
```
import { Injectable } from '@angular/core';
import { Headers, Http } from '@angular/http';
import { User } from '../models/user';
import 'rxjs/add/operator/toPromise';
@Injectable()
export class AuthService {
private BASE_URL: string = 'http://localhost/fmfb_hr/public/api';
private headers: Headers = new Headers({});
constructor(private http: Http) {}
login(user): Promise {
let url: string = `${this.BASE\_URL}/login`;
this.headers.append('Content-Type','application/x-www-form-urlencoded');
this.headers.append('Content-Type','application/json');
return this.http.post(url, user, {headers: this.headers}).toPromise();
}
}
```
Upvotes: 3 [selected_answer]
|
2018/03/17
| 882 | 2,844 |
<issue_start>username_0: Hi, I am doing a unit test for a QuickSort algorithm (i.e., sorting random numbers in random order into ascending order). However, I could not declare an array of numbers in the Collection. Could you help? Thank you.
Also, there is a syntax error on this line:
>
> this.arrayIn[]= arrayIn;
>
>
>
Am I testing this correctly
>
> assertEquals(arrayOut, QuickSort.sort(arrayIn));
>
>
>
```
package week8;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
@RunWith(Parameterized.class)
public class QuickSortTest {
private int[] arrayIn[];
private int length;
private int[] arrayOut[];
public QuickSortTest(int[] arrayIn, int length, int[] arrayOut) {
this.arrayIn[]= arrayIn;
this.length=length;
this.arrayOut[]=arrayOut;
}
@Parameters
public static Collection parameters(){
return Arrays.asList(new Object[][]{
{{1,4,6,3,5,4},6,{1,3,4,4,5,6}}
});
}
@Test
public void test\_quicksort() {
assertEquals(arrayOut, QuickSort.sort(arrayIn));
}
}
```<issue_comment>username_1: The declaration of private attribute `arrayIn`:
```
private int[] arrayIn[];
```
is equivalent to:
```
private int[][] arrayIn;
```
So when assigning a value to it, you should do:
```
this.arrayIn = ...; // without the brackets []
```
I don't understand why you're using the parameterized test. A parameterized test is useful for testing multiple input sets, and you have only one input set: an in-array and a length. So using one test method is enough. Also make sure you use the right assert method: `assertEquals(Object, Object)` is not suitable here; you should use [`assertArrayEquals(int[], int[])`](https://junit.org/junit4/javadoc/4.12/org/junit/Assert.html#assertArrayEquals(int[],%20int[])):
```
public class QuickSortTest {
@Test
public void test_quicksort() {
...
assertArrayEquals(expectedArr, actualArr);
}
}
```
Upvotes: 0 <issue_comment>username_2: Thanks for the tip on assertArrayEquals - I was getting gibberish characters when I used assertEquals.
Thanks for the declaration too! It works now!
Yes, I was going to add more parameters, but since I was still getting an error for the first case, I stopped to check. I have revised my code as below.
```
@Parameters
public static Collection parameters(){
return Arrays.asList(new Object[][]{
{new int[] {1,4,1,6,3,5}, 6, new int[] {1,1,3,4,5,6}},
{new int[] {1,4,1,6,3,5}, 6, new int[] {1,3,1,4,5,6}},
{new int[] {70000,4,1,6,3,5}, 7, new int[] {1,3,4,5,6,70000}},
});
}
@Test
public void test\_quicksort() {
new QuickSort().sort(arrayIn);
assertArrayEquals(arrayOut, arrayIn);
}
```
Upvotes: 1
|
2018/03/17
| 322 | 1,148 |
<issue_start>username_0: I am using Cropper.js to crop and store an image. In the following function I use the cropped blob to convert the image format, but the function is not working properly.
```
cropper.getCroppedCanvas().toBlob(function (blob) {
var formData = new FormData();
var creimag = document.createElement('img');
creimag.src = 'data:image/png;base64,'+ blob;
var processeddata=document.body.appendChild(creimag);
formData.append('file', processeddata);
});
```<issue_comment>username_1: Try URL.[createObjectURL](https://developer.mozilla.org/en-US/docs/Web/API/URL/createObjectURL)
```
creimag.src = URL.createObjectURL(blob);
```
Upvotes: 0 <issue_comment>username_2: The way you are coding this, you are appending an `img` element where a file/`Blob` is expected.
<https://github.com/fengyuanchen/cropperjs#getcroppedcanvasoptions>
This page has an example for this use case. You have to `append` `blob` to the form. Quote from the example code:
```
cropper.getCroppedCanvas().toBlob(function (blob) {
var formData = new FormData();
formData.append('croppedImage', blob);
...
```
Upvotes: 2
|
2018/03/17
| 1,018 | 3,317 |
<issue_start>username_0: I am creating a tree from a list `["abc", "abd", "aec", "add", "adcf"]` using anytree package of python3. In this tree first character of each list element - `a` is a root, and subsequently, other characters are added as their children. When I render tree it looks like:
```
a
├── b
│ ├── c
│ └── d
├── e
│ └── c
└── d
├── d
└── c
└── f
```
But when I render the tree to picture using `to_picture` method, the image is -
[](https://i.stack.imgur.com/sipxy.png)
I don't want the common nodes to be merged, as it is adding unwanted paths to my tree.<issue_comment>username_1: The `anytree` `to_picture()` function uses graphviz to generate the file.
From [here = similar question](https://stackoverflow.com/questions/45861379/how-to-prohibit-nodes-merge-in-graphviz), you can read :
>
> graphviz uses the node id as label. If distinct nodes need to have the same label, the label has to be defined explicitly.
>
>
>
Upvotes: 1 <issue_comment>username_2: The nodes are arranged in `graphviz` using their id. In your case the graph is generated with just the node names, so `graphviz` merges nodes that share a name, which produces the graph you are seeing.
What you really want is a different `id` for each node and a `label` associated with it. The `DotExporter` class has an attribute named `nodeattrfunc` to which we can pass a function or a lambda to generate attributes for the nodes.
Below is how you would do it based on your array
```
import anytree
from anytree import Node, RenderTree
data = ["abc", "abd", "aec", "add", "adcf"]
from anytree.exporter import DotExporter
nodes = {}
first_node = None
for elem in data:
parent_node = None
parent_node_name = ""
for i, val in enumerate(elem):
if i not in nodes:
nodes[i] = {}
key = parent_node_name + val
if key not in nodes[i]:
print("Creating new node for ", key)
nodes[i][key] = Node(key, parent=parent_node, display_name=val)
if first_node is None:
first_node = nodes[i][key]
parent_node = nodes[i][key]
parent_node_name = val
print(nodes)
DotExporter(nodes[0]["a"],
nodeattrfunc=lambda node: 'label="{}"'.format(node.display_name)).to_dotfile("graph.txt")
DotExporter(nodes[0]["a"],
nodeattrfunc=lambda node: 'label="{}"'.format(node.display_name)).to_picture("graph.png")
```
This will generate the below dot file
```
digraph tree {
"a" [label="a"];
"ab" [label="b"];
"bc" [label="c"];
"bd" [label="d"];
"ae" [label="e"];
"ec" [label="c"];
"ad" [label="d"];
"dd" [label="d"];
"dc" [label="c"];
"cf" [label="f"];
"a" -> "ab";
"a" -> "ae";
"a" -> "ad";
"ab" -> "bc";
"ab" -> "bd";
"ae" -> "ec";
"ad" -> "dd";
"ad" -> "dc";
"dc" -> "cf";
}
```
And the below graph
[](https://i.stack.imgur.com/Hfer7.png)
Upvotes: 5 [selected_answer]<issue_comment>username_3: I was about to implement the recommended fix but found that using [UniqueDotExporter](https://anytree.readthedocs.io/en/latest/_modules/anytree/exporter/dotexporter.html#UniqueDotExporter) dealt with the repeated ids perfectly!
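A minimal sketch of that approach, reusing the `nodes` structure from the accepted answer (and assuming `UniqueDotExporter` accepts the same `nodeattrfunc` argument as `DotExporter`):
```
from anytree.exporter import UniqueDotExporter

# UniqueDotExporter assigns each node an automatically generated unique DOT id,
# so nodes with repeated display names are no longer merged.
UniqueDotExporter(nodes[0]["a"],
                  nodeattrfunc=lambda node: 'label="{}"'.format(node.display_name)
                  ).to_picture("graph_unique.png")
```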
Upvotes: 1
|
2018/03/17
| 726 | 1,793 |
<issue_start>username_0: How can I enable the night light setting immediately in Windows 10 via a command?
This is the toggle inside the "Night light settings" page under "Display" settings, not the one in "Display" itself, which only enables the general scheduled setting.
Also a command to set the temperature will be useful.<issue_comment>username_1: You can use `start ms-settings:nightlight`
You can find more here: <https://learn.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app>
Upvotes: -1 <issue_comment>username_2: Directly manipulating the appropriate registry settings seems to work.
For example:
```
rem Disable
reg add HKCU\Software\Microsoft\Windows\CurrentVersion\CloudStore\Store\Cache\DefaultAccount\$$windows.data.bluelightreduction.bluelightreductionstate\Current /v Data /t REG_BINARY /d 0200000088313cdb4584d4010000000043420100d00a02c614dabef0d9dd88a1ea0100 /f
rem Enable
reg add HKCU\Software\Microsoft\Windows\CurrentVersion\CloudStore\Store\Cache\DefaultAccount\$$windows.data.bluelightreduction.bluelightreductionstate\Current /v Data /t REG_BINARY /d 02000000d3f1d47c4584d40100000000434201001000d00a02c61487dad3e6d788a1ea0100 /f
rem Heavy Reduction
reg add HKCU\Software\Microsoft\Windows\CurrentVersion\CloudStore\Store\Cache\DefaultAccount\$$windows.data.bluelightreduction.settings\Current /v Data /t REG_BINARY /d 02000000e113e4af4784d4010000000043420100c20a00ca140e0900ca1e0e0700cf28f625ca320e142e2b00ca3c0e052e0e0000 /f
rem Light Reduction
reg add HKCU\Software\Microsoft\Windows\CurrentVersion\CloudStore\Store\Cache\DefaultAccount\$$windows.data.bluelightreduction.settings\Current /v Data /t REG_BINARY /d 020000006a092c904784d4010000000043420100c20a00ca140e0900ca1e0e0700cf28aa41ca320e142e2b00ca3c0e052e0e0000 /f
```
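A minimal Python sketch of the same write, using the standard `winreg` module and reusing the "Enable" key path and blob from the commands above (the other blobs can be swapped in the same way):
```
import winreg

KEY = (r"Software\Microsoft\Windows\CurrentVersion\CloudStore\Store\Cache\DefaultAccount"
       r"\$$windows.data.bluelightreduction.bluelightreductionstate\Current")
# "Enable" blob copied verbatim from the reg command above
ENABLE_BLOB = bytes.fromhex(
    "02000000d3f1d47c4584d40100000000434201001000d00a02c61487dad3e6d788a1ea0100")

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Data", 0, winreg.REG_BINARY, ENABLE_BLOB)
```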
Upvotes: 4 [selected_answer]
|
2018/03/17
| 906 | 2,818 |
<issue_start>username_0: I am trying to change my bool vector, items[0] to true in game.cpp/.hpp via DeerPark.cpp. However, I cannot figure out why Xcode is throwing this error message. Thank you all for your time and assistance.
This is my error message,
```
No viable overloaded '='
```
and it takes place in DeerPark.cpp when I do:
```
input[1]= true; //and
input[0]= true;
```
Game.hpp
```
#include
#include
class Game
{
private:
std::vector<bool> items = std::vector<bool>(3);
public:
int intRange(int min, int max, int input);
void printMenu();
};
```
Game.cpp
```
#include "Game.hpp"
#include
#include
using namespace std;
void Game::printMenu()
{
items[0] = false;
items[1] = false;
items[2] = false;
}
```
DeerPark.hpp
```
#include
#include "Game.hpp"
class DeerPark : public Space
{
protected:
int feedCounter;
public:
DeerPark();
void feed(Character *person, std::vector<bool> *input);
void get(Character *person, std::vector<bool> *input);
void kick(Character *person);
};
```
DeerPark.cpp
```
#include "DeerPark.hpp"
#include "Space.hpp"
#include
#include "Game.hpp"
using namespace std;
DeerPark::DeerPark() : Space()
{
feedCounter = 0;
}
void DeerPark::feed(Character *person, vector<bool> *input)
{
feedCounter = feedCounter + 1;
if(feedCounter == 3)
{
input[1]= true;
}
}
void DeerPark::get(Character *person, vector<bool> *input)
{
Input[0] = true;
}
void DeerPark::kick(Character *person)
{
person->setStrength(-5);
}
```<issue_comment>username_1: It appears you are writing `Input[0]` with a capital `I` while the parameter is in fact called `input`. You are attempting to assign to something that does not exist.
Specifically here:
```
void DeerPark::get(Character *person, vector<bool> *input)
{
Input[0] = true;
}
```
Change that to `(*input)[0] = true;`
Also, like others are pointing out, since it's passed as a pointer you must dereference the vector before you can subscript it. Also shown in the above snippet. Otherwise you are trying to assign to the pointer. So in short, a typo and an indirection error.
Upvotes: 2 <issue_comment>username_2: The `vector<bool>* input` function parameter is a pointer to a `vector<bool>`, so to access the first element you will need to write `(*input)[0]`. Or (even better) pass by reference:
```
void DeerPark::feed(Character *person, vector<bool> &input)
```
Upvotes: 0 <issue_comment>username_3: In `DeerPark::feed`, the `input` argument is a `vector<bool>*` pointer, and therefore `input[1]` would be a reference to a `vector<bool>`, and `vector<bool>::operator=` doesn't accept a `bool` value. That's why the compiler complains about "No viable overloaded '='".
The correct way to fix this is to dereference the pointer:
```
(*input)[1]=true;
```
Same issue with `DeerPark::get` (after fixing the typo where `Input` should be `input`).
Upvotes: 3 [selected_answer]
|
2018/03/17
| 683 | 2,436 |
<issue_start>username_0: This is a hackerrank problem "to insert a node in the nth position of a linked list". From the problem's description, we are supposed to insert 10 at position 1.
`3 > 5 > 4 > 2 > Null` is an existing linked list and we have to insert 10 at index one. The output should look like this `3 >10 > 5 > 4 > 2 > Null`. This is my code:
```
def InsertNth(head, data, position):
    temp = Node(data)
    if (head == None):
        head = temp  # if head is Null then new node is the head
        temp.next = None
        # return head
    else:
        current = head
        i = 0
        while(i < position - 1):
            # for i in range(position-1):
            current = current.next
            i = i + 1
        next_node = current.next  # keep the address of the next element
        current.next = temp
        temp.next = next_node
    return head
```
The output I get using this code is `2> 10 > 5 > 4 > 2 > Null`, which is wrong. I tried a lot to fix it, but the code looks fine to me. Can anyone please point out my error, with some explanation?
|
2018/03/17
| 1,367 | 4,243 |
<issue_start>username_0: I have the following dictionary
```
{'a':{'se':3, 'op':2}, 'b':{'se':4, 'op':3}}
```
I need to convert it as follows:
```
{'se':{'a':3, 'b': 4}, 'op':{'a':2,'b':3}}
```
This is the following code I could come up with:
```
from collections import defaultdict
a = {'a':{'se':3, 'op':2}, 'b':{'se':4, 'op':3}}
b = defaultdict(dict)
for key1, value1 in a.items():
    for key2, value2 in value1.items():
        b[key2].update({key1: value2})
```
The following gets the job done but I am fond of one-liners. Is there a one-liner to the above or even a better way (better performance such as to eliminate two loops)?<issue_comment>username_1: Turning this into a one-liner is doable in multiple ways, but all of them are going to be ugly. For example:
```
# Gather k2-k1-v2
g = ((k2,k1,v2) for k1,v1 in a.items() for k2,v2 in v1.items())
# Sort by k2
sg = sorted(g)
# Group by k2
gsg = itertools.groupby(sg, key=operator.itemgetter(0))
# Turn it into a dict of dicts
b = {k: {ksk[1]: ksk[2] for ksk in group} for (k, group) in gsg}
```
Putting it all together:
```
b = {k: {ksk[1]: ksk[2] for ksk in group}
for (k, group) in itertools.groupby(
sorted(((k2,k1,v2) for k1,v1 in a.items() for k2,v2 in v1.items())),
key=operator.itemgetter(0))}
```
Well, it's one expression, and you can put it all on one line if you don't know how many columns it is. But it's certainly not as readable as your original version.
And as for performance? It takes about twice as long from a quick test. Coldspeed's version is somewhere in between. Changing one list to an iterator makes it slightly slower on small dicts like your original example, but much faster on big ones, but either way, it didn't beat the original in any tests, and got slower than the itertools version with very big values. But of course if performance actually matters, you should measure them on your real data.
And if you think about it, there can't be any way to eliminate the nested loop (except by replacing one of the loops with something equivalent, like recursion, or unrolling it based on the fact that your example happens to have exactly two items in each inner dict—which presumably isn't true for your real problem). After all, you have to visit each key of each inner dictionary, which you can't do without a loop over the outer dictionary. It is possible to turn those loops into comprehension loops instead of statements, or C loops inside `map` or `list` (or maybe inside some Pandas function?). Both my version and Coldspeed's put the nested loop in a comprehension, and at least one of the additional linear loops (which don't add to the algorithmic complexity, but probably do add significantly to the real-life time for small collections like your example) are buried inside a builtin function. But speeding up the looping while doing more overall work isn't always a worthwhile tradeoff.
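To make the Pandas aside concrete (a sketch, not a performance claim, and assuming every inner dict has the same keys):
```
import pandas as pd

a = {'a': {'se': 3, 'op': 2}, 'b': {'se': 4, 'op': 3}}
# Outer keys become columns and inner keys become the index;
# orient='index' then emits {index: {column: value}}, i.e. the inverted dict.
b = pd.DataFrame(a).to_dict(orient='index')
print(b)  # {'se': {'a': 3, 'b': 4}, 'op': {'a': 2, 'b': 3}}
```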
Upvotes: 1 [selected_answer]<issue_comment>username_2: So this improves @cs95, and provides a more readable 1 line.
There are 2 lines here - but one may already have the inner keys ('k').
The point is that you can use the dict 'a' to transfer the values.
```
a = {'a':{'se':3, 'op':2}, 'b':{'se':4, 'op':3}}
k = list(a.values())[0].keys()
b = {i: {o: a[o][i] for o in a} for i in k} # one line dict inversion
print(f'{a}\n{b}')
```
However, if you are doing this, you may not be using the best data structure; instead you could use a dictionary keyed by tuples, such as
```
a = {('a', 'se'):3, ('a', 'op'):2, ('b', 'se'):4, ('b', 'op'):3}
```
You can then sort by tuple position, and filter by tuple key
```
c = sorted(a, key=lambda x:x[1])
d = sorted(a, key=lambda x:x[0])
e = list(filter(lambda x:x[0] == 'a', a)) # list
print(f'a: {a}\nc: {c}\nd: {d}\ne: {e}')
```
yields
```
a: {('a', 'se'): 3, ('a', 'op'): 2, ('b', 'se'): 4, ('b', 'op'): 3}
c: [('a', 'op'), ('b', 'op'), ('a', 'se'), ('b', 'se')]
d: [('a', 'se'), ('a', 'op'), ('b', 'se'), ('b', 'op')]
e: [('a', 'se'), ('a', 'op')]
```
Of course you can still access objects using keys:
```
x = a['a', 'op'] # returns 2
```
Even better maybe, if you are using a fixed set of keys, is to use a tuple of enums rather than str.
Upvotes: 2
|
2018/03/17
| 1,392 | 4,280 |
<issue_start>username_0: I just want to recover the table to the original structure. However, it has an error and I don't know how to debug it.
```
rm(list=ls())
Mytable<-function(mytable){
rows<-dim(mytable)[1]
cols<-dim(mytable)[2]
datatable<-NULL
for(i in 1:rows){
for(j in 1:mytable$N[i])
{
rowdata=mytable[i,c(1:2)]
datatable=cbind(datatable,rowdata)
}
}
row.names(datatable)<-c(1:dim(datatable)[1])
return(datatable)
}
sex<-rep(c('M','F'),each=4)
grade<-rep(c('B','C','D','E'),2)
number<-c(2,11,12,5,2,13,10,3)
table<-data.frame(S=sex,G=grade,N=number,stringsAsFactors=F)
mydata<-Mytable(table)
```
|
2018/03/17
| 294 | 903 |
<issue_start>username_0: ```
<!--
function PopUp() {
var TheURL = "http://www.widgetsu.com/";
var PopWidth = "1000";
var PopHeight = "1000";
var PosTop = "";
var PosLeft = "";
var SettingS = "toolbar=no,menubar=no,scrollbars=no,resizable=yes,width=" + PopWidth + ",height=" + PopHeight + ",top=" + PosTop + ",left=" + PosLeft;
var PopUp = open(TheURL,"box",SettingS);
}
window.onclick=PopUp;
// -->
```
The onclick pop-up fires every time I click; how can I make it fire only once per page load?<issue_comment>username_1: You could have some sort of global variable to keep track of if it's been clicked or not.
For example:
```
<!--
var clicked = false;
function PopUp() {
if (!clicked) {
// Your previous function code here
clicked = true;
}
}
window.onclick=PopUp;
// -->
```
Upvotes: 1 <issue_comment>username_2: Use `onload="PopUp()"` in the body tag.
Upvotes: 0
|
2018/03/17
| 196 | 704 |
<issue_start>username_0: I want to change the hint property of an `edittext` field from my activity layout in a function I created in mainActivity.kt. The previous value of hint was from `string.xml` in `resources -> values -> string.xml` file.
How do I access `string` in main and change its value to something else?
|
2018/03/17
| 234 | 756 |
<issue_start>username_0: I have a variable that comes from the "file\_get\_contents()" function. Using this variable with an if function always gives false.
```
$content = file_get_contents($url);
echo $content; // Echos 1
if ($content == "1") {
// It doesn't matter if I use 1, '1' or "1"
echo "True";
}
else {
echo "False"; // Always echos this one.
}
```
|
2018/03/17
| 1,131 | 4,038 |
<issue_start>username_0: According to [cppreference](http://en.cppreference.com/w/cpp/container/unordered_set/emplace),
>
> Rehashing occurs only if the new number of elements is greater than `max_load_factor()*bucket_count()`.
>
>
>
In addition, [[unord.req]/15](http://www.eel.is/c++draft/unord.req#15) has similar rules:
>
> The `insert` and `emplace` members shall not affect the validity of iterators if `(N+n) <= z * B`, where `N` is the number of elements in the container prior to the insert operation, `n` is the number of elements inserted, `B` is the container's bucket count, and `z` is the container's maximum load factor.
>
>
>
However, consider the following example:
```
#include
#include
int main()
{
std::unordered\_set s;
s.emplace(1);
s.emplace(42);
std::cout << s.bucket\_count() << ' ';
std::cout << (3 > s.max\_load\_factor() \* s.bucket\_count()) << ' ';
s.emplace(2);
std::cout << s.bucket\_count() << ' ';
}
```
With GCC 8.0.1, it outputs
```none
3 0 7
```
This means after emplacing 2, a rehashing occurs though the new number of elements (3) is **not** greater than `max_load_factor()*bucket_count()` (note the second output is 0). Why does this happen?<issue_comment>username_1: From **26.2.7 Unordered associative containers**
>
> The number of buckets is automatically increased as elements are added to an unordered associative container, *so that the average number of elements per bucket is kept below a bound*.
>
>
>
> ```
> b.load_factor() Returns the average number of elements per bucket.
>
> b.max_load_factor() Returns a positive number that the container attempts
> to keep the load factor less than or equal to. The container
> automatically increases the number of buckets as necessary
> to keep the load factor below this number.
>
> ```
>
>
I agree, the first part of description of `max_load_factor` suggests that the load factor could reach that value, but in the second part and in the foregoing quote it's clearly stated that the load factor will be kept below this number. So, you have found a mistake in cppreference.
In your code, without rehashing, after the third insertion your would have `s.load_factor` equal to `s.max_load_factor()`.
EDIT: To answer changes in the question i checked available to me VS implementation of `unordered_set`, it is implemented as
>
>
> ```
> // hash table -- list with vector of iterators for quick access
>
> ```
>
>
and then you ask for an iterator, using e.g. `lower_bound`, you get iterator to the list element, which doesn't get invalidated by rehashing. So, it agrees with [unord.req]/15.
Upvotes: 2 <issue_comment>username_2: You're confusing the fact that the `bucket_count()` has changed with the invalidation of iterators. Iterators are only invalidated in case of rehash, which will not be one if new number of elements is less than or equal to `max_load_factor()*bucket_count()` (btw if `size()>max_load_factor()*bucket_count()` rehashing *can* occur, but doesn't have to).
As this was not the case in your example, no rehashing occurred and iterators remain valid. However, the bucket count had to be increased to accommodate the new element.
I experimented a bit (expanding on your code) with Mac OSX's clang, which kept the iterators valid even after `rehash(size())` (which did change the element's bucket association, tested directly by iterating over the buckets and printing their contents).
Upvotes: 2 <issue_comment>username_3: The rehash condition is changed since [Issue 2156](https://cplusplus.github.io/LWG/issue2156). Before the change, a rehash occurs when the new number of elements is **no less than** `max_load_factor()*bucket_count()`, and it becomes "greater than" after the change.
GCC 8.0.1 does not implement this change. There is already a [bug report](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87135), and it has been fixed in GCC 9.
Upvotes: 3 [selected_answer]
|
2018/03/17
| 1,158 | 4,437 |
<issue_start>username_0: I want to set a script tag's `src` from a value stored in **local storage**, i.e. load the script whose file path is saved in local storage.<issue_comment>username_1: You can use the [HTMLScriptElement](https://developer.mozilla.org/en-US/docs/Web/API/HTMLScriptElement) API. Check [Dynamically importing scripts](https://developer.mozilla.org/en-US/docs/Web/API/HTMLScriptElement#Dynamically_importing_scripts). I used their code to show you how you can do the same.
```
function loadError (oError) {
throw new URIError("The script " + oError.target.src + " is not accessible.");
}
function importScript (sSrc, fOnload) {
var oScript = document.createElement("script");
var oHead = document.head || document.getElementsByTagName("head")[0];
oScript.type = "text\/javascript";
oScript.onerror = loadError;
if (fOnload) { oScript.onload = fOnload; }
oHead.appendChild(oScript);
oScript.src = sSrc;
oScript.defer = true;
}
importScript(localStorage.getItem('languageFilePath'));
```
A little background what [*defer*](https://html.spec.whatwg.org/multipage/scripting.html#attr-script-defer) attribute does..
>
> For classic scripts, if the `async` attribute is present, then the classic script will be fetched in parallel to parsing and evaluated as soon as it is available (potentially before parsing completes). If the `async` attribute is not present but the `defer` attribute is present, then the classic script will be fetched in parallel and evaluated when the page has finished parsing. If neither attribute is present, then the script is fetched and evaluated immediately, blocking parsing until these are both complete.
>
>
>
Upvotes: 2 <issue_comment>username_2: You can do something as simple as this, using the [DOMContentLoaded](https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded) event.
As soon as the initial HTML document has been completely loaded and parsed, the event fire and your script will load.
```
document.addEventListener('DOMContentLoaded', function() {
var s = document.createElement('script');
s.setAttribute('type', 'text/javascript');
s.setAttribute('src', localStorage.getItem('languageFilePath'));
document.getElementsByTagName('head')[0].appendChild(s);
})
```
---
Here is some more read on how to load script dynamically
* [Dynamically load JS inside JS](https://stackoverflow.com/questions/14521108/dynamically-load-js-inside-js)
---
The above script fragment were taken/simplified from what I personally use for my web pages, to support older browsers as well.
```
var DomLoaded = {
done: false, onload: [],
loaded: function () {
if (DomLoaded.done) return;
DomLoaded.done = true;
if (document.removeEventListener) { document.removeEventListener('DOMContentLoaded', DomLoaded.loaded, false); }
for (i = 0; i < DomLoaded.onload.length; i++) DomLoaded.onload[i]();
},
load: function (fireThis) {
this.onload.push(fireThis);
if (document.addEventListener) {
document.addEventListener('DOMContentLoaded', DomLoaded.loaded, false);
} else {
/*IE<=8*/
if (/MSIE/i.test(navigator.userAgent) && !window.opera) {
(function () {
try { document.body.doScroll('up'); return DomLoaded.loaded(); } catch (e) { }
if (/loaded|complete/.test(document.readyState)) return DomLoaded.loaded();
if (!DomLoaded.done) setTimeout(arguments.callee, 10);
})();
}
}
/* fallback */
window.onload = DomLoaded.loaded;
}
};
DomLoaded.load(function () {
var s = document.createElement('script');
s.setAttribute('type', 'text/javascript');
s.setAttribute('async', true);
s.setAttribute('defer', true);
s.setAttribute('src', '/demo_files/script.js');
document.getElementsByTagName('head')[0].appendChild(s);
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can do it simply by assigning an id to the script tag and then setting its `src` attribute in the body's `onload` handler, like the code below:
```
<html>
<head>
<script type="text/javascript" id="script_tag">
function assignData() {
    document.getElementById('script_tag').setAttribute('src',
        window.localStorage.getItem('languageFilePath'));
}
```
Upvotes: 0
|
2018/03/17
| 320 | 1,073 |
<issue_start>username_0: I have sorted customer names in alphabetical order and displayed them as a checkbox list on a page with the MySQL query below:
```
SELECT * FROM customer_details WHERE customer_status ='1' order by customer_name ASC
```
But this query wrongly outputs "A.G.S Stores" after "Z Mart", as shown below:
```
White Field Super Market
Xpress Mart
Yellow Bell Super Bazar
Z Mart
A.G.S Stores
A.N.S. Pandian Super Market
A.R Super Market
Aadithan Traders
```
Can anyone give a solution for this issue?<issue_comment>username_1: `ASC` is the default. What you can do is try `DESC`: check whether it sorts in descending order, then remove `DESC` from the `ORDER BY` and use `ORDER BY` with `ASC` (or omit `ASC`).
Upvotes: 2 <issue_comment>username_2: Hello, I tried your data in MySQL with the MyISAM engine for the table;
please check my results in the images below; the ordering is correct.
[](https://i.stack.imgur.com/x9H29.png)
[](https://i.stack.imgur.com/XHMkR.png)
Upvotes: 1
|
2018/03/17
| 543 | 2,085 |
<issue_start>username_0: I am trying to read from a mongo source and apply some transformation on it. But when I try to apply any transformation or even do a `df.count()`. I get the following error.
```
MongoTypeConversionException: Cannot cast DATE_TIME into a NullType
```
So I understand a column has a mixed data type, in this case, `NULL` and `TIMESTAMP`. I am using a `mongo-spark-connector_2.11` version `2.2.1` which says **ConflictTypes** has a base type as **StringType**.
So a workaround is to pass schema, column names and types so that the connector won't infer the types itself. But how do I pass schema?
Here's my Java code to read from mongo
```
DataFrameReader x = ss.read().format("com.mongodb.spark.sql.DefaultSource").options("spark.mongodb.input.uri", "");
Dataset = x.load();
```
`ss` is SparkSession object.<issue_comment>username_1: Please import Java SQL Date, it will resolve the issue.
```
import java.sql.Date;
//import java.util.Date;
```
Error Type:
```
Caused by: com.mongodb.spark.exceptions.MongoTypeConversionException: Cannot cast DATE_TIME into a StructType(StructField(date,IntegerType,false), S
```
Upvotes: 2 <issue_comment>username_2: The MongoDB Connector for Spark by default uses a sample of 1000 documents per field to build its schema. So if a field **contains mixed data types, such as Null and datetime**, the connector may not sample any of the datetime values and will treat that field as a Null type. Then, when you call the count method, the connector tries to load data from MongoDB into the specified data type in the Spark dataframe, which causes this error: "MongoTypeConversionException Cannot cast DATE\_TIME into a NullType"
**The Solution:**
**Add the sample data of MongoDB Connector for Spark to build a correct schema.** For example, in pyspark:
```
df = session.read.format("com.mongodb.spark.sql.DefaultSource").option('sampleSize', 50000).load()
```
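Another option, sketched here as an assumption (assuming the connector honours a user-supplied schema, and that `spark` is an existing SparkSession), is to skip sampling entirely by passing an explicit schema through Spark's standard `DataFrameReader.schema()`, which also answers the "how do I pass schema?" part of the question:
```
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Hypothetical schema: adjust the field names and types to match your collection
schema = StructType([
    StructField("name", StringType(), True),
    StructField("created_at", TimestampType(), True),
])
df = (spark.read.format("com.mongodb.spark.sql.DefaultSource")
      .schema(schema)
      .load())
```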
Upvotes: 2 <issue_comment>username_3: I used import java.sql.Timestamp as my mongo field had Timestamp and everything worked fine.
Upvotes: 0
|
2018/03/17
| 370 | 1,486 |
<issue_start>username_0: Can we add a rotate feature to a plugin in the BuildFire mobile app?
Right now the BuildFire mobile app only displays in portrait mode; landscape / page rotation is not enabled.
Is this feature available or not?
|
2018/03/17
| 512 | 1,733 |
<issue_start>username_0: I have HTML structured like that:
```
###
title
Posted on 15 March 2018
lorem ipsum, bla, bla, bla
```
If there is any way to capture.
>
> lorem ipsum, bla, bla, bla
>
>
>
part without "touching" or changing anything else `.in-featuredlisting`?
I tried `.in-featuredlisting p +` but this one doesn't work, because we don't have copy wrapped in element.<issue_comment>username_1: 1.Use `.contents()` along with `.text()` to get all text inside div.
2.At the same time neglect it's children elements text using `.not()` and `.children()`
3.use `$.trim()` to remove extra spaces.
```js
console.log($.trim($(".in-featuredlisting").contents().not($(".in-featuredlisting").children()).text()));
```
```html
###
title
Posted on 15 March 2018
lorem ipsum, bla, bla, bla
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: You can follow these steps:
1. First clone the parent element inside which the `text` with no tag is presented.
2. Then select all the children of this element.
3. Now, remove all the element from it which will remove the element that has HTML tags
4. Now, again go back to selected element so that you can reference that to get the text (which was not removed in step 3)
5. Finally, get the text from the parent element which you can further `trim` to remove the white spaces preceding and succeeding the text.
```js
var el = $(".in-featuredlisting");
var res = el.clone() //step 1
.children() //step 2
.remove() //step 3
.end() //step 4
.text(); //step 5
console.log(res.trim());
```
```html
###
title
Posted on 15 March 2018
lorem ipsum, bla, bla, bla
```
Upvotes: 1
|
2018/03/17
| 790 | 2,425 |
<issue_start>username_0: the following code is a representation of a hashmap.
I am trying to update the values of 2 keys in this has map one by one, but on the second update both the keys are getting updated (please see the code comments to understand). I am not understanding what am i doing wrong here.
```
public class TestClass {
public static void main(String[] args) {
HashMap> MainHashMap = new HashMap>();
HashMap hmc1 = new HashMap();
HashMap hmc2 = new HashMap();
HashMap hmc3 = new HashMap();
HashMap updateHM = new HashMap();
hmc1.put("1", "AC");
hmc1.put("2", "TV");
hmc2.put("1", "Fan");
hmc2.put("2", "Light");
hmc3.put("1", "Iron");
hmc3.put("2", "Geyser");
MainHashMap.put("1", hmc1);
MainHashMap.put("2", hmc2);
MainHashMap.put("3", hmc3);
System.out.println(MainHashMap); // printing the mian hasp map with three hash maps
updateHM.put("1", "Bag");// creating a temp hash map
updateHM.put("2", "pen");
MainHashMap.put("1", updateHM); // updating the key 1 of the main hash map
updateHM.put("1", "jeet");// changing the temp hash map
updateHM.put("2", "vishu");
MainHashMap.put("2", updateHM);// updating the key 2 of the main hash map
System.out.println(MainHashMap);// we see that both the keys are updated
}
}
```
output:
```
{1={1=AC, 2=TV}, 2={1=Fan, 2=Light}, 3={1=Iron, 2=Geyser}}
{1={1=jeet, 2=vishu}, 2={1=jeet, 2=vishu}, 3={1=Iron, 2=Geyser}}
```
Pleas help.
Thanks<issue_comment>username_1: You are updating the `updateHM` Map object which is already stored in your `mainMap` against key `1` with value `{1=Bag, 2=pen}`. NOw the following line of code updates the `updateHM`.
```
updateHM.put("1", "jeet");// changing the temp hashmap
updateHM.put("2", "vishu");
```
HashMap objects are mutable. So any operation on the HashMap object may affect where it has used. So, if you want to solve your problem then create a new object of HashMap every time you want to update.Use the below code before updating the key `2` in mainMap
```
updateHM = new HashMap<>();
updateHM.put("1", "jeet");// changing the temp hash map
updateHM.put("2", "vishu");
MainHashMap.put("2", updateHM);
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: A new `HashMap` instance is needed here before the update:
```
updateHM = new HashMap();
updateHM.put("1", "jeet");
updateHM.put("2", "vishu");
MainHashMap.put("2", updateHM);
```
Upvotes: 0
|
2018/03/17
| 1,031 | 4,338 |
<issue_start>username_0: I'm using NativeScript's [`push-plugin`](https://github.com/NativeScript/push-plugin) to receive notifications [sent from Firebase Cloud Messaging (FCM)](https://fcm.googleapis.com/fcm/send):
```
public constructor(
private _services: PushService,
) {
const settings = {
senderID: "XXXXXXXX",
badge: true,
clearBadge: true,
sound: true,
alert: true,
interactiveSettings: {
actions: [{
identifier: 'READ_IDENTIFIER',
title: 'Read',
activationMode: "foreground",
destructive: false,
authenticationRequired: true,
}, {
identifier: 'CANCEL_IDENTIFIER',
title: 'Cancel',
activationMode: "foreground",
destructive: true,
authenticationRequired: true,
}],
categories: [{
identifier: 'READ_CATEGORY',
actionsForDefaultContext: ['READ_IDENTIFIER', 'CANCEL_IDENTIFIER'],
actionsForMinimalContext: ['READ_IDENTIFIER', 'CANCEL_IDENTIFIER'],
}],
},
notificationCallbackIOS: data => {
console.log("DATA: " + JSON.stringify(data));
},
notificationCallbackAndroid: (message, data, notification) => {
console.log("MESSAGE: " + JSON.stringify(message));
console.log("DATA: " + JSON.stringify(data));
console.log("NOTIFICATION: " + JSON.stringify(notification));
alert(message);
},
};
PushNotifications.register(settings, data => {
console.log("REGISTRATION ID: " + data);
this.toDeviceToken = data;
PushNotifications.onMessageReceived(settings.notificationCallbackAndroid);
}, error => {
console.log(error);
});
}
```
Using this code, I can receive the success message in the `notificationCallbackAndroid` method when I send notifications from Postman. This is what I get in the logs:
```
{
"multicast_id": 8382252195536318747,
"success": 1,
"failure": 0,
"canonical_ids": 0,
"results": [
{
"message_id": "0:1521270133609562%161a06bff9fd7ecd"
}
]
}
```
However, no notification shows up in the notification bar.
Do we have to use a separate method to show the notification?<issue_comment>username_1: The [`push-plugin`](https://github.com/NativeScript/push-plugin) will not create/display a notification automatically; it will just notify you when a notification is received.
You need to use another plugin, [`nativescript-local-notifications`](https://github.com/EddyVerbruggen/nativescript-local-notifications), to create/display it. So, inside your `notificationCallbackAndroid` and `notificationCallbackIOS` you should have something like this:
```
LocalNotifications.schedule([{
title: notificationData.title,
body: notificationData.body,
}]);
```
Also, `onMessageReceived` has been deprecated; now you should only use `notificationCallbackAndroid` and/or `notificationCallbackIOS`. So, once you update `push-plugin` to the latest version, you can get rid of this line:
```
PushNotifications.onMessageReceived(settings.notificationCallbackAndroid);
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: To add, notifications seem to work perfectly on iOS, but on Android they only arrive when the application is active.
Some forums state that the push payload needs a "notification" structure to be added to the notification tray automatically, but I looked over the commit history and it looks like an (attempted) fix has been made.
Either way, I'm using the same push-sending SDK with another app development SDK (Xamarin) and don't want custom code that sends a different payload for NativeScript apps.
I will try username_1's answer as Android-only code and see if it helps. If not, I will try to revert to an older version of this plugin, as I understand the entire problem is a regression.
PS. I see that this plugin has been obsoleted in favor of the firebase plugin, but I still like the simplicity (and smaller footprint?) of this one when only GCM and APS are needed.
**UPDATE**
I installed the local-notifications plugin, added code to use it on Android only, and all is well.
Upvotes: 1
|
2018/03/17
| 651 | 2,668 |
<issue_start>username_0: I'm currently using the latest Spring Tool Suite (running on jdk 1.8) and I've downloaded the source code from the tutorial in this link: [accessing-data-mysql](https://spring.io/guides/gs/accessing-data-mysql/)
I tried running it on Tomcat and it shows `INFO: Server startup in 4581 ms`, but when I access `localhost:8080/demo/all`, the page shows "HTTP Status 404" and the console doesn't show any additional logs. The "User" table wasn't automatically created either when I checked my database.
I've just started learning Spring setup from scratch and I'm feeling stumped... Thank you in advance; any ideas on how to solve my problem would be highly appreciated.
Thank you.
|
2018/03/17
| 574 | 2,039 |
<issue_start>username_0: I am currently obtaining data in `HTML` format from the server via `AJAX`. The jQuery code looks like this, where the data obtained from the server (`result`) is a chunk of `HTML`.
**jQuery**
```
$.ajax({
    url: "/home",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ /* some data sent... */ })
}).done(function(result) {
    // "result" is HTML data
    document.getElementById("post").innerHTML = result;
    // I would like to get another piece of data from the server here.
});
```
Apart from that `HTML`, I would also like to receive another piece of data, which is just a number. How can I obtain both the `HTML` and the number at once using `AJAX`? Here is the ExpressJS code:
**NodeJS/Express**
```
router.post("/home", function(req, res) {
User.findById(id).exec(function(err, user){
res.render("home", {user: user});
//Do I need to add something here (like res.render), or modify the above res.render?
});
});
```<issue_comment>username_1: Have the function return an object so you can return both the HTML and the number value:
```
[DataContract]
public class ReturnData {
    [DataMember]
    public string HTML { get; set; }
    [DataMember]
    public string Number { get; set; }
}
```
Upvotes: 0 <issue_comment>username_2: You can just put the number in your `res.render` call as well.
```
res.render("home", { user: user, number: number }
```
Then if you need that number in your client-side script, add something to that effect in your ejs.
```
const myNumber = parseInt(<%= number %>)
```
More on doing that here: [How to get a data on the client side with javascript from the render express method?](https://stackoverflow.com/questions/16400876/how-to-get-a-data-on-the-client-side-with-javascript-from-the-render-express-met/42015384#42015384)
Basically, when the EJS template is rendered it just stamps the values into the markup without knowing what they mean. The resulting HTML is then sent to the browser, where the JS runs.
Upvotes: 1
|
2018/03/17
| 530 | 1,549 |
<issue_start>username_0: So I know that `f(n)=n^n` grows faster than `g(n)=n!` and `t(n)=2^n` grows more slowly,
but I can't find any function that has the same order as n! and is not a factorial function.
Do we have such a function, one which is Θ(n!) and is not a factorial? If such functions exist, can you mention a few?<issue_comment>username_1: Take a look at [Double Factorial](https://en.wikipedia.org/wiki/Double_factorial) and try to compare them.
Upvotes: -1 <issue_comment>username_2: Yes - one of the most famous [asymptotic equivalents](https://en.wikipedia.org/wiki/Asymptotic_analysis) of `n!` is given by its [Stirling's approximation](https://en.wikipedia.org/wiki/Stirling%27s_approximation), namely:
```
(1) n! ~ sqrt(2.pi.n).(n/e)^n
```
Note the use of *equivalence* which is *stronger* than the Θ relationship. The former implies the latter:
```
(2) f(n) ~ g(n) => f(n) = Θ(g(n))
```
With (1) and (2) you get:
```
n! = Θ(sqrt(2.pi.n).(n/e)^n)
```
Since you are asking for a Θ approximation rather than an equivalence, you can create as many functions as you want, for instance multiplying by `2 - sin(n)` (which isn't particularly useful!):
```
n! = Θ((2 - sin(n)).sqrt(2.pi.n).(n/e)^n)
```
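If you want to see this numerically, here is a small sketch (class name is arbitrary) that prints the ratio n! / (sqrt(2.pi.n).(n/e)^n); the ratio tends to 1 as n grows:

```
public class StirlingCheck {
    public static void main(String[] args) {
        double factorial = 1.0;
        for (int n = 1; n <= 20; n++) {
            factorial *= n; // running value of n!
            double stirling = Math.sqrt(2 * Math.PI * n) * Math.pow(n / Math.E, n);
            // The ratio approaches 1 (it is roughly 1 + 1/(12n)); at n = 20 it is about 1.0042.
            System.out.printf("n=%2d  n!/stirling = %.6f%n", n, factorial / stirling);
        }
    }
}
```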
Upvotes: 3 [selected_answer]<issue_comment>username_3: A simple example is computing all possible permutations of an array:
* n choices for first element
* n - 1 for 2nd
* n - 2 for 3rd
* and so on ...
In total there are n(n - 1)(n - 2)... = n! permutations (if elements are unique or tagged).
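As a small illustrative sketch (names are arbitrary), the following walks that choice tree and counts the leaves, confirming the n! total:

```
public class PermutationCount {
    // Counts complete permutations: n choices for the first slot, n - 1 for the next, and so on.
    static long count(boolean[] used, int remaining) {
        if (remaining == 0) {
            return 1; // one full permutation built
        }
        long total = 0;
        for (int i = 0; i < used.length; i++) {
            if (!used[i]) {
                used[i] = true;
                total += count(used, remaining - 1);
                used[i] = false;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int n = 5;
        System.out.println(count(new boolean[n], n)); // prints 120, i.e. 5!
    }
}
```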
Upvotes: 1
|
2018/03/17
| 636 | 2,063 |
<issue_start>username_0: ```
OleDbConnection con;
OleDbDataReader read;
OleDbCommand cmd;
private void btn_clicked(Object sender, EventArgs e)
{
con = new OleDbConnection(WindowsFormsApplication2.Properties.Settings.Default.DBConString);
con.Open();
cmd = new OleDbCommand( " SELECT * FROM tbPeople WHERE country = ' " textbox.Text " ' ", con );
read = cmd.ExecuteReader();
while(read.Read())
{
textboxTest.text += read["lastname"].toString() + ", " + read["firstname"].toString() + "\n";
}
}
```
I'm new to Entity Framework, and I would like this code to be converted.
Thanks in advance :)<issue_comment>username_1: Assuming you have already created edmx data model and have data context as `dbContext`, your equivalent query will look like somewhat :
```
var country=textbox.text;
var peoples=dbContext.tbPeople.where(c=>c.country==country).ToList()
foreach(var people in peoples)
{
textboxTest.text += people.lastname + ", " + people.firstname + "\n";
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: Assuming when you create your entities you don't prefix with tb and also use normal casing
```
using(var db = new MyAwesomeContext())
{
var names = db.People.Where(x => x.Country == textbox.Text)
.Select(x => $"{x.LastName}, {x.FirstName}")
.ToList();
textboxTest.text = string.Join("\n", names);
}
```
---
**Additional Resources**
[Enumerable.Where Method (IEnumerable, Func)](https://msdn.microsoft.com/en-us/library/bb534803(v=vs.110).aspx)
>
> Filters a sequence of values based on a predicate.
>
>
>
[Enumerable.Select Method (IEnumerable, Func)](https://msdn.microsoft.com/en-us/library/bb548891(v=vs.110).aspx)
>
> Projects each element of a sequence into a new form.
>
>
>
[String.Join Method (String, String[])](https://msdn.microsoft.com/en-us/library/57a79xd0(v=vs.110).aspx)
>
> Concatenates all the elements of a string array, using the specified
> separator between each element.
>
>
>
Upvotes: 1
|